Cg fragment/vertex shaders in Houdini

Member
252 posts
Joined: July 2005
Offline
Good point. Perhaps the math is the same whether it runs on the graphics chip or on the CPU… That would be a good question for Nvidia and for the MI folks.

But then again, who needs render farms if the card speeds your rendering up so darn much?

-Craig
Member
46 posts
Joined: July 2005
Offline
craiglhoffman

But then again, who needs render farms if the card speeds your rendering up so darn much?

This won't happen in the next few years. Graphics cards' performance breaks down when it comes to high-precision (32-bit floating point) calculations. I know this because I implemented a fluid solver on a GeForce FX using NVIDIA's Cg for my diploma thesis, and the CPU variant performed better than the optimized GPU version. A friend of mine wrote a path tracer and implemented the same renderer both for the GPU and for a Pentium 4 using SSE; he also couldn't show that the GPU was faster than the CPU. Even worse was the performance of a hybrid tracer that was supposed to combine the best of both worlds: the communication between the CPU and the GPU was such a big bottleneck that this version performed worst of all three variants.
Member
4140 posts
Joined: July 2005
Offline
Yah, the camp that believes “we'll be doing all of our final renders on graphics cards” does not count me in its member list. I think you can get some purty piccies from the latest/greatest cards, but they'll never catch up with software rendering simply because they're both moving targets. Really it's all about moving certain calculations away from the CPU, and it's in the best interest of the graphics card companies to suggest they're better than a renderer on a CPU. That's simply not true (I believe at Siggie NVidia was boasting about an entire film like Monsters Inc or some such being done on NVidia cards - whatta load). It's all just number crunching after all, and when the basic algorithms change as new techniques come up, the hardwired graphics card solutions will be less useful. I think their strength lies in being truly wonderful for previs as you're working. Some of it really looks great. Someday the cheap bastards here will spring for some more current hardware and…oh wait, that's me…*cough*…

Cheers,

J.C.
John Coldrick
Member
301 posts
Joined: July 2005
Offline
I understand that one of the weaknesses of current hardware shaders is that light sources do not benefit from them. I also believe that only a limited number of light sources can be handled by the hardware, unlike with a software renderer.

One of the important benefits of software rendering, be it Mantra, mental ray or RenderMan, is the support for tweaking the lights and handling a large number of light sources.

Until the light source limitations are addressed, I believe software renderers do have an advantage.
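To make the limitation concrete, here is a rough sketch (not taken from any actual renderer) of what a fragment shader of that generation looks like: the GLSL built-in gl_MaxLights caps how many lights it can even consider, typically 8, whereas a software renderer's illuminance loop can walk over as many lights as the scene has. gl_MaxLights, gl_LightSource[] and gl_FrontMaterial are GLSL built-ins; numLights is an illustrative uniform I made up for the sketch.

// Rough sketch of an OpenGL 2.0 (GLSL 1.10) fragment shader doing diffuse lighting.
varying vec3 N;        // eye-space normal from the vertex shader
varying vec3 P;        // eye-space position from the vertex shader
uniform int numLights; // how many light slots the application actually filled

void main()
{
    vec3 Nf  = normalize(N);
    vec3 col = vec3(0.0);
    // The loop can never see more than gl_MaxLights lights (typically 8);
    // a software renderer's illuminance() loop has no such ceiling.
    for (int i = 0; i < gl_MaxLights; ++i)
    {
        if (i >= numLights)
            break;
        // assume point lights here, i.e. position.w == 1.0
        vec3 L = normalize(gl_LightSource[i].position.xyz - P);
        col += gl_FrontMaterial.diffuse.rgb
             * gl_LightSource[i].diffuse.rgb
             * max(dot(Nf, L), 0.0);
    }
    gl_FragColor = vec4(col, 1.0);
}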

Cheers

Nicholas Yue
Member
12544 posts
Joined: July 2005
Online
My opinion is that for the short-term future, hardware rendering will not be directly comparable to software rendering; the most likely scenario is probably hardware-assisted rendering to boost some common low- and high-level intensive rendering tasks. (mental ray can render shadowmaps with OpenGL, am I right?) But really, the power of OpenGL 2.0 and Cg for us is the added power for previsualisation. A certain amount of lighting design is really essential to good previs, and the faster we can all get feedback the better. I've seen some nice previs setups for shows using the Maya/Cg shaders and I've got no doubt that Houdini will benefit hugely from OpenGL 2.0 support in the future. It seems to me that support for OpenGL 2.0 is a wiser decision than jumping on Cg right now because ultimately it is not married to NVidia. AFAIK, only the Wildcat VP has alpha support for OpenGL 2.0, right?
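As a rough illustration of the kind of task that maps well onto the hardware, the shadow-lookup half of a depth-map shadow is about this simple in a GLSL-style fragment shader. This is only a sketch with made-up names (shadowMap, shadowCoord), it assumes the depth map was already rendered from the light's point of view, and it isn't a claim about how mental ray actually does it:

// Rough sketch of a depth-map shadow test in a GLSL 1.10-style fragment shader.
uniform sampler2DShadow shadowMap; // depth map rendered from the light's view
varying vec4 shadowCoord;          // fragment position in the light's clip space
varying vec3 N, L;                 // eye-space normal and light direction

void main()
{
    // shadow2DProj does the projective divide and the depth compare in hardware
    float visibility = shadow2DProj(shadowMap, shadowCoord).r;
    float diffuse    = max(dot(normalize(N), normalize(L)), 0.0);
    gl_FragColor = vec4(vec3(diffuse * visibility), 1.0);
}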

Cheers,
Jason
Jason Iversen, Technology Supervisor & FX Pipeline/R+D Lead @ Weta FX
also, http://www.odforce.net [www.odforce.net]
Member
4140 posts
Joined: July 2005
Offline
Yup, I'd agree with that, although I'm not sure hardware rendering will ever be on top - it's simply too hardwired (duh). Software gives you the ultimate freedom to code clever tricks any way you want without concern for a hardwired pipeline.

I'll admit I'm concerned for the future of NVidia in our biz since obviously they have a political motive for pushing Cg support, and less so for OpenGL2. I too am looking forward to OGL2!

Cheers,

J.C.
John Coldrick
Member
252 posts
Joined: July 2005
Offline
But is it so hardwired? I was under the impression that the whole point of the new hardware is programmability, so that it isn't as hardwired as it has been in the past.

I mean, things like vector math (dot products, cross products), depth sorting algorithms, texture filtering, and other operations that are so prevalent in CG rendering should run much faster on graphics hardware than on a CPU, I would think.

Sure, there are non-standard things that perhaps won't be a lot faster, but isn't the point to move the standard, vector-heavy work onto optimized hardware that deals much better with vector math than a CPU that is designed to handle everything?

The shaders that are being written for OGL 2.0 look remarkably similar to RenderMan shaders, so I think this is really the future for our industry (at least certain parts of it).

http://www.extremetech.com/article2/0,3973,1154426,00.asp [extremetech.com]
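For instance, a simple diffuse/specular surface in the OpenGL 2.0 shading language reads a lot like its RenderMan equivalent. This is just a rough sketch with made-up parameter names (surfaceColor, Kd, Ks, roughness, lightDir), not code from any shipping shader:

// Rough sketch of a “plastic”-style surface in a GLSL fragment shader.
// The uniforms play the same role as the instance parameters of an RSL surface shader.
uniform vec3 surfaceColor;
uniform float Kd, Ks, roughness;
uniform vec3 lightDir;   // direction toward the light, in eye space

varying vec3 N;          // interpolated normal from the vertex shader
varying vec3 I;          // direction from the eye to the surface point

void main()
{
    vec3 Nf = normalize(faceforward(N, I, N)); // like RSL's faceforward(normalize(N), I)
    vec3 L  = normalize(lightDir);
    float diff = max(dot(Nf, L), 0.0);         // the guts of RSL's diffuse(Nf)
    vec3 H  = normalize(L - normalize(I));
    float spec = pow(max(dot(Nf, H), 0.0), 1.0 / roughness); // roughly specular(Nf, -I, roughness)
    gl_FragColor = vec4(surfaceColor * Kd * diff + vec3(Ks * spec), 1.0);
}

Swap gl_FragColor for Ci and the uniforms for shader parameters, and it is structurally the same program.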

-Craig
Member
4140 posts
Joined: July 2005
Offline
Well, it's all just number crunching, right? There's some hardware that's hardwired to be particularly fast at a certain type of operation - managing z-buffers, for instance. That's great, and being able to offload some of that from the CPU is great. I'm not against it or anything; I'm impressed by it. I'm just saying that, in the end, you can program the hardware any way you want, but by definition it's designed to do certain kinds of operations that are *currently* common for graphics. With software-only you can devise whatever you want - you're not restricted to a given pipeline, no matter how useful it may be. It's all just CPUs, really, whether it's the main system CPU or the chips on the graphics card. It's the graphics hardware companies that are pushing the notion that hardware can “do it all” and in realtime. That's just to sell more units. Software-only is always pushing the boundaries…

Like I said, I'm all for it - I love the advances being made. I just have trouble with some of the marketing types and those that believe them.

Cheers,

J.C.
John Coldrick
Member
252 posts
Joined: July 2005
Offline
You're right of course, but the majority of stuff being generated today isn't cutting edge.

GPUs are currently getting faster at a much higher rate than CPUs, and that points to a future where the majority of the rendering being done today can be done in real time, or at least much, much faster than on CPUs. People still watch “Toy Story”, and that could probably be completely hardware rendered with today's graphics cards (once the OpenGL 2.0 standard is finalized).

We will always need software rendering, especially in the visual effects world, but I am very excited at the prospect of doing acceptable-quality renderings for visual development, pre-vis, SHOP shader development, or even final images for a kids' straight-to-video animated movie on a laptop while sitting on the beach in Bali.

I realize this won't be of as much use to most Houdini folks, since they tend to be more “cutting edge” types who like to do things that others can't do and are focused on high-quality film output, so mainstream hardware rendering doesn't appeal as much to them. But with the new character tools and pipeline improvements in Version 6, Houdini could be in a perfect position for some guy doing the next “Veggie Tales” in his garage (or a loosely knit web of folks all over the world passing OTLs back and forth). Real-time rendering would be a big boon to him.

We aren't there yet, but the future looks real rosy to me (a guy who dreams of doing his own “Veggie Tales” {in concept, not in quality} cheaply and quickly someday).

-Craig
Member
1192 posts
Joined: July 2005
Offline
craiglhoffman
…but I am very excited at the prospect of doing acceptable-quality renderings for visual development, pre-vis, SHOP shader development, or even final images for a kids' straight-to-video animated movie on a laptop while sitting on the beach in Bali.
…but with the new character tools and pipeline improvements in Version 6, Houdini could be in a perfect position for some guy doing the next “Veggie Tales” in his garage (or a loosely knit web of folks all over the world passing OTLs back and forth). Real-time rendering would be a big boon to him.
-Craig

Just look at how fast Kaydara Motionbuilder can be, using just real-time rendering. And the output you get is quite OK. I think this could bring a huge gain to Houdini, in terms of workflow efficiency. Houdini can “feel” slow sometimes, when dealing with characters.

Dragos
Dragos Stefan
producer + director @ www.dsg.ro
www.dragosstefan.ro
Member
1192 posts
Joined: July 2005
Offline
Just read this…
http://www.3dlabs.com/whatsnew/pressreleases/pr03/03-10-29-opengllinux.htm [3dlabs.com]

Dragos
Dragos Stefan
producer + director @ www.dsg.ro
www.dragosstefan.ro