GPU versus CPU for rendering

Member
13 posts
Joined: April 2012
I posted this at the end of another thread in the technical forum but it's probably better to ask here.

When it comes to Mantra PBR IPR and Mantra PBR renders, what makes the most difference to rendering speed: the GPU, the CPU, or both?

Is there a good explanation somewhere about how offline rendering and IPR work behind the scenes when it comes to one's hardware? I've read the “understanding mantra” section in the help docs. But it doesn't help me with practical choices like upgrading hardware.

I'm now looking at upgrading my Nvidia Quadro FX3800, but as a newcomer I don't understand enough about how the rendering AND IPR work in Houdini 12 to make an informed decision on where to go with an upgrade.

For what it's worth, in my other life with Vray for Maya, all of my renders use HDR environment maps. I focus more on small numbers of highly detailed objects than on big environments. But in Maya I also spend a lot of time tweaking shaders in Vray RT.

So I guess I'm looking for advice on what hardware will best allow me to continue that process efficiently in Houdini.

And by the way, I love Houdini so far. It's a blast to use. I'm trying to figure out if I can move my entire workflow over from Maya easily.

Thanks for any advice!
Member
1390 posts
Joined: July 2005
Mantra does not currently use the GPU at any stage of rendering, and IPR also happens entirely on the CPU. That said, Houdini uses the GPU a lot for everything else: all of its GUI rendering, the viewports of course, and image display (mplay likes lots of VRAM!). The GPU can also assist Houdini in smoke simulations, in which case the memory on the card matters more than the number of CUDA/OpenCL cores.

Most people tend to choose recent GeForce cards (like the 580) for the better performance/price ratio. Unfortunately that doesn't bring much for GPU simulations, since 1.5 GB of RAM is too little to make interesting sims. From a rendering perspective the difference between a 580 and a Quadro 4000 is close to none, as long as you can live without the stability insurance Quadro provides for quite a lot of money.

I recently profiled the 560 and 580 carefully as alternatives to our old Quadro cards. They play nicely, although the biggest surprise was a serious regression in XSI viewport performance when comparing the Quadro 3600 and the GTX 580. XSI stayed on the Quadros; Houdini will get the new GeForces.
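
If you want to check what a given card actually exposes to OpenCL, here is a minimal sketch (assuming pyopencl and a working OpenCL driver are installed; none of this is Houdini-specific) that lists each GPU and how much memory it offers, since that is the figure that matters most for GPU sims:

import pyopencl as cl  # assumption: the pyopencl package is available

# List every OpenCL GPU and the memory it exposes -- for GPU-assisted
# smoke sims this number matters more than the core count.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        if not (device.type & cl.device_type.GPU):
            continue  # skip CPU and accelerator devices
        mem_gb = device.global_mem_size / (1024.0 ** 3)
        print("%s :: %s" % (platform.name, device.name))
        print("   global memory : %.2f GB" % mem_gb)
        print("   compute units : %d" % device.max_compute_units)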
Member
13 posts
Joined: April 2012
Thank you SYmek. That's extremely helpful information. I hope someday rendering can move onto GPUs since they're so much easier to upgrade. For now you've definitely helped me with the decision to stay on my FX3800.

But it's also great to hear about your experience with the GTX580.
Member
1390 posts
Joined: July 2005
gronk33
Thank you SYmek. That's extremely helpful information. I hope someday rendering can move onto GPUs since they're so much easier to upgrade. For now you've definitely helped me with the decision to stay on my FX3800.
But it's also great to hear about your experience with the GTX580.

I have rather the opposite feeling, but perhaps it's just my preference. For me all the GPU hype is a waste of time and resources. There are more cons than pros in GPU computing: instead of a single class of processor you have two; instead of a single compiler and set of libraries you have two (and at least three vendors with their fluctuating APIs); instead of one general enough memory model you have two devices communicating heavily through the bottleneck of the PCI bus, and two different programming paradigms.

If Intel hadn't made its deal with nvidia, they would already have a strong vectorized processor (Larrabee) that allows similar performance in the cases where GPU processing shines. It could be 5x slower and it wouldn't matter; considering the amount of hassle GPU programming brings to the table, nobody would bother with GPUs for that kind of gain. The GPU would stay great for realtime graphics, as it should, using both OpenGL and OpenCL alike (which is to say that the general purpose GPU is great, but not for general purpose programming).
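
To make the PCI-bus point concrete, here is a toy sketch (again assuming pyopencl plus numpy, nothing Houdini-specific) of the explicit host-to-device and device-to-host copies that every GPU compute job has to pay for before and after the kernel runs:

import numpy as np
import pyopencl as cl  # assumption: pyopencl and an OpenCL driver are available

ctx = cl.create_some_context()   # pick whatever OpenCL device is available
queue = cl.CommandQueue(ctx)

host_data = np.random.rand(1 << 20).astype(np.float32)   # lives in system RAM

# First trip across the bus: copy host memory into device memory.
mf = cl.mem_flags
dev_in = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=host_data)
dev_out = cl.Buffer(ctx, mf.WRITE_ONLY, host_data.nbytes)

# Trivial kernel: double every value, entirely in device memory.
program = cl.Program(ctx, """
__kernel void double_it(__global const float *src, __global float *dst)
{
    int i = get_global_id(0);
    dst[i] = 2.0f * src[i];
}
""").build()
program.double_it(queue, host_data.shape, None, dev_in, dev_out)

# Second trip across the bus: copy the result back into host memory.
result = np.empty_like(host_data)
cl.enqueue_copy(queue, result, dev_out)

Both copies cross the PCI bus, and that round trip is exactly the overhead I'm complaining about above.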