PhysX, CUDA, OptiX, DirectCompute, OpenCL?

Member
203 posts
I think I've read somewhere before that the answer is no, but I'd like to make sure: does Houdini support GPU acceleration/utilization through PhysX, CUDA, or other means?

Member
1631 posts
Joined: July 2005
You're right. Houdini doesn't support anything you mentioned.

Cheers!
steven

Member
203 posts
Dang, oh well. Thanks!

Member
219 posts
Joined: May 2008
Are there any plans to implement PhysX? It looks very good in XSI.

Member
203 posts
Yeah, I'm aware I posted this a while back, but even if we can't get PhysX in, is anyone interested in using CUDA, especially for rendering? I'm currently enrolled in an AP Computer Science course, so maybe by the end of this school year I'll be much closer to being able to design a plug-in.
Anyway, if anyone is in fact interested, here are links to the CUDA and PhysX developer sections.
CUDA: http://www.nvidia.com/object/cuda_learn.html [nvidia.com]
PhysX: http://developer.nvidia.com/object/physx.html [developer.nvidia.com]

EDIT: Or perhaps, with the advent of DirectX 11, we could try DirectCompute, as I'd assume that would work on both NVIDIA and ATI cards.

EDIT2: As of now, I'd think rendering would be the easiest thing to accelerate with CUDA or DirectCompute, given its already-parallel nature. If it can be done on 2, 4, or 8 cores, why not 128, 216, or 240? Obviously each GPU core is slower than a CPU core, but with 240 of them I don't think that would matter much; it might just mean different bucket sizes. I do wonder what kind of memory footprint this would have. Though I'd guess not much more than a game, right? Or is that wrong, given the complexity of offline rendering compared to a game? I'm mostly just trying to open this up for discussion, as I don't know much yet.
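To make the "already parallel" point concrete, here is a toy OpenCL C kernel sketch of the one-work-item-per-pixel idea; the shade() function is just a hypothetical stand-in, not anything an actual renderer does:

/* Hypothetical stand-in for a real shader -- just a gradient. */
float4 shade(int x, int y, int width, int height)
{
    return (float4)((float)x / width, (float)y / height, 0.0f, 1.0f);
}

/* Each work-item shades exactly one pixel, independently of all the
   others, which is why 128, 216, or 240 cores scale the same way
   2, 4, or 8 CPU cores do. */
__kernel void render(__global float4* image, int width, int height)
{
    int x = get_global_id(0);
    int y = get_global_id(1);
    if (x >= width || y >= height)
        return; /* guard for global sizes rounded up past the image */

    image[y * width + x] = shade(x, y, width, height);
}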
Also, does OpenCL support GPU cores?

Member
12479 posts
Joined: July 2005
Perhaps the Bullet Physics solver with the CUDA optimizations might be interesting?

http://bulletphysics.org/wordpress/?p=64 [bulletphysics.org]
http://docs.google.com/present/view?skipauth=true&id=dcphzzkx_1076cnwxq7gd [docs.google.com]
http://forums.odforce.net/index.php?/forum/58-opensource-bullet-physics-rbd-dop-solver/ [forums.odforce.net]
Jason Iversen, Technology Supervisor & FX Pipeline/R+D Lead @ Weta FX
also, http://www.odforce.net [www.odforce.net]

Member
203 posts
Hm, okay. I had heard a little about this Bullet solver, but was unaware it included any CUDA support. Thanks for the links.

Staff
5161 posts
Joined: July 2005
heydabop
EDIT: Or perhaps, with the advent of DirectX 11, we could try DirectCompute, as I'd assume that would work on both NVIDIA and ATI cards.

Well, that would limit the accelerated feature to Windows, which is not very desirable. OpenCL is a better way to go, but it's only just being released as a beta for Windows & Linux by ATI and Nvidia; OSX 10.6 has it built in natively.
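For what it's worth, the host side of OpenCL is the same C API regardless of whose implementation is installed, which is a big part of the portability argument. A minimal sketch that just lists the installed platforms (error checking omitted for brevity):

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint count = 0;

    /* Ask which vendor implementations (Nvidia, ATI, Apple, ...)
       are installed on this machine. */
    clGetPlatformIDs(8, platforms, &count);

    for (cl_uint i = 0; i < count; ++i) {
        char name[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof(name), name, NULL);
        printf("Platform %u: %s\n", i, name);
    }
    return 0;
}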

EDIT2: As of now, I'd think rendering would be the easiest thing to accelerate with CUDA or DirectCompute, given its already-parallel nature. If it can be done on 2, 4, or 8 cores, why not 128, 216, or 240? Obviously each GPU core is slower than a CPU core, but with 240 of them I don't think that would matter much; it might just mean different bucket sizes. I do wonder what kind of memory footprint this would have. Though I'd guess not much more than a game, right? Or is that wrong, given the complexity of offline rendering compared to a game? I'm mostly just trying to open this up for discussion, as I don't know much yet.

Rendering uses a lot of raytracing, which is very divergent. GPUs do not like divergent code paths, and tend to slow down considerably when this happens. Plus, the entire rendering engine would need to be ported to be really effective. Viewport rendering, on the other hand, might benefit from OpenCL, in terms of improving visual quality.
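To illustrate what "divergent" means here, a toy kernel (not taken from any actual renderer): GPUs execute groups of adjacent work-items in lockstep, so when those work-items take different branches, the hardware runs both branches and masks off the inactive lanes.

/* Two stand-in shading paths; in a real raytracer these would be
   different material shaders, intersection routines, etc. */
float path_a(float r) { return r * r; }
float path_b(float r) { return 1.0f - r; }

__kernel void trace(__global const float* rays, __global float* out)
{
    int i = get_global_id(0);

    /* If adjacent rays disagree on this test, the SIMD group runs
       BOTH branches serially, roughly halving throughput at this
       point. Raytracing branches like this constantly. */
    if (rays[i] > 0.5f)
        out[i] = path_a(rays[i]);
    else
        out[i] = path_b(rays[i]);
}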

Also, does OpenCL support GPU cores?

Yes. OSX 10.6 supports both GPU and CPU cores. The AMD driver is supposed to support both GPU and x86 CPU cores. I don't know about Nvidia, though (I can't see that they'd have much to gain from targeting CPUs, but they might - I know the most recent CUDA libs can, so it's possible their OpenCL lib does as well).
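The GPU/CPU distinction is built right into the API: the same device query takes CL_DEVICE_TYPE_GPU or CL_DEVICE_TYPE_CPU, and each platform simply reports whichever kinds it supports. A minimal sketch (error handling mostly omitted):

#include <stdio.h>
#include <CL/cl.h>

/* Print the devices of one type (GPU or CPU) that a platform exposes. */
static void list_devices(cl_platform_id platform, cl_device_type type,
                         const char *label)
{
    cl_device_id devices[8];
    cl_uint count = 0;

    if (clGetDeviceIDs(platform, type, 8, devices, &count) != CL_SUCCESS)
        return; /* no devices of this type on this platform */

    for (cl_uint i = 0; i < count; ++i) {
        char name[256];
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME,
                        sizeof(name), name, NULL);
        printf("%s device: %s\n", label, name);
    }
}

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint count = 0;
    clGetPlatformIDs(8, platforms, &count);

    for (cl_uint i = 0; i < count; ++i) {
        list_devices(platforms[i], CL_DEVICE_TYPE_GPU, "GPU");
        list_devices(platforms[i], CL_DEVICE_TYPE_CPU, "CPU");
    }
    return 0;
}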

Member
203 posts
Alright, it looks like I have a lot to learn before I try anything, although right now I'd guess OpenCL is the way to go. And I need to learn what divergent code is.

Member
203 posts
Hm, this could prove helpful in the future. NVIDIA just released a ray-tracing engine for GPUs called OptiX: http://developer.nvidia.com/object/optix-home.html [developer.nvidia.com]