Single and Multiple GPU questions

Member
2 posts
Joined: November 2014
Hello Houdini Community.
I am a new user and I have some questions regarding GPUs.

My Rig:

1 Titan X
1 GTX 780
1 Quadro 2000
1 Tesla M2090

1.- Is it possible to use multiple GPUs for OpenGL and thereby increase viewport performance? My viewport is very slow while Houdini is simulating with the sand solver. (The best combination I found was to use the Titan for particle simulations and the 780 for the viewport.)

2.- The eternal question: “gaming” vs. “professional” cards. Which is best for the OpenGL viewport and which is best for OpenCL particle simulations? I am planning to upgrade either the GTX 780 or the Tesla M2090. I set up a basic scene with a falling-sand solver and the Titan was about 10 s faster than the Tesla to complete 50 frames. I am not sure whether a more recent Tesla would outperform the Titan X. I do not need ECC, so please do not use that as a reason to go for professional cards.

3.- Does OpenCL take advantage of SLI for particle simulations (in case I buy a second Titan)? I know Houdini does not support multi-GPU configurations for OpenCL, but will Houdini treat two SLI cards as one?

4.- Is there a REAL performance boost from using NVIDIA Maximus? I read a couple of case studies stating that Maximus increases performance by routing graphics tasks to Quadros and compute tasks to Teslas, but you can set up that split manually. I can't find any video of people using Maximus with Houdini, and so far the best results I've seen on the internet use the Titan X.

Thank you for your time.
Member
4189 posts
Joined: June 2012
2. That question was settled a few years back. The main difference is driver stability: gaming-card drivers are retrofitted for the latest and greatest game and thus break a lot.

3. SLI is for splitting display output across cards, not compute.

4. It's vapourware, AFAIK.
Member
2 posts
Joined: November 2014
MartybNz
2. That question was settled a few years back. The main difference is driver stability: gaming-card drivers are retrofitted for the latest and greatest game and thus break a lot.

3. SLI is for splitting display output across cards, not compute.

4. It's vapourware, AFAIK.

Hey Marty, thank you very much for your reply.

I think I need to reformulate my question.
Can you confirm that Houdini's OpenCL and OpenGL performance increases when using professional cards instead of their gaming counterparts (for example, the Quadro M6000 / Tesla K40 vs. the Titan X)? I would like to know which type of card does OpenCL calculations faster in Houdini, and also which handles the viewport better.

In terms of driver stability, I've been using the GTX 780 for a long time in other CAD software without issues, and it hasn't broken in Houdini so far.

If SLI takes care of the display… will I get better viewport performance in Houdini?

Cheers.
Member
4189 posts
Joined: June 2012
For OpenCL performance I'd look at the ‘Processing Power (peak) GFLOPS’ of each card in the wiki below. Single-precision performance is the key figure here, regardless of Pro vs. GeForce. There is some DP processing, if selected, but there is also clever coding where DP is used for accumulation to reduce floating-point round-off errors; this helps with Maxwell's SP/DP disparity.

https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units
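
To see why accumulating in double precision helps, here is a minimal NumPy sketch of the round-off behaviour; it's an illustration only, not Houdini's actual OpenCL code:

```python
# Summing 0.1 a million times: a single-precision running total drifts,
# a double-precision accumulator stays close to the exact answer.
import numpy as np

values = np.full(1_000_000, 0.1, dtype=np.float32)

# Naive SP accumulation: error grows once the total dwarfs each addend.
sp_acc = np.float32(0.0)
for v in values:
    sp_acc += v

# Same SP addends, DP accumulator.
dp_acc = np.float64(0.0)
for v in values:
    dp_acc += np.float64(v)

print(sp_acc)  # noticeably off from 100000.0
print(dp_acc)  # ~100000.0 (plus the tiny float32 representation error of 0.1)
```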

For OpenGL I'm not up to date on where the performance ultimately comes from, i.e. which combination of raster units, shader clock, and OpenGL features, but those cards, bar the Tesla, are wickedly fast at the moment. The same rule for Pro vs. GeForce applies as before.

SLI is not used in Houdini AFAIK.

For driver stability the trick is to find a good driver and settle on it until there's a feature you need, e.g. the 350+ series has 64-bit OpenCL memory access, etc. Also note that the devs here are very quick to fix bugs when users report them promptly.
Member
47 posts
Joined: July 2014
Sorry if this is already covered, since I'm skimming, but are there plans to support multiple GPUs for the OpenCL solvers (pyro, FLIP, etc.) in Houdini?
Staff
5172 posts
Joined: July 2005
Mario Alejandro Bagnoli
Hello Houdini Community.
1.- Is it possible to use multiple GPUs for OpenGL and thereby increase viewport performance? My viewport is very slow while Houdini is simulating with the sand solver. (The best combination I found was to use the Titan for particle simulations and the 780 for the viewport.)

No, we don't support multiple GPUs for rendering a single viewport. The sand draw time is actually bound by z-sorting the sprites on the CPU, not so much the draw itself (sprites are sorted back-to-front). GPUs are bad at sorting, so it's done on the CPU. This can get expensive as the number of grains gets large.
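
A minimal sketch of what that per-frame CPU cost amounts to (hypothetical data, not SideFX's actual code): every time the view changes, millions of sprite positions have to be ordered by camera distance, an O(n log n) job that grows quickly with grain count.

```python
# Back-to-front sort of sprite positions by distance from the camera.
import numpy as np

rng = np.random.default_rng(0)
positions = rng.random((5_000_000, 3)).astype(np.float32)  # grain centers
camera = np.array([0.5, 0.5, 10.0], dtype=np.float32)

# Squared distance is enough for ordering; farthest-first so transparent
# sprites composite correctly back-to-front.
dist2 = ((positions - camera) ** 2).sum(axis=1)
draw_order = np.argsort(-dist2)  # O(n log n), repeated whenever the view moves
print(draw_order[:5])
```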

4.- Is there a REAL performance boost from using NVIDIA Maximus? I read a couple of case studies stating that Maximus increases performance by routing graphics tasks to Quadros and compute tasks to Teslas, but you can set up that split manually. I can't find any video of people using Maximus with Houdini, and so far the best results I've seen on the internet use the Titan X.

All Maximus does (I believe) is automatically route graphics tasks to the graphics card and compute tasks to the compute card. That's it. Pretty fancy name for a feature that should always have been present in the driver. It improves performance because you avoid memory thrashing between compute and graphics buffers in a single card's VRAM. With a Titan X, though, this likely does not matter as much as it used to; 12 GB is a pretty hefty amount of VRAM.
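
For reference, the manual version of that routing is just pinning compute to a chosen device. A sketch using pyopencl (assumed installed; Houdini selects its OpenCL device through its own settings, this only illustrates the general idea behind the split):

```python
# List OpenCL platforms/devices, then build the compute context on a
# specific GPU, leaving another card free to drive the display.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(platform.name, "->", device.name)

gpus = [d for p in cl.get_platforms()
        for d in p.get_devices(device_type=cl.device_type.GPU)]
ctx = cl.Context(devices=[gpus[0]])  # route compute work here
```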
Member
47 posts
Joined: July 2014
I think that has more to do with the limitations of OpenGL than with the driver. I don't know of any driver that can context-switch between CL and GL without performance issues. This is one advantage of the newer stateless APIs.