Do pyro sims benefit from dual-GPU cards?

User Avatar
Member
575 posts
Joined: Nov. 2005
Offline
Now that the new AMD card is out (https://www.amd.com/en/products/professional-graphics/radeon-pro-duo-polaris), it would be very interesting to hear whether voxel sims use both GPUs and whether they share the memory. This card has 32 GB of RAM, which would be amazing for sims, but I have no idea how dual-GPU cards behave in sims. Is the memory split between the two GPUs, reducing the size of the containers that can be solved? Do they work on the same voxel container? And is Houdini able to use the two GPUs on one sim at all?

I would prefer an NVIDIA solution, as they have superior drivers, but they are terrible when it comes to RAM. Come on, 11 GB! I hope there will be a Titan version with much more RAM; the Quadro cards are way too expensive. The Titans were perfect in their time.
User Avatar
Member
636 posts
Joined: June 2006
Offline
If you are an indie user, get a Threadripper with 128 GB of RAM and split the sims; I can't see a better solution that works at that price. Over 512 GB of RAM is even cooler (for an indie mostly overkill, but it's possible). GPUs are cool, but the driver issues are sometimes not funny. See the sketch below for what splitting could look like.
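
A minimal sketch of what "splitting the sims" could look like in practice, assuming the effect is already broken into independent clusters, each cached by its own ROP. The hip path and ROP names here are hypothetical:

import hou

# Hypothetical setup: pyro_split.hip contains three independent cluster
# sims, each cached by its own File Cache / Geometry ROP under /out.
hou.hipFile.load("/jobs/shot_010/pyro_split.hip")

# Cook each cluster without the UI (run this file with hython).
# Each sim then only has to fit its own slice of the 128 GB.
for rop_name in ("cache_cluster_a", "cache_cluster_b", "cache_cluster_c"):
    hou.node("/out/" + rop_name).render(frame_range=(1, 240))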
User Avatar
Member
2036 posts
Joined: Sept. 2015
Offline
It would be interesting to know how Houdini takes advantage of more cores in sims and rendering (Mantra),

in the context of comparing the 16-core Threadripper 1950X to the 32-core Threadripper 2990WX.
User Avatar
Member
575 posts
Joined: Nov. 2005
Offline
Hm, interesting points, but not exactly what I asked for.

My experience with NVIDIA GPUs is quite good: the drivers are stable, Houdini works very well with them, and they still beat any CPU.

As for the Threadripper, some pyro benchmarks would be very helpful, but I expect the 2990WX to drop in performance, since half of its cores have no direct memory access and the other half is slowed down by having to feed them. Also, none of the render benchmarks I have seen so far use large datasets; as soon as you have massive texture access, performance will drop.
But let's see; it's just a wild guess.
User Avatar
Staff
5156 posts
Joined: July 2005
Offline
To answer your original question: currently, all OpenCL operations can only target one device at a time. So having two GPUs doesn't speed up sims, but it does help if you keep the graphics GPU separate from the compute GPU. You'll have a lot more memory to work with, and graphics and compute won't swap one another out when VRAM gets tight.
User Avatar
Member
575 posts
Joined: Nov. 2005
Offline
Thanks for the clarification, twod. Good to know.
User Avatar
Member
258 posts
Joined: July 2006
Offline
How about targeting microsolvers to use separate cards? Would that double the VRAM usage but give faster solves? Can we tag microsolvers with which GPU to use?
Edited by tricecold - Nov. 28, 2018 12:22:59
Head of CG @ MPC
CG Supervisor/ Sr. FX TD /
https://gumroad.com/timvfx
www.timucinozger.com
User Avatar
Member
4495 posts
Joined: Feb. 2012
Offline
If you are a CEO FX TD, it might be good to pick up one of these:

Senior FX TD @ Industrial Light & Magic
Get to the NEXT level in Houdini & VEX with Pragmatic VEX! (www.pragmatic-vfx.com)

youtube.com/@pragmaticvfx | patreon.com/animatrix | animatrix2k7.gumroad.com
User Avatar
Member
3 posts
Joined: April 2015
Offline
twod
To answer your original question: currently, all OpenCL operations can only target one device at a time. So having two GPUs doesn't speed up sims, but it does help if you keep the graphics GPU separate from the compute GPU. You'll have a lot more memory to work with, and graphics and compute won't swap one another out when VRAM gets tight.

How do we specify the GPU to use?
User Avatar
Member
30 posts
Joined: July 2013
Offline
kaymanas
twod
To answer your original question: currently, all OpenCL operations can only target one device at a time. So having two GPUs doesn't speed up sims, but it does help if you keep the graphics GPU separate from the compute GPU. You'll have a lot more memory to work with, and graphics and compute won't swap one another out when VRAM gets tight.

How do we specify the GPU to use?

I am very curious about this as well, as I have a dual-GPU system on the way.
User Avatar
Member
900 posts
Joined: Feb. 2016
Offline
I think you should look at Preferences > Miscellaneous > OpenCL Device.
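
For batch work you can also do it with Houdini's documented OpenCL environment variables, HOUDINI_OCL_DEVICETYPE and HOUDINI_OCL_DEVICENUMBER. A minimal sketch of a launcher, assuming your compute card enumerates as the second GPU and that sim_cache.py is a hypothetical hython script that cooks your sim ROP:

import os
import subprocess

# Pin Houdini's OpenCL compute to a specific GPU before launching.
# The device index (1 = second device) is an assumption about how
# your cards enumerate on your system.
env = os.environ.copy()
env["HOUDINI_OCL_DEVICETYPE"] = "GPU"
env["HOUDINI_OCL_DEVICENUMBER"] = "1"

# sim_cache.py is a hypothetical hython script that cooks your sim ROP.
subprocess.run(["hython", "sim_cache.py"], env=env)

The same two variables can go in your houdini.env if you want the choice to stick for interactive sessions.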
User Avatar
Member
23 posts
Joined: July 2017
Offline
How can it benefit from dual GPUs if it barely benefits from one? Go with CPU cores: it's way cheaper, and you can take as much RAM as you need.
User Avatar
Member
2529 posts
Joined: June 2008
Offline
If you're running two or more GPUs, run your displays off the lowest-powered card and reserve the one with the most VRAM for OpenCL usage.

Have you tried out the example file at this link?
https://www.sidefx.com/forum/topic/25234/?page=1

As I get more confident with the UpRes system, I find myself running more lower-resolution sims, leveraging OpenCL for faster feedback. This lets me tweak the same simulation many more times instead of brute-forcing it through hours on the CPU. Once I get the look locked down, I upres it to a final VDB sequence, which is CPU-driven by default.
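
A small sketch of that workflow from Python; the node paths and the "opencl" parm name are assumptions about a typical pyro setup, so check your own solver's parameter list:

import hou

# Flip the low-res look-dev sim to OpenCL for fast iteration.
# "/obj/pyro_sim/pyrosolver1" and the "opencl" toggle are hypothetical
# names for whatever your scene actually uses.
solver = hou.node("/obj/pyro_sim/pyrosolver1")
solver.parm("opencl").set(1)

# Cook the low-res cache; once the look is locked, switch OpenCL off,
# upres, and write the final VDB sequence on the CPU.
hou.node("/out/lowres_cache").render(frame_range=(1, 120))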
Edited by Enivob - April 30, 2020 14:01:32
Using Houdini Indie 20.0
Ubuntu 64GB Ryzen 16 core.
NVIDIA RTX 3050, 8 GB RAM.