Is 16GB 5080 enough for Karma, COPS, VFX etc in H21?

Member
395 posts
Joined: March 2009
Offline
Thinking of selling my RTX 3090 24GB and replacing it with an RTX 5080 16GB…

Hoping that rendering will get much faster, but…

Will the decrease in VRAM negatively impact speed, or the ability to deal with larger scenes in general, in Karma, Solaris, COPs, solvers, ML, and other GPU-accelerated things like deformers in H21?

Regards,
Luke
Member
62 posts
Joined: September 2008
Offline
Not a good idea.
16GB of VRAM is not sufficient for even a small-scale MPM simulation. Furthermore, the new GPU-based sparse pyro solver will need more VRAM.
Member
8 posts
Joined: March 2013
Offline
Keep the 3090
Member
395 posts
Joined: March 2009
Offline
Rusoloco73
Keep the 3090

I thought Karma was able to utilize the new GPU better and more efficiently, and with all the machine learning nodes I thought the tensor cores would be more important, no?
Member
654 posts
Joined: June 2006
Offline
LukeP
Rusoloco73
Keep the 3090

I thought Karma was able to utilize the new GPU better and more efficiently, and with all the machine learning nodes I thought the tensor cores would be more important, no?

More RAM on the GPU is always better. Even when there are optimisations, you can't compensate for less memory.
Member
16 posts
Joined:
Offline
LukeP
Rusoloco73
Keep the 3090

I thought Karma was able to utilize the new GPU better and more efficiently, and with all the machine learning nodes I thought the tensor cores would be more important, no?

I think speed vs. VRAM is a straightforward situation:
- if it's slow, you can solve it by waiting longer,
- if it doesn't fit into VRAM, you can't solve it.
It's up to you whether you want to risk the second option.
Edited by villain - August 13, 2025 06:54:35
Member
395 posts
Joined: March 2009
Offline
I was hoping that maybe Karma, or the animation, ML, or solver systems, would partition complex scenes and load/unload them piece by piece to fit within less RAM.
Member
285 posts
Joined: June 2016
Offline
LukeP
I was hoping that maybe Karma, or the animation, ML, or solver systems, would partition complex scenes and load/unload them piece by piece to fit within less RAM.
Same here. I have an RTX 4070 Ti Super with 16GB VRAM, a darn good GPU. I will try to pack as much as I can into that 16GB!!! 😅
Staff
5261 posts
Joined: July 2005
Offline
One thing that wasn't really covered in the Keynote but significantly impacts how VRAM is used within Houdini is the new VRAM resource system. Every system in Houdini that uses VRAM registers itself to a VRAM resource (one per GPU). If an allocation fails, it will request that the GPU resource attempt to free up VRAM -- it then notifies the other GPU clients and they attempt to free up VRAM. The viewport will downres textures, XPU will free up non-essential data structures, COPs will free cached buffers, etc. Note this currently doesn't work across Houdini sessions or husk invocations (hopefully soon).
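
To picture the pattern, here's a minimal sketch in Python (purely illustrative -- the class and method names are invented for this post, not the actual HDK interface, which is C++ inside Houdini):

class VRAMResource:
    """One per GPU; every VRAM-using system registers as a client.
    Illustrative only -- not the real Houdini API."""

    def __init__(self, gpu_id):
        self.gpu_id = gpu_id
        self.clients = []        # e.g. viewport, XPU, COPs caches

    def register(self, client):
        self.clients.append(client)

    def request_alloc(self, requester, nbytes):
        if requester.try_alloc(nbytes):
            return True
        # Allocation failed: ask every other client to shed
        # non-essential VRAM (downres textures, drop cached
        # buffers, ...), then retry the allocation once.
        for client in self.clients:
            if client is not requester:
                client.free_noncritical_vram()
        return requester.try_alloc(nbytes)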

That being said, I'd still prefer the 24GB over 16. If the other clients have freed all their VRAM and an allocation still fails, it'll fail. And if other GPU clients are freeing data, they may need to recreate it later, so this will eat into your performance gain with the 5080. It also can't do anything about VRAM usage by other applications. So avoiding the low-VRAM situation is still the ideal scenario -- this feature simply tries to keep Houdini cooking or rendering when it occurs.
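
If you want to watch how close you are to that situation while working, standard NVIDIA tooling is enough (nothing Houdini-specific); this prints used/total VRAM every 2 seconds:

    nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 2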
Member
395 posts
Joined: March 2009
Offline
Thanks all.
Malexander - does H21 support multiple GPUs? If I added another graphics card, would Karma and other resources use it?

I was looking at 5090 but the prices are insane for mortals like me lol.
Member
395 posts
Joined: 3月 2009
オフライン
malexander
One thing that wasn't really covered in the Keynote but significantly impacts how VRAM is used within Houdini is the new VRAM resource system. Every system in Houdini that uses VRAM registers itself to a VRAM resource (one per GPU). If an allocation fails, it will request that the GPU resource attempt to free up VRAM -- it then notifies the other GPU clients and they attempt to free up VRAM. The viewport will downres textures, XPU will free up non-essential data structures, COPs will free cached buffers, etc. Note this currently doesn't work across Houdini sessions or husk invocations (hopefully soon).

That being said, I'd still prefer the 24GB over 16. If the other clients have freed all their VRAM and an allocation still fails, it'll fail. And if other GPU clients are freeing data, they may need to recreate it later, so this will eat into your performance gain with the 5080. It also can't do anything about VRAM usage by other applications. So avoiding the low-VRAM situation is still the ideal scenario -- this feature simply tries to keep Houdini cooking or rendering when it occurs.

Also, does this hopefully mean that, like Karma XPU, the solvers using OpenCL will switch to the CPU instead of crashing?
Staff
5261 posts
Joined: July 2005
Offline
LukeP
Thanks all.
Malexander - does H21 support multiple GPUs? If I added another graphics card, would Karma and other resources use it?

I was looking at 5090 but the prices are insane for mortals like me lol.

You can specify a second GPU as an OpenCL device. I believe XPU will use multiple GPUs, but I'm not 100% sure on that. The viewport runs on the GPU used by the Window Manager, and if COPs is on the same GPU, it can display the image very efficiently. If COPs is using a different GPU, it'll need to download the image and upload it to the other GPU. You get the benefit of an independent memory pool, but lose a bit of interactivity in the process.
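
For example, something along these lines in houdini.env should point OpenCL at the second card (a sketch -- the device number depends on how your system enumerates GPUs):

    HOUDINI_OCL_DEVICETYPE = GPU
    HOUDINI_OCL_DEVICENUMBER = 1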

The viewport team doesn't have any plans to use multiple GPUs; the benefit doesn't justify the complexity.

Also, does this hopefully mean that, like Karma XPU, the solvers using OpenCL will switch to the CPU instead of crashing?

This system is independent of that, but a sim will send a request for other VRAM users to free up memory when it can't allocate memory, so it should help minimize that situation.