Evan Robinson

evanrudefx

About Me

Freelance VFX
EXPERTISE
VFX Artist
INDUSTRY
Film/TV

Connect

LOCATION
United States

Houdini Skills

Availability

Not Specified

My Tutorials

Sci-fi texture using Copernicus (Beginner)
Cleaning Photoscans (Beginner)
Tiling Textures with Random UV's (Beginner)
Loopable Rainy Window (Beginner)

Recent Forum Posts

COP OpenCL caching Aug. 29, 2025, 12:04 p.m.

Cedric4826
I have this issue too. One of my previous scenes with quite a few COP networks would overload my VRAM when opening. I have a 5080, so 16 GB of VRAM, but the scene ran fine previously.

When I reopened the scene in 20.5, VRAM usage stabilized at about 9 GB. I think this VRAM issue was newly introduced with H21; I don't think it has much to do with having too many COP nodes or with the resolution.

Thanks for the reply. Sadly I cannot open this exact scene in the previous version to check, but I opened an earlier version of this hip (one I was working on before Houdini 21) and it did pretty much the same thing, using all the VRAM.

AslakKS
I would try lowering the allowed VRAM usage in the Copernicus settings.

Thanks, this is much better (in my case) than setting the env variable, because it looks like the setting gets saved into the hip file, so I can set the VRAM limit per scene. I had missed this menu, so thanks.

COP OpenCL caching Aug. 28, 2025, 4:40 p.m.

Hello,

What is the best method to deal with this issue? When I open my hip file, all the Copernicus nodes cook and I instantly run out of VRAM (on a 4090). Then Houdini is slow and XPU falls back to CPU only (if I try to render in Solaris with Karma). Sometimes I have had to restart my PC. However, immediately after opening the scene I can run
copcache -c
or use the cache manager to clear the cache. After that, it works as normal. I can continue adding nodes, rendering with xpu in solaris, etc.
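That cache-clearing step could be automated with a small sketch like the one below. It assumes Houdini's built-in `hou` Python module (so it only runs inside Houdini's Python shell or a session startup script), and simply wraps the same `copcache -c` textport command mentioned above:

```python
# Run inside Houdini's Python shell; the hou module only exists there.
import hou

# Clear the COP/Copernicus image cache, same as typing
# "copcache -c" in the textport.
out, err = hou.hscript("copcache -c")
if err:
    raise RuntimeError(err)
```

`hou.hscript()` returns a `(stdout, stderr)` pair, so checking the second element catches a mistyped command.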

How is it that I can just clear the cache and go back to working as normal? I can literally put the view flag on any node (after clearing the cache) and it will display instantly. I saw there is an environment variable to limit the amount of VRAM used by COPs. I'm wondering what else I should do; it didn't seem like I should be hitting 24 GB of VRAM so fast. Am I just using too many nodes at too high a resolution (4K)?

Thanks

stash SOP heavier cost than freezing Aug. 14, 2025, 7:21 p.m.

Liesbeth_Levick
While lewis_T is absolutely right, that is an interesting observation that I don't know the answer to. I would recommend submitting it as a question to SideFX support. It might take a few days to hear back because of SIGGRAPH.

Oh, and if your issue is the time taken to read multiple frames from disk, it might help to put a Cache SOP after the filecache node, set its display flag, and then run through your frame range so the geometry is cached to memory for that Houdini session. Not sure if that is helpful in your use case, but it's a trick I only learned recently.
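The Cache SOP trick above could be scripted roughly as follows. This is a sketch for Houdini's Python shell; the node path `/obj/geo1/filecache1` is a hypothetical example, and it assumes the standard Houdini node type names (`cache` for the Cache SOP):

```python
import hou  # only available inside a running Houdini session

# Hypothetical path to an existing File Cache SOP.
filecache = hou.node("/obj/geo1/filecache1")
parent = filecache.parent()

# Drop a Cache SOP after it and make it the displayed node.
cache = parent.createNode("cache")
cache.setFirstInput(filecache)
cache.setDisplayFlag(True)
cache.setRenderFlag(True)

# Cook every frame in the playback range so the geometry
# stays in memory for the rest of the session.
start, end = hou.playbar.frameRange()
cache.cook(frame_range=(int(start), int(end)))
```

Cooking through the range once is what populates the Cache SOP's in-memory store; after that, scrubbing the timeline reads from memory instead of disk.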

Thanks, good info. I was making an HDA and trying to decide between stashing a few kilobytes' worth of geo or embedding a bgeo.sc in the HDA.