Memory allocation errors when rendering from Halo
Siavash Tehrani
Hi, this is on WinXP, build .404

I am rendering out a sequence from Halo that uses about 1300 frames. The frames all have a 32-bit fp plane, so they are quite big. I'm rendering to jpg, and it appears that Houdini never relinquishes any memory when it writes the files to disk. Once it passes the 1GB mark, which happens after every 90 frames or so, it crashes.

Also, somewhat on topic: the jpg output quality setting is not remembered when you save a file. Doh!
malexander
What type of files are you loading in? What resolution are you using? Finally, what sorts of nodes are you using?

One thing you can do is check the compositing cache size - Settings->MainPrefs->Compositing, Cook Cache Size. If the size is close to 1Gb, try reducing it. Also, is ‘Reduce Cache Size when inactive’ on?

Thanks!
pbowmar
Sounds identical to the problems I've been having too. I'll try these things, Mark, and let you know.

Cheers

Peter B
malexander
Any word on this at all?
Siavash Tehrani
Hi twod, here's a bit more info. The files are all .picnc. COPs used:

Add
Blend
Blur
Border
Color
Composite
Defocus
Depth of Field
Extend
Font
Layer
Shift
Snip

There are 43 nodes total. In any case, I don't believe it's a bug, but an optimization issue. I finished rendering the sequence before I had a chance to test the Cook Cache Size option. Reduce Cache Size when Inactive was on. I will re-render the whole thing after I've rendered some additional frames.

I'm curious, why doesn't Halo just clear the cache after the frame (or let's say a number of frames) has been rendered to disk? I don't believe the cached data from a cooked frame is useful to subsequent frames (at least not with this network).

I should mention that I get that memory allocation error whenever Houdini goes above ~1,300,000kb of RAM usage; it's not unique to compositing. I have ~1,830,000kb of RAM total, so I guess Houdini is just running out of memory that it can tap?
malexander
Ah, you're running into the 32-bit OS limit.

Basically, with a 32 bit OS, you have 2Gb of address space for a single process - regardless of how much memory or swap you have. It's called the ‘virtual address space’. And this is what you're running out of.

Now, you may think 1.3Gb is really short of the mark - but unfortunately, due to memory fragmentation and reserved memory for devices, your applications have less than 2Gb to play with. It varies per system, but 1.3-1.5Gb is often the limit.
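
To see this for yourself, here's a quick sketch in Python (nothing to do with Houdini's internals, and it assumes a 32-bit interpreter) that just grabs memory until the OS refuses. It typically gives up somewhere in that same 1.3-1.5Gb range even on a machine with plenty of RAM, and 1,300,000kb is roughly 1.24Gb, so your crash point fits the pattern:

# Illustration only: on a 32-bit process this usually fails well below the
# theoretical 2Gb per-process limit, because DLLs, reserved regions and
# fragmentation eat into the usable address space.
chunks = []
allocated_mb = 0
try:
    while True:
        chunks.append(bytearray(100 * 1024 * 1024))  # grab 100 MB at a time
        allocated_mb += 100
except MemoryError:
    print("Allocation failed after ~%d MB" % allocated_mb)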

Halo doesn't clear the cache because of still images & static animations. Many un-animated generators create these types of sequences, which are essentially time-invariant. So, if the results of these nodes are cached, there's less work to do. And over hundreds of frames, that's a good thing. Also, if you happen to be blending frames with Time Scale or other time-based COPs, this caching comes in very handy.

However, if all your inputs are animated and you're running into this issue, make sure that your Cook Cache size is set to something smaller (say, 512Mb), or insert the command ‘compfree’ into the Post Frame script of the ROP. Halo will always use as much memory as it needs to complete a cook, so even with the Cook Cache set to 10Mb it may temporarily exceed that setting while cooking a frame. So you could try setting it to a very low value as well (but be aware that interactivity in the COPs viewer will suffer as a result).
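
In rough pseudo-code terms (a Python sketch only; render_frame, write_to_disk and flush_comp_cache are made-up stand-ins, not Houdini calls), putting ‘compfree’ in the Post Frame script gives you a loop shaped like this:

cache = {}  # stands in for the COP cook cache

def render_frame(frame):
    data = bytearray(4 * 1024 * 1024)   # pretend this is a cooked frame with a 32-bit fp plane
    cache[frame] = data                 # cooked results land in the cache
    return data

def write_to_disk(data, frame):
    pass                                # stands in for writing the .jpg to disk

def flush_comp_cache():
    cache.clear()                       # the effect of ‘compfree’ on the cook cache

for frame in range(1, 1301):            # the ~1300-frame sequence
    img = render_frame(frame)
    write_to_disk(img, frame)
    flush_comp_cache()                  # Post Frame script: compfree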
Siavash Tehrani
Thanks twod.

Let me get this right…
First off, the animation is basically a fly-by, so there are no static elements, like a background. Also, I don't really need interactivity for this network. Basically, I can set the Cook Cache to something like 100MB and (unless cooking a frame takes more RAM) Halo should not use more than this (not taking other Houdini overhead into account)?

I can't test it right now, as my comp is occupied.
malexander
When you specify a Cache size, this is the amount of data that is kept around after it has been used in case it is needed again. This is not the amount of memory that COPs is restricted to use while cooking. However, because COPs cook in tiles (200x200), most of the time you won't see a lot of memory being sucked up by COPs. Once a cook is completed, all the memory used by COPs that isn't image tiles is freed (and if the cache size has been exceeded, it is immediately pruned).

This may seem ‘bad’, but all the other OPs do the same thing - allocate data as needed for the cook, and then free it. Usually, this isn't a big deal at all, as it's relatively minimal in the grand scheme of things. Other OPs also store the last cooked dataset in each node - which COPs doesn't do. Instead the last cooked dataset may be stored in the COP cache, which is what you set the size of in the preferences.

So, yes, you could even set the COP cache to 10Mb and COPs would cook correctly, but each new cook would likely have to recompute everything. There are quite a few other things going on in parallel inside Houdini, though, like parm caching and such, so don't expect the memory after 1 cook to be limited to (previous memory usage) + (cop cache size). It should be somewhat close, within 10Mb or so. If this continues to increase after every frame without stabilizing, then there's a problem.
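
If it helps to picture it, here's a little Python sketch of the kind of size-capped cache being described (purely illustrative, not Houdini's actual implementation): cooked results are kept around in case they're needed again, and once the total goes over the configured limit the oldest entries are pruned.

from collections import OrderedDict

class CookCacheSketch:
    """Illustrative size-capped cache: keep cooked data until the cap is hit."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.entries = OrderedDict()        # key -> cooked data, oldest first
        self.total = 0

    def store(self, key, data):
        self.entries[key] = data            # keep it in case it's needed again
        self.total += len(data)
        self.prune()                        # enforce the cap once the cook is done

    def fetch(self, key):
        data = self.entries.get(key)
        if data is not None:
            self.entries.move_to_end(key)   # recently used entries survive longer
        return data

    def prune(self):
        while self.total > self.max_bytes and self.entries:
            _, old = self.entries.popitem(last=False)   # drop the oldest entry
            self.total -= len(old)

cache = CookCacheSketch(512 * 1024 * 1024)  # e.g. the 512Mb value suggested earlier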
Siavash Tehrani
Thanks again twod.

I set the option to 150MB and that kept memory usage in check. I re-rendered the whole thing without incident.