Jeff Lait

jlait

About Me

Expertise: Developer
Location: Not Specified

My Tutorials

Houdini 16.5 Masterclass | OpenCL
Houdini 16 Masterclass | Heightfields
Houdini 16 Masterclass | Compiled SOPs
H15 Masterclass | Grains
H15 Masterclass | Loops
H15 Masterclass | Distributed Simulations

Recent Forum Posts

Houdini can only load 1.5 TB of Flip Data then crashes! July 7, 2018, 9:27 a.m.

Awaiting more information….

madcat117
it always crashes at 1.5 TB to 1.4 TB of loaded RAM cache use!

Is this referring to memory_use.hip? Or your own FLIP loading tests?

jlait
Does Windows report any interesting messages when it takes Houdini down?

Ideally, if you can also try a Linux distro (can you run Apprentice off a thumb-stick Linux install?), we can get a good idea of whether this is something in Houdini or in the OS.

Thanks,

Houdini can only load 1.5 TB of Flip Data then crashes! July 3, 2018, 4:29 p.m.

We don't have any known limits at that point, but there are always surprises….

The biggest Linux machine I've run on is 1.5 TB, interestingly enough, so while that worked right up to 1.5 TB, it doesn't answer the question about going beyond :>

“Crash” can be a rather vague term. Does Windows report any interesting messages when it takes Houdini down?

If you can try on Linux, that would help swiftly separate whether this is an OS issue or a Houdini issue. The closest I can think of for a Houdini issue would be someone using an int32 to store a memory size in KB, but that would overflow closer to 2 TB.
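
To make that concrete, here is the back-of-the-envelope arithmetic (just plain Python, nothing Houdini-specific):

    # Where a signed int32 counting kilobytes would actually wrap --
    # plain arithmetic, not Houdini code.
    INT32_MAX = 2**31 - 1                  # largest signed 32-bit value
    wrap_bytes = (INT32_MAX + 1) * 1024    # counting in KB scales it by 1024

    print(wrap_bytes / 2**40)              # -> 2.0 (TiB), i.e. ~2 TB,
                                           #    past the reported 1.5 TB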

I can't find anything around 1 TB here: Windows Server 2016 seems to support right up to 24 TB.
https://docs.microsoft.com/en-gb/windows/desktop/Memory/memory-limits-for-windows-releases

Attached is a .hip file that uses 4 GB per frame by initializing 1024^3 volumes (and making sure they aren't displayed, so you don't use more memory…). It should be a lot faster for hitting the 1.5 TB limit. It might also reveal whether it is *how* we are allocating the memory that is failing.
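
For reference, the rough arithmetic behind the 4 GB figure, assuming one 32-bit float per voxel:

    # Rough arithmetic behind "4 GB per frame", assuming each voxel is a
    # 32-bit float (an assumption for illustration, not a file-format spec).
    voxels = 1024 ** 3                 # 1024^3 voxel grid per volume
    bytes_per_frame = voxels * 4       # 4 bytes per float voxel
    print(bytes_per_frame / 2**30)     # -> 4.0 GiB cached per frame

    target = 1.5e12                    # ~1.5 TB reported crash point
    print(target / bytes_per_frame)    # -> ~349 frames to get there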

A long while ago we had a 48 GB limit with Linux's default allocator because NVidia reserved the 2 GB address space, which caused sbrk() to fail and fall back to mmap(), which has a hardcoded limit of 64k mappings…. There might be a similar thing we are hitting here…
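
If we do end up suspecting the mapping count again, here is a rough Linux-only diagnostic sketch that compares a process's current mapping count against the kernel limit (vm.max_map_count); the /proc paths are standard, but treat this as a quick hack rather than an official tool:

    # Linux-only diagnostic sketch: compare a process's current number of
    # memory mappings against the kernel's per-process limit
    # (vm.max_map_count, commonly ~65530 by default).
    def map_count(pid="self"):
        # each line in /proc/<pid>/maps is one mapping
        with open(f"/proc/{pid}/maps") as f:
            return sum(1 for _ in f)

    with open("/proc/sys/vm/max_map_count") as f:
        limit = int(f.read())

    print(f"mappings: {map_count()} / limit: {limit}")
    # Pass a Houdini PID instead of "self" to watch it approach the limit.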

File size getting out of hand when creating terrain? June 20, 2018, 9:41 p.m.

You can also cache out a heightfield with a “File Cache” SOP or a File SOP, as it can be saved as .bgeo.sc. This is a 3D file format, but it stores the heightfield as a 2D volume, so it will round-trip seamlessly.
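
If you'd rather set that up from Python, a minimal sketch is below; the node path and node names are placeholders for this example, and the Write-mode menu index is from memory, so double-check it:

    # Minimal sketch of writing a heightfield out to .bgeo.sc from Houdini's
    # Python shell. "/obj/terrain" and "heightfield_out" are placeholder
    # names, not anything from your scene.
    import hou

    geo = hou.node("/obj/terrain")                # your terrain Geometry object
    hf = geo.node("heightfield_out")              # SOP producing the heightfield

    cache = geo.createNode("file", "hf_cache")    # plain File SOP
    cache.setFirstInput(hf)
    cache.parm("file").set("$HIP/geo/terrain.bgeo.sc")
    cache.parm("filemode").set(2)                 # 2 = Write Files (assumed index)
    cache.cook(force=True)                        # cooking writes the file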

The growing file size is probably due to HeightField Paint. To avoid having to re-apply the strokes every time you load the file, it caches out the final painting you did as a layer, which bloats the .hip file.