We don't have any known limits at that size, but there are always surprises….
The biggest Linux machine I've run on is 1.5TB, interestingly enough, so while that worked right up to 1.5TB, it doesn't answer the question about going beyond :>
“Crash” can be a rather vague term. Does Windows report any interesting messages when it takes Houdini down?
If you can try on Linux, that would help swiftly separate whether this is an OS issue or a Houdini issue. The closest I can think of for a Houdini issue would be someone using an int32 to store a memory size in KB. But that would overflow at around 2TB, not 1TB.
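Just to spell out that arithmetic (purely illustrative, nothing Houdini-specific): a signed 32-bit kilobyte counter gives out just short of 2TB, so it wouldn't explain a wall at 1TB.

#include <cstdint>
#include <cstdio>

int main()
{
    // A signed 32-bit counter holding a size in KB tops out at 2^31 - 1 KB.
    const int32_t max_kb = INT32_MAX;
    const double  max_tb = double(max_kb) / (1024.0 * 1024.0 * 1024.0); // KB -> TB
    std::printf("an int32 KB counter overflows at ~%.2f TB\n", max_tb); // ~2.00 TB
    return 0;
}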
I can't find any limit around 1TB documented here: Server 2016 seems to be enabled right up to 24TB.
https://docs.microsoft.com/en-gb/windows/desktop/Memory/memory-limits-for-windows-releases
Attached is a .hip file that uses 4GB per frame by initializing 1024^3 volumes (and making sure they aren't displayed so you don't use even more memory…). It should be a lot faster way of hitting the 1.5TB limit. It also might reveal whether it is *how* we are allocating the memory that is failing.
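If you want to rule Houdini out entirely, a rough standalone equivalent of that test (just a sketch of the same idea, not the attached file) is to grab 4GiB blocks and touch every page until something gives:

#include <cstdio>
#include <cstring>
#include <new>
#include <vector>

int main()
{
    // One 1024^3 float volume is 4GiB; keep allocating and touching blocks
    // of that size so the pages are actually committed, and see where it dies.
    const size_t chunk = size_t(4) * 1024 * 1024 * 1024;
    std::vector<char *> blocks;
    size_t total = 0;
    for (;;)
    {
        char *mem = new (std::nothrow) char[chunk];
        if (!mem)
        {
            std::printf("Allocation failed after %zu GiB\n", total >> 30);
            break;
        }
        std::memset(mem, 1, chunk);   // force the OS to commit the pages
        blocks.push_back(mem);
        total += chunk;
        std::printf("Committed %zu GiB\n", total >> 30);
    }
    return 0;
}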
A long while ago we had a 48GB limit with Linux's default allocator because NVIDIA reserved address space at the 2GB mark, which caused sbrk() to fail and fall back to mmap(), which has a default limit of around 64k mappings…. There might be a similar thing we are hitting here…
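If something similar were happening again, a quick thing to check on the Linux side is how close the process gets to the mapping cap (vm.max_map_count) before it dies. Just a sketch, not something we ship:

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    // The kernel's per-process mapping cap; defaults to roughly 64k on most systems.
    long max_maps = 0;
    std::ifstream limit("/proc/sys/vm/max_map_count");
    limit >> max_maps;

    // Count the mappings this process currently holds.
    long used = 0;
    std::ifstream maps("/proc/self/maps");
    for (std::string line; std::getline(maps, line);)
        ++used;

    std::cout << "mappings in use: " << used << " / limit: " << max_maps << "\n";
    return 0;
}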