Does Houdini "over-commit" Memory? Memory allocation error

Member
122 posts
Joined: September 2018
Hey everyone,

For the last few days I've been troubleshooting my system because I wasn't able to write a moderately sized USD file through a USD ROP in LOPs.

Long story short: after disabling everything, I realized that Houdini was crashing not because of faulty memory or because it reached the physical RAM limit, but because it hit the virtual memory (commit) limit.

I haven't changed anything on my system for a couple of months, but recently I have been running into memory allocation issues constantly.
Now I wonder whether this is a Windows issue (maybe it reduced the virtual memory size without me knowing) or whether Houdini commits too much?

When writing something to disk, the committed memory starts out at roughly double what's currently in physical RAM, and then it seems to add a GB every one or two frames. This seems excessive, since the actually used memory was only growing by roughly 300 MB per frame, and I wasn't able to utilize more than 63 GB of my 128 GB before crashing.
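
In case anyone wants to reproduce the measurement outside Task Manager, here is a rough monitoring sketch (it assumes the psutil Python package; on Windows psutil reports the working set as rss and the committed, pagefile-backed size as vms):

```python
import time
import psutil

def find_houdini():
    # Look for the running Houdini process by executable name.
    for p in psutil.process_iter(["name"]):
        if p.info["name"] == "houdinifx.exe":
            return p
    return None

proc = find_houdini()
while proc is not None and proc.is_running():
    mem = proc.memory_info()
    # On Windows, rss is the working set and vms is the committed
    # (pagefile-backed) size, so the gap between them is the over-commit.
    print(f"working set: {mem.rss / 2**30:.1f} GB  committed: {mem.vms / 2**30:.1f} GB")
    time.sleep(5)
```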

This is a screenshot from my task manager:

(This is with the increased virtual memory. The RAM usually runs faster, but I disabled X-AMP while troubleshooting.)

Again, I am not sure where exactly the issue lies, but I remember being able to use all of my 128 GB without increasing the virtual memory just a couple of weeks ago; then the constant memory allocation errors suddenly started.

Maybe the next time somebody writes something to disk, they could watch their Task Manager to see whether they notice the same behaviour I do? Or is your virtual memory bigger by default?
Edited by No_ha - June 17, 2022 06:25:22

Attachments:
2022-06-17 12_07_24-TaskManagerMain.jpg (88.7 KB)

Member
7710 posts
Joined: July 2005
That's weird. Maybe you could take a look at houdinifx.exe in the Resource Monitor and see the actual memory usage breakdown. Another thing to try is setting the environment variable TBBMALLOC_PROXY_ENABLE to 0 prior to starting Houdini. Things will be MUCH MUCH slower because it disables the "fast" memory allocator, but it may be worth a try to see if it makes any difference.
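
For reference, a minimal sketch of launching Houdini with that variable set from Python (the install path is only an example; adjust it to your actual build):

```python
import os
import subprocess

# Copy the current environment and disable the TBB malloc proxy,
# using the variable name suggested in the post above.
env = os.environ.copy()
env["TBBMALLOC_PROXY_ENABLE"] = "0"

# Install path is an assumption; point it at your Houdini version.
houdini = r"C:\Program Files\Side Effects Software\Houdini 19.0.622\bin\houdinifx.exe"
subprocess.Popen([houdini], env=env)
```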
Member
122 posts
Joined: September 2018
Thank you for the answer, Edward. I haven't tried setting the environment variable yet because I was hoping that with the increased virtual memory everything would be fine. But now I believe there is something wrong with the caching in Solaris: writing out a USD file went fine, but the memory was never cleared afterwards, resulting in another memory allocation error when I continued working in Houdini.

Sorry for the phone photos, but with no memory left I couldn't even take screenshots (see the attachments below).

There is little reason for Houdini to keep so much data in physical memory, and absolutely no reason to keep so much committed in virtual memory that it crashes itself.
There isn't enough space on my system disk to allow for even more virtual memory...
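
For anyone who wants to test whether Houdini's internal caches are the culprit, here is a rough sketch for Houdini's Python Shell (these are long-standing hscript cache commands, but whether they free what Solaris is actually holding here is exactly the open question):

```python
import hou

# Ask each of Houdini's internal caches to clear itself ("-c"),
# then watch Task Manager to see whether the committed memory drops.
for cmd in ("sopcache -c", "objcache -c", "glcache -c", "texcache -c"):
    out, err = hou.hscript(cmd)
    if err:
        print(cmd, "->", err)
```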
Edited by No_ha - June 24, 2022 05:54:47

Attachments:
IMG_1168.jpg (3.9 MB)
IMG_1167.jpg (3.6 MB)

Member
7710 posts
Joined: July 2005
No_ha
Thank you for the answer, Edward. I haven't tried setting the environment variable yet because I was hoping that with the increased virtual memory everything would be fine. But now I believe there is something wrong with the caching in Solaris: writing out a USD file went fine, but the memory was never cleared afterwards, resulting in another memory allocation error when I continued working in Houdini.

I don't think you ever want to go into swap anyway, so you'd better hope you don't need more than the physical memory you have. The behaviour you're describing could be due to TBB malloc, so please try that first to rule out whether it really is something inside USD itself. If it is USD, then also try using the latest 19.0 daily build to see if it's still an issue.
Edited by edward - June 24, 2022 08:58:03
Member
26 posts
Joined: June 2010
Any news on this?

I am encountering the same problems. When loading a bunch of USD file caches for cloth, Solaris tries to consume large amounts of RAM. It works fine on 128 GB machines but crashes on 64 GB ones. Once the render actually kicks in, it consumes less than 64 GB of RAM; it's just the start that is problematic.
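
For what it's worth, here is a rough sketch of one way to keep that startup spike down using the USD Python API directly. It assumes the cloth caches come in as payloads, and the file and prim paths are placeholders:

```python
from pxr import Usd

# Open the stage with all payloads unloaded, so nothing heavy
# is pulled into memory up front.
stage = Usd.Stage.Open("/path/to/shot.usda", load=Usd.Stage.LoadNone)

# Then load only the branch you actually need, one at a time.
stage.Load("/World/cloth/hero_jacket")  # placeholder prim path
```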
Staff
4435 posts
Joined: July 2005
Even in the original post it isn't really clear what the root problem was... My best guess is that it was related to a bug in the USD library that was addressed in 19.5.332. But there are always going to be lots of ways to make USD consume lots of memory, bugs or no bugs. In the latest versions of 19.5 I am not aware of any bugs that are still contributing to this sort of problem.

So my suggestion would be to generate a simple hip file that demonstrates the issue you're seeing using simple synthetic data (my favorite test case is a very high resolution torus with an animated Mountain SOP). If starting from the ground up doesn't let you reproduce the issue in a shareable way, try going top down: remove elements from your problematic hip file until the memory issue goes away. Sending a hip file to support that shows "these are the nodes that are causing the problem", even if you can't provide the raw USD source data, may give us enough information to figure out what the problem is (whether it's a USD bug or just something in your hip file that can be worked around). Thanks!
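
If it helps, something like this in Houdini's Python Shell should build that synthetic test. A rough sketch: the node names are the standard SOP ones, but the Mountain version suffix and the offset parameter name may differ between builds:

```python
import hou

# Build a deliberately heavy test: a very dense torus plus an
# animated Mountain SOP, as described above.
geo = hou.node("/obj").createNode("geo", "memory_test")
torus = geo.createNode("torus")
torus.parm("rows").set(2000)   # very high resolution on purpose
torus.parm("cols").set(2000)

mountain = geo.createNode("mountain::2.0")  # version suffix may vary
mountain.setInput(0, torus)
# Animate the noise offset so the geometry re-cooks every frame.
# ("offsetx" is assumed here; check the parameter name in your build.)
mountain.parm("offsetx").setExpression("$F * 0.1")
mountain.setDisplayFlag(True)
mountain.setRenderFlag(True)
```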
Member
122 posts
Joined: September 2018
An update from my side:

I "fixed" the issue by massively increasing the available RAM swap in Windows. For me, it wasn't an issue of the actually consumed RAM, just the committed RAM that increased exponentially and eventually led to a crash.
I have monitored this a couple of times in the last few weeks and it seems like it doesn't overcommit as much anymore. It usually only commits a few GBs over the used RAM now.
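
For reference, this is roughly how the overall commit picture can be checked from Python (psutil assumed; psutil derives its Windows swap numbers from the page file, so treat them as approximate):

```python
import psutil

# Physical RAM vs. the page-file capacity that commits are charged against.
vm = psutil.virtual_memory()
sw = psutil.swap_memory()
print(f"physical RAM: {vm.total / 2**30:.0f} GB ({vm.percent}% used)")
print(f"page file:    {sw.total / 2**30:.0f} GB ({sw.percent}% used)")
```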


Otherwise, I didn't experience any RAM usage out of the ordinary.

Htogrom
I am encountering the same problems. When loading a bunch of USD file caches for cloth, Solaris tries to consume large amounts of RAM. It works fine on 128 GB machines but crashes on 64 GB ones. Once the render actually kicks in, it consumes less than 64 GB of RAM; it's just the start that is problematic.

I assume you simply ran out of RAM the normal way. If the scene is too big, there is no way around using "flush each frame".