HARS.exe still growing after cooking HDAs sequentially

Member
21 posts
Joined: March 2019
Offline
Hi all,

I am investigating an out-of-memory crash and found that it is actually not our engine but HARS.exe that keeps growing, eventually causing the crashes. This is what we are doing:

- We have a set of e.g. 100 cooks of the same HDA, differing only in their input data, planned and called sequentially from our engine using HAPI.
- We remove all assets from the session after each cook so that the session is fresh for the next one.
- We share one session, as initializing it is quite costly.
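
Roughly, each iteration of our loop looks like the sketch below (simplified C++, not our actual code - the operator name, the InputData type and the input-upload step are placeholders, and I'm assuming a blocking cook):

    #include <HAPI/HAPI.h>
    #include <vector>

    struct InputData { /* per-job input payload (placeholder) */ };

    void CookAllJobs(const HAPI_Session &session, const std::vector<InputData> &jobs)
    {
        for (const InputData &job : jobs)
        {
            (void)job; // the real code uploads this job's geometry below

            // Instantiate the HDA for this job.
            HAPI_NodeId asset_node = -1;
            HAPI_CreateNode(&session, -1, "my_namespace::my_generator",
                            nullptr, /*cook_on_creation=*/false, &asset_node);

            // Create an input node and upload the job's geometry (details omitted),
            // then connect it to the asset.
            HAPI_NodeId input_node = -1;
            HAPI_CreateInputNode(&session, &input_node, "job_input");
            // ... upload geometry, connect input_node to asset_node ...

            // Cook and read back the results (status polling omitted).
            HAPI_CookNode(&session, asset_node, nullptr);
            // ... extract the output geometry ...

            // Delete everything again so the session is "fresh" for the next job.
            HAPI_DeleteNode(&session, asset_node);
            HAPI_DeleteNode(&session, input_node);
        }
    }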

Is there a command we are missing to clear the intermediate data that HARS.exe is keeping?

Thanks!
Kind regards
Tom
Staff
534 posts
Joined: Sept. 2016
Online
Hi Tom,

Have you checked that your HE integration is cleaning up all the input data properly?
(i.e. the asset nodes, but also the nodes created for the input data, etc.)
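
For example, input nodes created with HAPI_CreateInputNode are separate nodes and are not removed when you delete the asset node, so something along these lines is needed each iteration (simplified sketch, only checking input 0):

    #include <HAPI/HAPI.h>

    void CleanupIteration(const HAPI_Session &session, HAPI_NodeId asset_node)
    {
        // Find whatever node is still wired into the asset's first input.
        HAPI_NodeId connected_input = -1;
        HAPI_QueryNodeInput(&session, asset_node, 0, &connected_input);

        // Delete the asset node itself.
        HAPI_DeleteNode(&session, asset_node);

        // Delete the separately created input node too - it is not removed
        // along with the asset node.
        if (connected_input >= 0)
            HAPI_DeleteNode(&session, connected_input);
    }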

You could try inspecting what's happening in two ways:
- manually save the internal hip file before the crash (e.g. after 50 iterations) to check for “leftovers”
- connect to a “live” Houdini by using the Houdini Engine Debugger
(in Houdini: Windows > Houdini Engine Debugger, then start a session that matches your integration's settings)

(for the debugger, the docs [www.sidefx.com] have some extra details on how to use it)
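
Roughly, the two approaches look like this (sketch only - the output path and the pipe name are examples, the debugger's default pipe name is usually “hapi”, and the exact session-creation signature depends on your HAPI version):

    #include <HAPI/HAPI.h>
    #include <string>

    // 1) Periodically dump the internal scene to a .hip file to look for leftovers.
    void MaybeSaveDebugHip(const HAPI_Session &session, int iteration)
    {
        if (iteration % 50 == 0)
        {
            const std::string path =
                "C:/temp/debug_" + std::to_string(iteration) + ".hip";
            HAPI_SaveHIPFile(&session, path.c_str(), /*lock_nodes=*/false);
        }
    }

    // 2) Instead of starting your own HARS session, connect to a live Houdini
    //    that has a named-pipe session running in the Houdini Engine Debugger.
    bool ConnectToDebugger(HAPI_Session &session, const char *pipe_name)
    {
        return HAPI_CreateThriftNamedPipeSession(&session, pipe_name)
               == HAPI_RESULT_SUCCESS;
    }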
Edited by dpernuit - Oct. 29, 2019 13:25:19
Member
21 posts
Joined: March 2019
Offline
Hi dpernuit,

thanks, I did not know about the Houdini Engine Debugger, I will certainly try that!

As for the other suggestion - I am actually already saving the hip file on every iteration, so I know that all the nodes and assets are cleaned up.

I will try the debugger and update this once I know more, thanks!

Cheers
Tom
Member
21 posts
Joined: March 2019
Offline
Hi dpernuit,

so I have tried the following methods, none of which shows an increasing amount of data in my session:

- dumping to a hip file after every iteration
- dumping to a hip file after every iteration with locked nodes (HAPI_SaveHIPFile(session, file_path, lock_nodes = true);)
- checking the session state with the Houdini Engine Debugger

In all cases, the hip file size and the content of the session were stable, not increasing. HARS.exe was, however, still growing for me.

Any ideas?
Thank you
Tom
Member
2 posts
Joined: May 2006
Offline
tpastyrik2k
Hi dpernuit,

so I have tried the following methods, none of which shows an increasing amount of data in my session:

Tom

I have a similar problem when using HAPI: memory is not being released. I can see the memory being used with each cook, but cannot find a way to release it easily. The only way I have found to release it so far is to close the session and open a new one, but as Tom says, this is costlier than I would like. It is not a small amount of memory: with my current dataset it is over 20 GB not being released per session run, and it is enough of a spike to limit what I can do.

I have tried clearing the SOP cache (and all the other caches) on the fly, and deleting all created nodes, with no success. Clearing the SOP cache did make some difference, but not a significant one.
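
For reference, this is roughly how I was clearing the caches on the fly (sketch only - I'm assuming that clamping HAPI_CACHEPROP_MAX to 0 forces the cache to cull, which may not be exactly what that property is meant for):

    #include <HAPI/HAPI.h>
    #include <string>
    #include <vector>

    void ShrinkAllCaches(const HAPI_Session &session)
    {
        int cache_count = 0;
        HAPI_GetActiveCacheCount(&session, &cache_count);

        std::vector<HAPI_StringHandle> handles(cache_count);
        HAPI_GetActiveCacheNames(&session, handles.data(), cache_count);

        for (HAPI_StringHandle h : handles)
        {
            // Resolve the string handle to the cache name (e.g. "SOP Cache").
            int len = 0;
            HAPI_GetStringBufLength(&session, h, &len);
            std::string name(len, '\0');
            HAPI_GetString(&session, h, &name[0], len);

            // Clamp the cache's maximum size; assumption: this makes it cull
            // its entries.
            HAPI_SetCacheProperty(&session, name.c_str(), HAPI_CACHEPROP_MAX, 0);
        }
    }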

thanks,
grant
Member
571 posts
Joined: May 2017
Offline
Could both of you describe the nodes and any tools you are using within your HDAs? Perhaps there is a particular node/asset that is leaking memory somewhere, and getting a list from both of you can help narrow it down.

Also does the same tool cause Houdini memory to grow as well?
Edited by seelan - Nov. 4, 2019 14:06:56
Member
2 posts
Joined: May 2006
Offline
seelan
Could both of you describe the nodes and any tools you are using within your HDAs? Perhaps there is a particular node/asset that is leaking memory somewhere, and getting a list from both of you can help narrow it down.

Also does the same tool cause Houdini memory to grow as well?

I set up a simple test to demonstrate - file_read->pack->unpack->attribWrangle->null.

The file read loads a bgeo, the wrangle adds a width attribute, and the pack/unpack is just there because I usually unpack (though this particular bgeo isn't packed).

Cooking that null in a loop and extracting some data from the geometry does not leak memory; repeated cooks do not increase memory use beyond what my use of the extracted data accounts for.

If I split that wrangle out to 10 nulls (each connected to the wrangle) and cook each one, it then uses additional memory that won't seem to release - but it is released when I close the session (I'm printing the used memory before/after to stdout).


Exploring this a little further in my search for how to release the memory, I tried turning off the ‘copyinput’ parm of each cooked null after I was finished with it and cooking it again before moving on - that cooks to nothing and released the memory: the before/after session-close memory report was very similar, and I was about 5 GB down from my previous peak in this test.
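
In code the workaround is basically this (sketch - ‘copyinput’ is the toggle on the Null SOP, and I'm assuming a blocking cook):

    #include <HAPI/HAPI.h>

    void ReleaseNullOutput(const HAPI_Session &session, HAPI_NodeId null_node)
    {
        // Turn off 'copyinput' so the null no longer copies its input geometry.
        HAPI_SetParmIntValue(&session, null_node, "copyinput", 0, 0);

        // Re-cook the node so it now holds no geometry and (apparently)
        // lets the session release that memory.
        HAPI_CookNode(&session, null_node, nullptr);
    }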

This seems like it might be something of a workaround in the meantime: it is improving my memory use in a far more complex setup, and while there is still some memory that isn't released until the session is closed, it is significantly less.

I don't believe this is causing any issues in a UI session of Houdini.


thanks,
grant
Edited by pooverfish - Nov. 7, 2019 09:05:43
Member
21 posts
Joined: March 2019
Offline
Hi guys,

for me it is very hard to say - we have a lot of HDA generators with big networks inside, and it is happening for all of them. I will try to put together a sample hip for you, maybe with locked inputs, if that helps.

Our plan for now is to kill the session and start a new one whenever HARS.exe is using too much memory after a generation, which is not nice but should unblock us (at the price of reinitializing the session and reloading all the assets => more time...).
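
Something like this, roughly (sketch - the memory query and the re-initialization are placeholders for our engine-specific code, and the threshold is arbitrary):

    #include <HAPI/HAPI.h>
    #include <cstdint>

    std::uint64_t QueryHarsMemoryUse();                          // placeholder, host-specific
    void RecreateSessionAndReloadAssets(HAPI_Session &session);  // placeholder

    void MaybeRecycleSession(HAPI_Session &session, std::uint64_t max_bytes)
    {
        if (QueryHarsMemoryUse() > max_bytes)
        {
            // Tear down the server-side state and the session itself...
            HAPI_Cleanup(&session);
            HAPI_CloseSession(&session);

            // ...then pay the cost of re-initializing and re-loading the HDAs.
            RecreateSessionAndReloadAssets(session);
        }
    }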

Thanks
Tom