MartybNz Out of interest are there particular nodes in simulations that benefit more than others with this?
Sorry, I missed this one. Er, I'm not sure, to be honest. The patterns that the old TBB allocator had problems with were situations in which one thread would allocate memory that another thread would end up freeing.
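For anyone curious what that pattern looks like in practice, here is a minimal, illustrative Python sketch of a producer thread allocating buffers that a different consumer thread ends up freeing (all names here are made up for the example; the point is only the shape of the allocation pattern, not Houdini's actual internals):

```python
import queue
import threading

# Producer allocates buffers on one thread; consumer drops the last
# reference (i.e. frees them) on another thread. Under a per-thread
# caching allocator like the old tbbmalloc, memory freed on the
# consumer thread could sit in that thread's cache instead of being
# reused by the producer, so the process footprint keeps growing.
q = queue.Queue()

def producer(n):
    for _ in range(n):
        q.put(bytearray(1 << 20))  # allocate a 1 MiB buffer on this thread
    q.put(None)                    # sentinel: no more buffers

def consumer(count):
    while True:
        buf = q.get()
        if buf is None:
            break
        count[0] += 1
        del buf  # last reference dropped here, on the *consuming* thread

count = [0]
t1 = threading.Thread(target=producer, args=(8,))
t2 = threading.Thread(target=consumer, args=(count,))
t1.start(); t2.start()
t1.join(); t2.join()
print(count[0])  # -> 8 buffers allocated on one thread, freed on another
```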
Having the same issues on Windows as everybody else, and wanting to get into large-scale FLIP simulations, I decided to install Ubuntu 14.04 LTS alongside my Windows 7 installation, thinking this would be the key to solving all my problems. It took me a few days to get my head around it all, and finally I could start a simulation that I had already tested on Windows.
A simulation with a constant particle count of about 40 to 46 million starts out occupying about 20 GB of my 48 GB of RAM, fills up the memory completely within about 100 frames, and crashes Houdini. (I did not activate a swap partition because I thought I would not need it for now.) On Windows the RAM also fills up, but to a far lesser extent than on Linux: with the exact same sim, it started at 15 GB and went up to 35 GB by frame 160.
I also noticed that the .sim files written out on Linux are bigger than the ones written out on Windows, and really the only thing I changed in the Houdini file is the location of the Explicit Cache.
I have already set the Cache Memory to 0 and enabled the Explicit Cache. As I am totally new to Linux, I'm sure I'm missing some settings.
I'm sorry, I know it is an old subject… but it seems that I'm having this kind of issue with H14 on Windows 7, with the Pyro solver. I only have 16 GB of RAM; the cache size is set to 0 and I have turned on ‘caching to disk’. My simulation works well, but when it reaches 16 GB of RAM it becomes too slow…
Is there a way to avoid that? Is Windows 10 a solution?
I have noticed this behaviour in 14.0.346 as well, where:
- I run a pipeline once, which consumes about 7 GB when done (and Houdini holds on to those 7 GB after completion; I presume because the TBB malloc wants to keep the memory allocated)
- starting a new scene does NOT release the memory
- after the new scene sits idle for a while (I do not know exactly how long, but a long time), the memory is released
- closing houdini releases the memory
- starting a new instance of the pipeline and running it does NOT release the memory
It's a bit of a scary behaviour, as we are trying to run our pipelines on farm machines without a ton of RAM, so this is becoming quite limiting.
Does the 12.5 TBB DLL hack still work? Or is there a different way to address this issue (any way to force the memory to be released)?
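Not a real fix, but for what it's worth: on Linux you can at least ask glibc's allocator to hand free heap pages back to the OS via `malloc_trim`, which is callable from Python (e.g. inside hython) through ctypes. A minimal sketch, with the caveats that the helper name is mine, `malloc_trim` is a glibc extension (so this does nothing on Windows), and it will not touch memory that is being held by tbbmalloc rather than the default glibc allocator:

```python
import ctypes
import ctypes.util
import platform

def try_release_heap():
    """Best-effort: ask the C allocator to return free pages to the OS.

    malloc_trim(0) is a glibc extension, so this only does anything on
    Linux with the default glibc malloc. If the process routes its
    allocations through tbbmalloc, that heap is not affected.
    Returns True if trimming was attempted, False otherwise.
    """
    if platform.system() != "Linux":
        return False
    libc_name = ctypes.util.find_library("c")
    if libc_name is None:
        return False
    libc = ctypes.CDLL(libc_name)
    if not hasattr(libc, "malloc_trim"):
        return False
    libc.malloc_trim(0)  # 0 = keep no extra padding at the top of the heap
    return True

print(try_release_heap())
```

Whether this helps in your case depends on which allocator Houdini is actually using for the memory you see stuck; if it is the TBB allocator, the call above will return True but have no visible effect.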