Full Version: Caching simulation ram issue
Root » PDG/TOPs » Caching simulation ram issue
I am caching a volume simulation in PDG with a couple of wedges. When I run it "out-of-process", RAM usage keeps increasing during the simulation, which doesn't happen when I run it "in-process". I would prefer to use "out-of-process", but this way I run out of RAM over time.

Any tips? Is this normal behavior? I am using build 435, so could it be a bug?

Any advice would be welcome.
From what you're describing, I would assume that your "out-of-process" tasks are running concurrently.

In the Local Scheduler node you can limit the number of concurrent tasks based on how much RAM one task uses (which you have to measure yourself).
E.g. if you know that one task requires no more than 32GB and you have 128GB, you can limit the number of concurrent tasks to 3, leaving some headroom for the OS and other processes.
The parameters responsible for concurrency are "Total Slots", "Slots Per Work Item", and "Single"; you can read more about them in the documentation.
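The rule of thumb above can be sketched as a small calculation (a sketch only: the function name, the 32GB OS reserve, and the integer-division rule are my assumptions, not scheduler behavior):

```python
def max_concurrent_tasks(total_ram_gb, per_task_ram_gb, reserve_gb=32):
    """Rule-of-thumb slot count: fit whole tasks into the RAM left
    after reserving some for the OS and other processes."""
    usable = total_ram_gb - reserve_gb
    return max(1, usable // per_task_ram_gb)

# The example above: 128GB total, ~32GB per task -> 3 concurrent tasks.
print(max_concurrent_tasks(128, 32))  # 3
```

You would then enter the resulting number as the scheduler's total slot count.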

Additionally, you can reduce the memory footprint by optimizing your scene, e.g. setting the unload behaviour on some nodes and reducing the RAM cache limits of your DOP networks, if you haven't done so already.

Also, there's a RAM hard-limit setting in the scheduler, but it simply kills any subprocess that exceeds it (which can still be useful).
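To illustrate what a hard RAM limit does in general (this is not the scheduler's implementation, just a POSIX sketch using Python's standard `resource` module; the limit value and snippet are made up), a child process that exceeds its address-space cap simply fails:

```python
import resource
import subprocess
import sys

LIMIT_BYTES = 256 * 1024 * 1024  # hypothetical 256 MB cap


def run_with_memory_cap(code, limit=LIMIT_BYTES):
    """Run a Python snippet in a child process with a hard
    address-space limit (RLIMIT_AS). Returns the child's exit code."""
    def set_cap():
        resource.setrlimit(resource.RLIMIT_AS, (limit, limit))
    return subprocess.run([sys.executable, "-c", code],
                          preexec_fn=set_cap,
                          stderr=subprocess.DEVNULL).returncode


# A child that tries to allocate ~1 GB dies under the 256 MB cap,
# exiting with a nonzero code instead of eating the machine's RAM.
rc = run_with_memory_cap("x = bytearray(1024**3)")
print("child exit code:", rc)
```

The scheduler's hard limit behaves in the same spirit: the offending work item is killed rather than allowed to exhaust the machine.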
Thanks for the feedback.

And yes, I am using "Single" on the Job Parms tab, and in the scheduler tab I am using "Equal to CPU Count Less One" so each task can use all processors. Should I reduce it to one?
Would that reduce the processing power available to multithreaded tasks, or even to single-threaded tasks?