Run partitions sequentially
Eyecon
Hello,
I have a relatively simple PDG network driving a pyro sim using wedges. In the network, I partition the various wedges based on wedge index and feed the output of the partition node into a ROP geometry output that runs and caches the sims. My main issue is that when I have a larger number of wedges, I run out of RAM because all the sims run at the same time. However, I have some downstream render and image processing nodes that I'd like to run in parallel as their frames become available. Is there a way to force the simulations (the ROP Geometry node) to run sequentially rather than in parallel, while leaving everything else as is?
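For reference, here is roughly what the network looks like as a minimal Python sketch (node type and parm names such as "wedgecount" and "soppath" are my approximations, so verify them against your build):

import hou

# Build the TOP network described above: wedges -> partition by wedge index -> ROP geometry cache.
topnet = hou.node("/obj").createNode("topnet", "pyro_wedges")

wedge = topnet.createNode("wedge", "wedge_sims")
wedge.parm("wedgecount").set(8)  # number of pyro variations

partition = topnet.createNode("partitionbyindex", "per_wedge")
partition.setFirstInput(wedge)

rop_geo = topnet.createNode("ropgeometry", "cache_sims")
rop_geo.setFirstInput(partition)
rop_geo.parm("soppath").set("/obj/pyro_sim/OUT_CACHE")  # hypothetical path to the sim SOP

topnet.layoutChildren()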
chrisgreb
Yes, you can add the “Single” job parm for the Local Scheduler to the ROP Geometry node.
Eyecon
Yeah, I saw that, but does that mean I have to keep the default scheduler for everything I still want to run in parallel, and create a separate local scheduler with “Single” enabled just for the ROP geometry?

I also wonder if there is a way to specify the total number of input jobs that a given scheduler processes at once; that way I could choose 1 (single) or a specific number depending on my system resources.
chrisgreb
You don't need another local scheduler. Just drag that job parm onto the ROP Geometry node. When you toggle it on, only the work items on that node will be affected by the single flag.
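If you prefer to set it from Python once the job parm exists on the node, something like this works (the spare parm name "local_single" is my guess for what the drag creates; check the actual name in the node's parameter interface):

import hou

# Toggle the Local Scheduler's "Single" job parm override on the ROP Geometry TOP node
# so only that node's work items cook one at a time.
rop_geo = hou.node("/obj/pyro_wedges/cache_sims")  # hypothetical node path

single = rop_geo.parm("local_single")  # assumed spare parm name created by the drag
if single is not None:
    single.set(1)  # work items on this node now run sequentially
else:
    print("Add the Single job parm to the node first by dragging it from the scheduler.")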
Eyecon
Thank you. Is there any way to specify the exact number of input jobs?
chrisgreb
Yes. Using the “Slots Per Work Item” job parm you can tweak how many work items run concurrently.
https://www.sidefx.com/docs/houdini/nodes/top/localscheduler.html
Eyecon
Thanks Chris. I thought this only limits the number of CPUs per work item, not the number of work items per node. Am I confused as to what “slots” means? I understand that the work items in my case are the input jobs (from the wedge partitions) that I'm trying to limit, but I want all system resources to be available for those work items. I was just asking whether, instead of running a single work item at a time, I could run say exactly two or three at a time.

When I tested Single in the setup described above, it ran the simulations one at a time as expected, but the system was mostly idle in the meantime (even though all the other downstream TOP nodes were running in parallel). I want to tell my ROP geo to cache a maximum of 3 sims at a time, for example, because beyond that I run out of RAM.
chrisgreb
Slots are an abstraction of available compute resources. There is a separate job parm, “Houdini Max Threads”, which controls multithreading within individual Houdini jobs.

If you want at most 2 ROP Geo jobs to run concurrently, you can, for example, set “Total Slots” to 10 and “Slots Per Work Item” to 5. If you want some extra slots left over, you can balance it by setting “Total Slots” to 14; then even when 2 ROP Geo jobs are running there will be 4 slots left for other work.
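Here is that arithmetic spelled out, just to make the balance explicit:

# Slot budget from the example above: 14 total slots, 5 slots per ROP Geometry work item.
total_slots = 14        # Local Scheduler "Total Slots"
slots_per_sim = 5       # "Slots Per Work Item" override on the ROP Geometry node

concurrent_sims = total_slots // slots_per_sim                 # 2 sims cook at once
spare_slots = total_slots - concurrent_sims * slots_per_sim    # 4 slots left for other nodes
print(concurrent_sims, spare_slots)  # -> 2 4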
Eyecon
Understood, but I guess this process may not be deterministic for my setup, because I want all 63 cores available to the scheduler to be used for the network in general. Since I set up the ROP geo node for simming, it's only using 2 slots per sim at any given point in time. So the scheduler's total slots divided by 2 for the ROP geo node would determine the number of concurrent sims, if I'm understanding your explanation correctly.
chrisgreb
That's correct. In general it won't be deterministic because often there are multiple work items ready to be cooked with the same priority.