"Single" flag on scheduler does not work
toadstorm
Using Houdini 17.5.293 on Windows 10.

I have about a dozen workitems in a TOP network, and I'm trying to troubleshoot some unexpected failures, so I wanted to execute just one workitem at a time. The “single” parameter on the localscheduler seems like it would be the right option to use, but all of the workitems are still processing simultaneously. Likewise, I tried the “Single” parameter in the TOP's scheduler overrides section, and it does nothing. Does this flag actually work? Am I misinterpreting how it's supposed to operate?
chrisgreb
That should work - I can't reproduce the problem here. The only caveat is that if only one node has the single flag, that mode will only be active as long as that node is executing items.

Can you try it on a simple graph with just a genericgenerator and see if it works in that case?
toadstorm
Yeah, it just executes everything simultaneously. Attaching my file… maybe I'm using this wrong, but I'd have thought that the workitems should execute sequentially if that flag is enabled.
chrisgreb
Thanks, the single flag only applies to work items that are actually scheduled with the local scheduler (as separate processes). You are using an in-process pythonscript node, which is handled by PDG's internal scheduling. There's an existing RFE to expose some control over that internal scheduler; I will add this ‘single’ mode to that RFE.

In the meantime, a workaround for you is to put the node inside a For-Loop block and enable Iterations from Upstream Items; this will force the items to execute sequentially.
toadstorm
Thanks Chris!

Is there a detailed explanation somewhere that describes the difference between PDG internal scheduling and the local scheduler? I'll admit I had no idea that there was a distinction between the two.
RichardFr
Are there any updates about the progress or the priority of this RFE?
I have multiple in-process Python Script nodes in my setup that should not be executed simultaneously. The For-Loop workaround doesn't work in this case.
chrisgreb

No progress - it's still on the RFE list.

If they're all in-process, another standard workaround is to use a global lock so that only one script executes the critical section at a time.

For example, you can use a single upstream Python Script work item to create the lock and store it on a module object like this:
import threading
# stash one lock on the shared threading module so downstream in-process scripts can find it
threading.__mylock = threading.Lock()

then each of your parallel scripts can do their work protected by it:

import threading
# the shared lock created upstream serializes this section; the backtick
# `@pdg_index` expression expands to the current work item's index
with threading.__mylock:
    print("HI from " + str(`@pdg_index`))
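For reference, here's a minimal standalone sketch of the same pattern in plain Python (outside Houdini; do_critical_work is a hypothetical stand-in for each script, and a plain integer replaces the @pdg_index expansion). The threads all start concurrently, but the shared lock makes them run the critical section one at a time:

import threading
import time

# one-time setup, equivalent to the upstream work item above
threading.__mylock = threading.Lock()

def do_critical_work(index):
    # every thread starts right away, but the shared lock means only
    # one of them is inside this block at any given time
    with threading.__mylock:
        print("HI from " + str(index))
        time.sleep(0.1)  # stand-in for real work

threads = [threading.Thread(target=do_critical_work, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()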
RichardFr
Thanks Chris, that works perfectly.