PDG/TOPs » Wait for all inside a mayaserver - hanging
liorbenhorin
Hey,

New to PDG here!

I'm attempting to use it for some Maya file processing, in which I hope to process a simple scene,
extract some 'assets' from it, make some tweaks to them, then save the Maya scene.

All goes well, except that when I use a Work Item Expand node to work on each asset, I can't seem to use the Wait for All node to consolidate back and save my scene.

The execution gets stuck at the Wait for All node.
If I then click it and 'Cook Selected', the process completes.

I might be getting the idea of the command server all wrong, so if there are better ways to achieve my goal I would appreciate hearing them!


Attached is an example hip file and the maya file linked in it.
topnet is at /tasks/topnet1


Houdini 18.5.462
OSX 11.2.1
liorbenhorin
I think I solved this using a Partition by Attribute node instead of the Wait for All, set to upstream static work items and looking at the distinct values of the 'loopnum' attribute.

Still would love to hear other ways or validation that this is how this should be done!
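For anyone skimming, the grouping that Partition by Attribute performs can be sketched in plain Python. This is just the conceptual logic, not the actual PDG API; the dicts standing in for work items and the 'loopnum' attribute mirror the setup described above:

```python
from collections import defaultdict

# Hypothetical stand-ins for PDG work items: each carries an attribute dict.
work_items = [
    {"name": "asset_A", "loopnum": 0},
    {"name": "asset_B", "loopnum": 0},
    {"name": "asset_C", "loopnum": 1},
]

def partition_by_attribute(items, attr):
    """Group work items by the distinct values of one attribute,
    mimicking what a Partition by Attribute node does internally."""
    partitions = defaultdict(list)
    for item in items:
        partitions[item[attr]].append(item)
    return dict(partitions)

partitions = partition_by_attribute(work_items, "loopnum")
for value, members in sorted(partitions.items()):
    print(value, [m["name"] for m in members])
```

Each distinct 'loopnum' value yields one partition, so a downstream node sees one consolidated item per group instead of one per asset.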
nimnul
Hi,

To do that you don't actually need a partitioner, as the Command Server End is itself a partitioner and does it for you.
I tweaked your example file a bit to illustrate it, hope it helps.


To ramble a bit about more complex cases: there are situations where separating your info extraction and the actual work into two sequential command chains would be more robust and easier to debug, especially when those are executed on the farm and the cost of firing up two Maya processes is less of a concern.

In such a case, the first command chain would accumulate all sorts of info about a given Maya file and store it as attributes. Then you'd split or partition the items, adjust the attribute values if necessary, and lastly feed all those nice juicy work items into one more command chain that performs the heavy-lifting type of activity (export, render, etc.).
One advantage of such an approach is that you'd catch most of the trouble at a relatively low-cost stage and hopefully make the heavy lifting run more smoothly.
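A minimal plain-Python sketch of that two-phase idea (the names extract_info and heavy_work are hypothetical placeholders for what the two command chains would do, not PDG or Maya API calls):

```python
# Phase 1: cheap info extraction -- annotate each scene with attributes.
scenes = ["shot010.ma", "shot020.ma"]

def extract_info(scene):
    # Hypothetical stand-in for opening the file in the first Maya
    # command chain and querying its contents.
    return {"scene": scene, "asset_count": 3, "valid": True}

# Phase 2: heavy lifting, run only on items that passed the cheap stage.
def heavy_work(item):
    # Hypothetical stand-in for the expensive export/render chain.
    return f"exported {item['asset_count']} assets from {item['scene']}"

annotated = [extract_info(s) for s in scenes]
results = [heavy_work(i) for i in annotated if i["valid"]]
print(results)
```

The point of the split is visible in the filter between the phases: bad files fail during the cheap annotation pass, so the expensive pass only ever sees validated items.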

Does it make sense?

Pavel
liorbenhorin
Hey,
Maybe my initial question was too confusing; I was trying to achieve a process that would (simplified):

1. get some sets of objects (characters, props, lights)
2. process each group (maybe put it in a render layer)
3. when all the groups are processed, save the scene as a new version

So without a partitioner before the Command Server End, my 'save scene' step would get executed multiple times (once per upstream work item).
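The fan-out/fan-in problem can be illustrated in plain Python (a conceptual sketch only; the lists stand in for PDG work items, not any real TOPs API):

```python
groups = ["characters", "props", "lights"]

def process(group):
    # Hypothetical per-group work, e.g. building a render layer.
    return f"{group}:processed"

# Fan-out: one work item per group.
processed = [process(g) for g in groups]

# Without a partitioner, a downstream 'save scene' node generates one
# work item per upstream item, so the save runs once per group:
saves_without_partition = ["save" for _ in processed]

# A partitioner merges all upstream items into a single partition,
# so the save runs exactly once, after everything is processed:
partition = list(processed)      # one item holding all results
saves_with_partition = ["save"]  # single downstream save

print(len(saves_without_partition), len(saves_with_partition))
```

With three groups, the unpartitioned version would save the scene three times, while the partitioned version saves once at the end, which is the behavior wanted here.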

This is why *I think* the prior-to-save partitioner is required; see attached screenshot.
nimnul
Ah, I see. I cannot think of a better way to do that if you want your objects to be separated into different work items.

There is also a way to do it without expanding work items, which might be handy in some cases:
liorbenhorin
Cool! Thanks!