chrisgreb
No, there's no way to specify that in TOPs. However most farm schedulers have a job specification option for ‘max runtime’ (and sometimes ‘min runtime’). We could expose that for HQueue and Tractor for example. For the local scheduler this would be an RFE that would require a bit more work.
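For illustration, here is a minimal, hedged sketch of what a ‘max runtime' limit looks like when enforced by a wrapper rather than the scheduler itself. This is plain Python, not any actual HQueue/Tractor/PDG API; the function name and return shape are made up for the example.

```python
import subprocess
import sys

def run_with_max_runtime(cmd, max_seconds):
    """Run a work-item command, killing it if it exceeds max_seconds.

    Hypothetical helper: returns (succeeded, timed_out).
    """
    try:
        result = subprocess.run(cmd, timeout=max_seconds)
        return (result.returncode == 0, False)
    except subprocess.TimeoutExpired:
        # subprocess.run() kills the child before raising, so the
        # stuck task doesn't keep holding a core.
        return (False, True)

# A command that would run for 10 seconds is cut off after 1 and
# reported as failed, which is roughly what a 'max runtime' job
# option on a farm scheduler does for you.
ok, timed_out = run_with_max_runtime(
    [sys.executable, "-c", "import time; time.sleep(10)"],
    max_seconds=1)
```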
This feature would make a lot of sense, especially for the local scheduler, which is used by people with limited resources (i.e. few cores at their disposal). A single core stuck forever cooking a work item has a much bigger negative impact on the final result in a single-computer PDG session than on a farm.
I hope you will consider implementing it; keeping the user in control of the performance flow seems to fit the nature of PDG.
If the workitem task crashes that's a perfectly ‘normal’ way for a workitem to fail. It will be marked as failed and so all downstream dependencies will also fail or not be generated. The topnet will still keep cooking until there's no more other work to be done.
That's good to know, and it made me think of a very dirty workaround for point 1.
What if we make the workitem actually fail after some time has passed?
We could put a Python node upstream with a timer that, after 30 seconds, checks a boolean attribute value on the final downstream node.
The bool is set to 0 by default, and the last node would switch it to 1.
If the timer still sees it at 0 after 30 seconds, it would crash Houdini.
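The watchdog idea above can be sketched in plain Python with `threading.Timer`. Everything Houdini-specific is stubbed out: `is_done` stands in for reading the boolean attribute on the final node, and `on_timeout` stands in for whatever would kill Houdini, so the names here are assumptions for illustration only.

```python
import threading

def start_watchdog(timeout_seconds, is_done, on_timeout):
    """Fire on_timeout() if is_done() is still False after the timeout.

    is_done: callable standing in for reading the downstream bool attribute.
    on_timeout: callable standing in for crashing/killing the session.
    """
    def check():
        if not is_done():
            on_timeout()
    timer = threading.Timer(timeout_seconds, check)
    timer.daemon = True  # don't block interpreter shutdown
    timer.start()
    return timer

# Demo: the flag never flips to True, so the timeout callback fires.
done_flag = {"done": False}
fired = []
t = start_watchdog(0.1, lambda: done_flag["done"],
                   lambda: fired.append(True))
t.join()  # in the demo we just wait for the timer thread
```

If the final node does flip the flag before the timer fires, `check` sees `is_done()` return True and does nothing, which matches the workaround as described.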
I just checked that it's actually possible to delay the reading of an attribute on a downstream node.
Now the question: what's the simplest and safest way to crash Houdini with Python?
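One generic option, independent of Houdini: `os._exit(code)` terminates the current process immediately, skipping `atexit` handlers, `finally` blocks, and any exception handling in the host application, unlike `sys.exit()`, which only raises `SystemExit` and can be caught. Whether hard-exiting from inside a Houdini session is actually "safe" is a question for the devs; the sketch below just demonstrates the mechanism in a child process so the demo itself survives.

```python
import subprocess
import sys

# Run a child Python that calls os._exit(42) inside a try/except.
# The except clause never runs, showing the exit cannot be caught.
child = subprocess.run(
    [sys.executable, "-c",
     "import os\n"
     "try:\n"
     "    os._exit(42)  # hard exit: the except below never runs\n"
     "except SystemExit:\n"
     "    pass\n"],
)
exit_code = child.returncode  # 42, straight from os._exit
```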