Q: telling PDG "failed" is actually "OK"
pbowmar
Hi,

Using HQueue with Houdini 17.0.229, I have a bunch of PDG tasks on the farm and one of them failed, likely due to a crappy old machine running out of RAM.

So I just rescheduled it after disabling the old machine, and the frame finished fine.

Sadly, PDG staunchly reports that frame as Failed and won't carry on.

What to do?

Cheers,

Peter B
chrisgreb
You can set your ‘Cache Mode’ parm to ‘Read’ or ‘Automatic’ and re-cook your node. It should pick up the expected output and all your items as cooked right away.
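If you need to do the same thing from a script rather than the parm UI, something roughly like this should work (a minimal sketch; the parm name ‘pdg_cachemode’, the value mapping, and cookWorkItems() are assumptions from memory, so check your build):

> # Minimal sketch: switch a TOP node's Cache Mode and recook it so
> # existing output files get picked up as already cooked.
> # Assumes the parm is named 'pdg_cachemode' with 0 = Automatic,
> # and that cookWorkItems() exists on your build (newer releases).
> import hou
>
> top = hou.node('/obj/topnet1/ropfetch1')  # hypothetical node path
> top.parm('pdg_cachemode').set(0)          # assumed: 0 = Automatic
> top.cookWorkItems(block=True)             # recook and wait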
pbowmar
Genius!

However, what happens on a longer render that runs overnight? E.g. we have scripts or people who will fix the issue on the farm but wouldn't have access to the PDG graph to manually recook like that.

Any way to have it auto-recheck every 30 seconds or something?

Cheers,

Peter B
chrisgreb
When work items fail there's no way to make PDG try them again except by stopping the cook and starting a new cook.
But I think it would be a good RFE to have a mechanism for automatic retries during a cook.
jason_iversen
That would point back to the beta-era request to have TOPs able to continually attempt to solve the network, perhaps? I.e. this seems like a Re-Run Until Done variation on some kind of Re-Run Continually behaviour.
Andrew Graham
This ability would be useful. So far I have been using PDG in interactive sessions, where failed frames are fine if you just resubmit something that is fast to execute. But anything that takes a long time, or that submits PDG on a remote system, will need some number of retries before failing the affected task and anything downstream. We also wouldn't want to kill a simulation, for example, if a sibling task hits the max failure limit, so stopping the whole graph would be undesirable.
chrisgreb
Andrew Graham
But anything that takes a long time, or that submits PDG on a remote system, will need some number of retries before failing the affected task and anything downstream. We also wouldn't want to kill a simulation, for example, if a sibling task hits the max failure limit, so stopping the whole graph would be undesirable.

FYI, the Local Scheduler now has ‘Exit Code Handling’, which can be used to retry, and the HQueue Scheduler has a ‘retries’ job parameter that can be set.
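To picture what a retry count buys you, here's a rough stand-in in plain Python (not the schedulers' actual code, just the idea: re-run the command on a nonzero exit code, up to a limit):

> # Conceptual stand-in for scheduler retries (not SideFX code):
> # re-run a command on a nonzero exit code, up to `retries` extra tries.
> import subprocess
>
> def run_with_retries(cmd, retries=3):
>     for attempt in range(retries + 1):
>         if subprocess.run(cmd).returncode == 0:
>             return True   # succeeded on this attempt
>     return False          # still failing after all retries
>
> # hypothetical job command:
> run_with_retries(['hython', 'render_frame.py'], retries=3)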
Andrew Graham
That's good to know. So with HQueue, would it bail out on a sim if other tasks downstream are failing, or would that sim be safe to finish? It would be great to see this in Deadline too, if it isn't already there.
chrisgreb
Yes, for example if a partition contains a sim and other work items that fail before the sim is finished, the cook will carry on until all ready items are finished.
Marco_M
chrisgreb
You can set your ‘Cache Mode’ parm to ‘Read’ or ‘Automatic’ and re-cook your node. It should pick up the expected output and all your items as cooked right away.

What if we are using ROP Alembic? Like when we have a heavy mesh being exported to a single file and some frames are failing?

A workaround would be to export an Alembic sequence… but I'm not sure if there is a way to merge the files together later, or if we have to create another task just for that.
chrisgreb
If you want to ignore failed items, you can use a Filter by Expression node with a Python expression that removes the failed ones:

> pdg.workItem().state != pdg.workItemState.CookedSuccess

However, every time you cook, the failed items will be re-run.
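Conversely, if you'd rather isolate just the failed items (say, to feed them into a cleanup or notification branch), flipping the comparison should do it, going by the same filtering behaviour as above, i.e. remove the successful ones and let the failures through:

> pdg.workItem().state == pdg.workItemState.CookedSuccess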
mestela
chrisgreb
FYI Local Scheduler now has ‘Exit code handling’ which can be used to retry, and Hqueue Scheduler has a ‘retries’ job parameter that can be set.

Really need this for Tractor too!