pbowmar
April 18, 2019 14:05:19
Hi,
Using HQueue with 17.0.229, I have a bunch of PDG tasks and one of them failed, likely because a crappy old machine ran out of RAM.
So I just rescheduled it after disabling the old machine, and the frame finished fine.
Sadly, PDG staunchly reports that frame as Failed and won't carry on.
What to do?
Cheers,
Peter B
chrisgreb
April 18, 2019 14:17:06
You can set your ‘Cache Mode’ parm to ‘Read’ or ‘Automatic’ and re-cook your node. It should pick up the expected output and all your items as cooked right away.
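In script form, that workaround might look roughly like this inside a live Houdini session. Note this is only a sketch: the node path is made up, and the `pdg_cachemode` parm name, its menu values, and the `cookWorkItems()` call are assumptions that may differ between builds.

```python
# Sketch only: requires a running Houdini session with a TOP network.
import hou

# Hypothetical path to the TOP node whose items failed.
top = hou.node("/obj/topnet1/ropfetch1")

# 'Cache Mode' parm; internal name assumed to be 'pdg_cachemode'
# (0 = Automatic, 1 = Read Files, 2 = Write Files in recent builds).
top.parm("pdg_cachemode").set(1)

# Re-cook the node so PDG picks up the expected outputs on disk
# and marks the items as cooked.
top.cookWorkItems(block=True)
```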
pbowmar
April 18, 2019 15:09:32
Genius!
However, what happens on a longer render that runs overnight? E.g. we have scripts, or people, who will fix the issue on the farm but won't have direct access to the PDG graph to manually recook like that.
Any way to have it auto-recheck every 30 seconds or something?
Cheers,
Peter B
chrisgreb
April 22, 2019 08:13:42
When work items fail there's no way to make PDG try them again except by stopping the cook and starting a new cook.
But I think it would be a good RFE to have a mechanism for automatic retries during a cook.
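Peter's "auto-recheck every 30 seconds" idea can be sketched outside of PDG as a plain polling loop. Everything below is a hypothetical stand-in, not PDG or HQueue API: `task` is any callable that reports success, and the retry/poll parameters are illustrative.

```python
import time

def run_with_retries(task, max_retries=3, poll_interval=30.0, sleep=time.sleep):
    """Re-run a task until it succeeds or retries are exhausted.

    `task` is any callable returning True on success, False on failure.
    Returns the number of retries that were needed; raises if the task
    never succeeds. `sleep` is injectable so tests don't actually wait.
    """
    for attempt in range(1 + max_retries):
        if task():
            return attempt
        if attempt < max_retries:
            sleep(poll_interval)
    raise RuntimeError("task still failing after %d retries" % max_retries)
```

For example, a flaky task that fails twice and then succeeds would return `2` (two retries were used) rather than failing the whole cook.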
jason_iversen
April 22, 2019 15:32:05
That would point back to the beta-era request to have TOPs able to continually attempt to solve the network, perhaps? I.e. this seems like a Re-Run Until Done variation on some kind of Re-Run Continually behaviour.
Andrew Graham
Dec. 3, 2019 19:17:59
This ability would be useful. So far I have been using PDG in interactive sessions, where failed frames are fine if you just resubmit something that is fast to execute. But anything that takes a long time, or that submits PDG on a remote system, will need some number of task retries before failing the affected items and anything downstream. We also wouldn't want to kill simulations, for example, if a sibling task hits the max failure limit, so stopping the whole graph would be undesirable.
chrisgreb
Dec. 3, 2019 21:02:46
Andrew Graham
But anything that takes a long time, or that submits PDG on a remote system, will need some number of task retries before failing the affected items and anything downstream. We also wouldn't want to kill simulations, for example, if a sibling task hits the max failure limit, so stopping the whole graph would be undesirable.
FYI Local Scheduler now has ‘Exit code handling’ which can be used to retry, and Hqueue Scheduler has a ‘retries’ job parameter that can be set.
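The "retry on exit code" behaviour chrisgreb describes amounts to re-launching a job while it keeps exiting nonzero. A minimal stand-alone sketch of that idea (the `run_job` wrapper is hypothetical illustration, not actual scheduler code):

```python
import subprocess
import sys

def run_job(cmd, retries=2):
    """Mimic a scheduler's 'retry on nonzero exit code' handling:
    re-launch the command up to `retries` extra times before giving up.

    Returns the number of retries that were used on success,
    or None if the job still fails after all retries.
    """
    for attempt in range(1 + retries):
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return attempt
    return None
```

A job that exits 0 on its first attempt returns `0`; one that keeps exiting nonzero returns `None` once the retry budget is spent.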
Andrew Graham
Dec. 5, 2019 07:52:45
That's good to know. So with HQueue, would it bail out of a sim if other tasks downstream are failing, or would the sim be safe to finish? It would be great to see this in Deadline too, if it isn't already there.
chrisgreb
Dec. 5, 2019 09:52:42
Yes, for example if a partition contains a sim and other work items that fail before the sim is finished, the cook will carry on until all ready items are finished.
Marco_M
June 6, 2020 16:56:52
chrisgreb
You can set your ‘Cache Mode’ parm to ‘Read’ or ‘Automatic’ and re-cook your node. It should pick up the expected output and all your items as cooked right away.
What if we are using ROP Alembic? E.g. when we have a heavy mesh being exported to a single file and some frames are failing?
A workaround would be to export an Alembic sequence… but I'm not sure if there is a way to merge the files together later, or if we have to create another task just for that.
chrisgreb
June 7, 2020 15:17:04
If you want to ignore failed items, you can use a Filter by Expression node with a Python expression that removes the failed ones:
> pdg.workItem().state != pdg.workItemState.CookedSuccess
However, every time you cook, the failed items will be re-run.
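The intent of that expression can be mirrored in plain Python. The `WorkItemState` enum and `drop_failed` helper below are stand-ins for `pdg.workItemState` and the filter node, written only to show the logic; they are not the real `pdg` module.

```python
from enum import Enum

class WorkItemState(Enum):
    # Stand-in for pdg.workItemState; the real enum has more members.
    CookedSuccess = 1
    CookedFail = 2

def drop_failed(work_items):
    """Keep only items that cooked successfully, mirroring the intent of
    filtering on `state != CookedSuccess`: failed items are discarded."""
    return [w for w in work_items if w["state"] == WorkItemState.CookedSuccess]
```

So a list containing one successful and one failed item would come out with only the successful one.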
mestela
June 7, 2020 19:24:32
chrisgreb
FYI Local Scheduler now has ‘Exit code handling’ which can be used to retry, and Hqueue Scheduler has a ‘retries’ job parameter that can be set.
Really need this for tractor too!