This took quite a bit of digging to make happen, so I figured I'd leave the answer here for posterity, as usual.
First, you'll need to use a Python Scheduler instead of the regular Local Scheduler to cook things.
The defaults are all fine; just add these lines right after the imports in the onSchedule code on the Scheduling tab.
# auto-succeed if this isn't the work item we're meant to be cooking
# (use .get so local cooks without the variable set don't raise a KeyError)
if os.environ.get('PDG_ACTIVE_WORK_ITEM') != str(work_item.id):
    return pdg.scheduleResult.CookSucceeded
This makes PDG report success for any work item that isn't the one being cooked. Without this, you'd end up re-cooking all the upstream items on the farm, even if they've already been cooked by earlier jobs in the dependency chain.
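If it helps to see the gating logic outside a Houdini session, here's a plain-Python sketch of the same check (the function name and ids are made up for illustration):

```python
def should_auto_succeed(work_item_id, env):
    # mirrors the scheduler check: anything that isn't the
    # active work item reports success without actually cooking
    return env.get('PDG_ACTIVE_WORK_ITEM') != str(work_item_id)

env = {'PDG_ACTIVE_WORK_ITEM': '42'}
print(should_auto_succeed(42, env))  # False -> this item actually cooks
print(should_auto_succeed(7, env))   # True  -> upstream item auto-succeeds
```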
Be sure to update the default scheduler on your topnet.
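For reference, you can set that from Python as well. A minimal sketch, assuming your network lives at /obj/topnet1 and your Python Scheduler node is named pythonscheduler1 (adjust both to your scene):

```python
import hou

# hypothetical paths/names -- match them to your own scene
topnet = hou.node('/obj/topnet1')
# point the network's default TOP scheduler at the python scheduler
topnet.parm('topscheduler').set('pythonscheduler1')
```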
Next, to actually cook a single work item, you'll want this bit of code. The comments explain it further, but essentially graphContext.cookItems was the only function I could find that takes individual items to cook, so we use it alongside setting PDG_ACTIVE_WORK_ITEM in the environment so that everything else auto-succeeds.
import os

def cookWorkItem(node, index, block=True):
    # generate static work items for this node, which will
    # generate parents as needed
    # likely that this only really works with static work items
    node.generateStaticWorkItems(True)

    # info about the work item we're cooking
    pdgNode = node.getPDGNode()
    context = pdgNode.context
    workItem = pdgNode.workItems[index]

    # set the active work item as an environment variable
    os.environ['PDG_ACTIVE_WORK_ITEM'] = str(workItem.id)

    # use the context to cook this work item
    # our custom onSchedule function in the python scheduler
    # will skip anything that's not this PDG_ACTIVE_WORK_ITEM
    context.cookItems(block, [workItem.id], pdgNode.name)
All of this lets us run PDG jobs on the farm similar to how ROP jobs would run, but with all the great features that come with PDG.
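For completeness, here's a rough sketch of how the per-item farm jobs might be built: each task runs hython, loads the hip file, and calls cookWorkItem(). The module name, paths, and node names here are all hypothetical, and actually submitting the command is left to your farm software:

```python
def build_item_job(hython, hip_path, node_path, index):
    # hypothetical: one farm task per work item; assumes a
    # cook_single module with cookWorkItem() is on the blade's path
    script = ("import hou; hou.hipFile.load({hip!r}); "
              "from cook_single import cookWorkItem; "
              "cookWorkItem(hou.node({node!r}), {idx})").format(
                  hip=hip_path, node=node_path, idx=index)
    return [hython, '-c', script]

# one job per static work item on the node (count is illustrative)
jobs = [build_item_job('hython', '/jobs/shot.hip',
                       '/obj/topnet1/ropfetch1', i)
        for i in range(4)]
```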