I am building an HDA to manage TOP networks in a very specific way, but I cannot figure out how to reference the target TOP network. I have tried the two nodes that look like they should work: TOP Fetch and Work Item Import. When I point TOP Fetch at the TOP network I'm testing with and try to generate the work items, it just sits there processing. With Work Item Import, it cooks the target network before importing unless I tick the Generate Only box, but then I cannot get the work items to cook the target network's work items afterwards.
The HDA I am building is meant to let a user target a TOP network/node, bucket that node's work items into partitions, and then process a single partition on a remote machine, which loads the file with HQueue and executes a command. I have the partitioning working the way I want, but I cannot make the partitions within my HDA cook the work items in the external TOP network.
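For reference, the bucketing itself is just contiguous slicing. A minimal, Houdini-free sketch of that partitioning logic (the `partition` helper is hypothetical, and plain integers stand in for pdg.WorkItem objects):

```python
import math

def partition(item_ids, machine_count):
    """Split a list of work-item ids into contiguous buckets,
    one per machine (the last bucket may come up short)."""
    part_length = math.ceil(len(item_ids) / machine_count)
    return [item_ids[i * part_length:(i + 1) * part_length]
            for i in range(math.ceil(len(item_ids) / part_length))]

# Example: 10 work items over 3 machines -> buckets of 4, 4, and 2.
print(partition(list(range(10)), 3))
```

With real work items you would slice `pdgNode.workItems` the same way and collect each item's `.id` per bucket.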
In this use case, the HQueue scheduler does not seem to be an option, as the machine running the HQueue commands is inaccessible and not set up for it.
Is there a way to reference work items in a remote TOP network?
- Adam F
- Member
- 52 posts
- Joined: April 2011
- Offline
Ok, so here is some of the code I am trying to mess with in a Python Processor:
This is the onGenerate code, which works beautifully until I try to cook the work items. At that point, they all go to pdg.workItemState.Waiting and sit there. Even if I add code to the onCook function, it does not actually seem to be called when I try to cook; it just waits. What am I missing here?
import hou

# Resolve the external TOP node from the HDA's "toppath" parameter.
controlNode = hou.node("../../")
target = controlNode.parm("toppath").eval()
targetTOP = hou.node(target)
pdgNode = targetTOP.getPDGNode()

# Mirror each upstream work item into this node, tagging it with
# enough info to find the original item later.
for upstream_item in pdgNode.workItems:
    new_item = item_holder.addWorkItem(parent=upstream_item, inProcess=True)
    new_item.setStringAttrib("source_node", target)
    new_item.setIntAttrib("source_id", upstream_item.id)
Ok, so I have gotten something that is moderately useful (for anyone who finds this in the future). I have written what amounts to my own partition-and-execute script in the Python Module of my HDA, completely abandoning having PDG manage its own work items.
I am still working on a couple of bugs regarding blocking: sometimes it works, sometimes it doesn't. This is triggered from a pre-frame script, so it should be fine. Mantra renders are perfect, but geometry caches aren't blocking until the work items finish.
import math

import hou

def process(node):
    print("Starting PDG Process")
    TOPNode = node.parm("toppath").evalAsNode()
    TOPNode.generateStaticWorkItems(True)  # block until generation finishes
    PDGNode = TOPNode.getPDGNode()
    PDGContext = PDGNode.context
    partCount = int(node.parm("dataHold").eval()['machineCount'])
    partLength = math.ceil(len(PDGNode.workItems) / partCount)
    print("\t".join(f'{n}: {v}' for n, v in zip(
        ["TOPNode", "PDGNode", "PDGContext", "partCount", "partLength"],
        [TOPNode, PDGNode, PDGContext, partCount, partLength])))
    # Slice the work items into contiguous partitions of partLength.
    partList = [PDGNode.workItems[i * partLength:(i + 1) * partLength]
                for i in range((len(PDGNode.workItems) + partLength - 1) // partLength)]
    print(partList[hou.intFrame() - 1])
    # Cook only the current frame's partition, blocking until it finishes.
    PDGContext.cookItems(True, [a.id for a in partList[hou.intFrame() - 1]], PDGNode.name)
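The pre-frame hookup is just the frame number mapping straight onto a partition index. A small standalone sketch of that indexing (the `items_for_frame` helper is hypothetical, with the frame passed in rather than read from hou.intFrame()):

```python
import math

def items_for_frame(item_ids, machine_count, frame):
    """Return the contiguous slice of work-item ids that
    frame N (1-based) should cook."""
    part_length = math.ceil(len(item_ids) / machine_count)
    start = (frame - 1) * part_length
    return item_ids[start:start + part_length]

# Frame 1 cooks the first bucket, frame 3 the last (possibly short) one.
print(items_for_frame(list(range(10)), 3, 1))  # [0, 1, 2, 3]
print(items_for_frame(list(range(10)), 3, 3))  # [8, 9]
```

Cooking the frame range 1..partCount then walks every partition exactly once, which is what the pre-frame script above relies on.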