@pdg_input is the same as @pdg_input.0, which is also the same as how it works for @attribs in SOPs.
You can use @pdg_input.2 to access the value at index=2, for example. Or, you can use the HScript function pdginput(), which takes three arguments, one of which is the index: https://www.sidefx.com/docs/houdini/expressions/pdginput.html [www.sidefx.com]
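If the parameter is set to use Python as its expression language instead, the pdg module offers the same thing -- here's a minimal sketch that grabs the input at index 2 (the exact argument form of pdg.input is an assumption, so check the pdg module docs):

import pdg

# Assumed to take the input index, mirroring @pdg_input.2.
return pdg.input(2)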
PDG/TOPs » @pdg_input only returns first input
- tpetrick
- 596 posts
- Offline
PDG/TOPs » Simplest way to add a dependency in the Python TOP
- tpetrick
- 596 posts
- Offline
To add to that -- the setCommand API method sets the command line string that gets run when the work item is scheduled to cook out of process. It doesn't run immediately; it's just a string field on the work item, which is eventually run whenever the work item actually gets to cook. You can right-click on the node and select "Generate Node" to generate all tasks without running anything, and then inspect the dependencies of those work items, their command line strings, etc.
As for traveling across TOPs, you might want to look into Feedback loops if you want the dependency relationship to be maintained. Feedback blocks create tasks that behave like a traditional for loop -- each iteration cooks all the way down to the bottom of the block, before beginning the next iteration at the top of the block. Iterations also cook in serial, whereas TOPs work items normally cook in parallel.
PDG/TOPs » Simplest way to add a dependency in the Python TOP
- tpetrick
- 596 posts
- Offline
The setCommand call is just there to add a delay to the work items -- it's done using Python so that it doesn't rely on any particular shell features. It's basically just a command line string that causes each work item to sleep for 2 seconds as its "job", so the work items don't cook instantly.
PDG/TOPs » Simplest way to add a dependency in the Python TOP
- tpetrick
- 596 posts
- Offline
You don't need to call the function, you can just paste the code in for adding dependencies. PDG invokes the callbacks on the Python Processor automatically for you, when needed. I've attached an example file that creates 10 work items in a Python Processor that sleep for 2 seconds each, and are sequentially dependent on one another.
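Roughly, the generation side of that setup might look like the following (a minimal sketch, not the exact contents of the attached file -- the command string is the 2-second Python sleep mentioned above, and the sequential dependencies themselves are added in the onAddInternalDependencies callback):

def onGenerate(self, item_holder, upstream_items, generation_type):
    # Create 10 work items that each sleep for 2 seconds as their "job".
    for i in range(10):
        new_item = item_holder.addWorkItem()
        new_item.setCommand('python -c "import time; time.sleep(2)"')
    return pdg.result.Success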
Edited by tpetrick - Aug. 17, 2023 23:48:17
PDG/TOPs » Simplest way to add a dependency in the Python TOP
- tpetrick
- 596 posts
- Offline
In order to add dependencies between work items in the same node, you need to implement the Add Internal Dependencies callback on the Python Processor. The generation step can't add dependencies. The purpose of the different node callbacks is described here: https://www.sidefx.com/docs/houdini/tops/processors.html [www.sidefx.com]
For example, the implementation used by the Generic Generator looks like the following, when making work items cook in order:
def onAddInternalDependencies(self, dependency_holder, internal_items, is_static):
    if not self['sequential'].evaluateBool():
        return pdg.result.Success

    previous_item = None
    for internal_item in internal_items:
        if previous_item:
            dependency_holder.addDependency(internal_item, previous_item)
        previous_item = internal_item

    return pdg.result.Success
PDG/TOPs » top create attributearray?
- tpetrick
- 596 posts
- Offline
An expression with @attribute is the same as using @attribute.0, i.e. the first value of the array. This is behavior that's inherited from SOPs, which originally introduced the @attrib syntax. If you want to access all of the values in the array as a string, you need to use the pdgattribvals("attrib") function: https://www.sidefx.com/docs/houdini/expressions/pdgattribvals.html [www.sidefx.com]
PDG/TOPs » Changing a dropdown menu paramter via tops ?
- tpetrick
- 596 posts
- Offline
You can right-click on the Solid Type parameter and go to Expression -> Edit Expression, and then put @pdg_index into the expression editor. I've attached a simple example that generates five work items and imports a different platonic solid type from SOPs for each item. The example was created in H19.5, but the same thing should work in any version.
PDG/TOPs » Export multiple COPs textures
- tpetrick
- 596 posts
- Offline
You can point a ROP Fetch TOP node at a chain of ROPs, instead of a single ROP, and it'll render the whole chain and report the output files for each of the ROPs.
To export multiple textures you can create a chain of multiple Composite ROPs and point the ROP Fetch TOP at the last node in the chain.
PDG/TOPs » TOPs ffmpeg audio
- tpetrick
- 596 posts
- Offline
Yep, in a parameter that's configured to use Python for its expression language, you can use pdg.input(..) to do the same thing as @pdg_input. There's also pdg.workItem() which will return the pdg.WorkItem instance that's being used to evaluate the parameter. Those functions are described at the very top of the pdg module doc: https://www.sidefx.com/docs/houdini/tops/pdg/#input [www.sidefx.com]
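For example, a minimal sketch of a multi-line Python parameter expression (the file name pattern is purely illustrative):

import pdg

item = pdg.workItem()   # the pdg.WorkItem being used to evaluate this parameter
# Illustrative only: build a per-frame audio file name from the item's frame.
return "audio.{:04d}.wav".format(int(item.frame))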
Edited by tpetrick - Feb. 27, 2023 19:32:24
PDG/TOPs » TOPs ffmpeg audio
- tpetrick
- 596 posts
- Offline
@pdg_output refers to the output of the current work item itself. If you want to use the output path produced by an upstream work item, you should use @pdg_input.
PDG/TOPs » Understanding TopNode.cookOutputWorkItems()
- tpetrick
- 596 posts
- Offline
If top_only is True then PDG does not cook at all. Only the TOP nodes cook -- this generates the underlying PDG nodes that correspond to the TOP nodes, but does not create any work items.
The main use is so that you can call top_node.getPDGNode() on a TOP node that has not yet cooked, without actually creating any work items. If you put down a new TOP and call that method you'll notice it returns nothing -- that's because the TOP node hasn't cooked, and therefore has no underlying PDG node yet. Triggering a cook with top_only on that graph will ensure that all TOP nodes have a PDG node.
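A minimal sketch of that from the Python shell (the node path is a placeholder, and the top_only keyword name is taken from this discussion -- double check the hou.TopNode.cookOutputWorkItems signature in your build):

import hou

# Placeholder path to a freshly created, never-cooked TOP node.
top_node = hou.node("/obj/topnet1/genericgenerator1")
print(top_node.getPDGNode())    # None -- the TOP node hasn't cooked yet

# Cook the TOP nodes only: underlying PDG nodes get created, but no work items.
top_node.cookOutputWorkItems(top_only=True)
print(top_node.getPDGNode())    # should now return the underlying pdg.Node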
Edited by tpetrick - Feb. 22, 2023 11:00:47
PDG/TOPs » Q: submit deadline job without needing Hython on the worker
- tpetrick
- 596 posts
- Offline
The Python Script node generates one work item for each input work item, and configures that work item to run the script code you specified on the node. The Python Processor node on the other hand allows you to actually define how work items are generated, e.g. you can generate 3 work items from each input, or a random number of work items for each input work item, or some custom logic based on parameters. It's useful if you want to e.g. fan out the number of work items you have, as opposed to the Python Script, which always has a one-to-one relationship with its input.
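For example, a minimal fan-out sketch for the Python Processor's onGenerate callback that creates three work items per input (the attribute name is purely illustrative, and the setIntAttrib call should be checked against the pdg.WorkItem docs):

def onGenerate(self, item_holder, upstream_items, generation_type):
    # Fan out: three work items for every upstream work item.
    for upstream_item in upstream_items:
        for i in range(3):
            new_item = item_holder.addWorkItem(parent=upstream_item)
            new_item.setIntAttrib("variant", i)  # illustrative attribute name
    return pdg.result.Success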
As for running with hython, the Python Script node has a Python Executable section that you can use to customize which Python executable is used to run the script, if the node is running work items out of process. By default it uses hython, but you can tell it to use plain Python or a custom path instead.
PDG/TOPs » Simplest way to add a dependency in the Python TOP
- tpetrick
- 596 posts
- Offline
Processor nodes have a Pre Cook callback which might do what you need, but it isn't exposed on the Python Processor TOP: https://www.sidefx.com/docs/houdini/tops/processors.html#onprecook-pdg-result [www.sidefx.com] It's currently only available if you write a custom node, but we can easily promote it up onto the Python Processor if you think that would be useful. Please file an RFE for that.
I do have a follow up question to your use case though. What happens if you cancel the cook with half of the work items completed, and then cook the node again to trigger the remaining work items? It sounds like you'd probably NOT want to run the pre cook logic in that case, even though it's technically a new cook, otherwise it would delete the outputs produced by the work items that were already cooked. So what you'd actually need is a "pre cook, but only before any work items cook" operation.
It might be tempting to assume that's the same as running the pre cook as part of the first work item, but work items don't necessarily cook in order. The first logical work item in the node may in fact be the last one to start cooking, depending on when its input tasks actually finish. Or the first one might fail, and then be cooked a second time at a later point due to scheduler retry settings, etc.
Adding a dependency between the first item and the remaining items in the node would fix that, as you were trying to do, but it would mean that any change to the first work item's state would effectively dirty the whole node. And every other work item in the node would end up having to wait on the first work item, which could end up being the same as waiting for all inputs to cook, depending on the cook order of the input work items.
Generally speaking, PDG is designed to encourage work items that operate on local state only, i.e. the attributes and files associated with the work item. That way things in the same node can be run in parallel and in any order, once their dependencies are satisfied.
In other words, I'm not sure there's a way to do exactly what you want, depending on what's expected in the case where a node is only partially cooked and resumed. I think adding pre/post hooks or a pre/post node is a valid RFE, but the exact behavior needs to be well defined.
Edited by tpetrick - Jan. 25, 2023 10:59:35
PDG/TOPs » Simplest way to add a dependency in the Python TOP
- tpetrick
- 596 posts
- Offline
In order to do that, you need the node to generate all of its work items in one go.
If you set "Generate When" to "All Upstream Items are Cooked", then your node will generate all its work items with a single onGenerate call, once all upstream work items are cooked. It will then invoke onAddInternalDependencies once as well, on the full list of generated work items, which will allow you to add the dependencies you need.
If your node only needs the inputs to be generated in order to create its work items, then you can use "All Upstream Items are Generated" instead. But you have to use one of those options to force the node to do its generation/internal dependencies for the whole node at once.
If you set "Generate When" to "All Upstream Items are Cooked", then your node will generate all its work items with a single onGenerate call, once all upstream work items are cooked. It will then invoke onAddInternalDependencies once as well, on the full list of generated work items, which will allow you to add the dependencies you need.
If your node only needs the inputs to be generated in order to create its work items, then you can use "All Upstream Items are Generated" instead. But you have to use one of those options to force the node to do it's generation/internal dependencies for the whole node at once.
Edited by tpetrick - Jan. 24, 2023 17:37:54
PDG/TOPs » Simplest way to add a dependency in the Python TOP
- tpetrick
- 596 posts
- Offline
Node callbacks are only permitted to access the local variables passed into them. In this case, the callback is only allowed to add dependencies between the work items from the internal_items list.
This is especially important if your node is dynamic. A dynamic node generates work items each time an input work item cooks, and it might be doing that in parallel. Therefore, the list of work items on the node itself may be incomplete at any point in time.
It's important to note that onAddInternalDependencies is called each time the node generates work items, not once per node. For example, if your node is dynamic and has 10 input items, onGenerate will be called 10 times (once for each cooked input work item). onAddInternalDependencies will also be called 10 times, once for each of the lists of work items produced by the corresponding onGenerate call.
You will need to change your Generate When parameter to "All Upstream Items are Generated" or "All Upstream Items are Cooked" if you need to access all work items at the same time.
PDG/TOPs » Mantra is failing in PDG network
- tpetrick
- 596 posts
- Offline
That issue is occurring because the script PDG uses to cook the Mantra ROP is trying to set parameters on the TOP node, but your TOP network is inside of a locked asset and therefore the TOP node cannot be edited.
However, that issue was fixed in H19.0.657 and newer builds -- from your output log it looks like you're using 19.0.455.
PDG/TOPs » 19.5 longer wait time than 19.0 PDG cook time
- tpetrick
- 596 posts
- Offline
Are you able to attach your scene file/hda? For me, a simple box .hda actually cooks faster in a clean 19.5 install vs 19.0 due to faster hython start-up time -- the same HDA takes ~3.1s to cook in 19.0 and ~1.6s in 19.5.
Note that work items in the HDA Processor node run out of process by default -- this means each work item starts its own hython session that loads the .hda, cooks it, and writes its outputs to disk. Anything that impacts Houdini's startup time, e.g. loading packages, will also impact work item cook times. You can use Services [www.sidefx.com] to create a pool of pre-started worker processes to avoid that, which is especially useful for lightweight HDAs.
Edited by tpetrick - Dec. 9, 2022 13:53:32
PDG/TOPs » "While Loop", can we make TOP listen and act
- tpetrick
- 596 posts
- Offline
That code should still work, but you won't be able to trigger a cook of a TOP network from within a Python Processor that's evaluating in the same network. You'll need to run it from e.g. a button callback, shelf tool or the Python shell.
What issues/errors are you running into?
PDG/TOPs » Command line run tops wont work with mantra or ropfetch
- tpetrick
- 596 posts
- Offline
Your .hip file was last saved in H19.0, but you're loading it in an H19.5 version of Hython. It looks like there are some slight parameter differences on the Mantra node that are causing load warnings. By default, hou.hipFile.load(..) treats warnings as an error, but you can fix that by passing ignore_load_warnings=True into the load function as described here: https://www.sidefx.com/docs/houdini/hom/hou/hipFile.html#load [www.sidefx.com]
If you load your file in H19.5 and resave it that'll also clear up the warnings on future loads.
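For example (the .hip path is just a placeholder):

import hou

# Ignore version-difference load warnings instead of treating them as errors.
hou.hipFile.load("/path/to/scene.hip", ignore_load_warnings=True)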
Edited by tpetrick - Nov. 15, 2022 12:26:53
PDG/TOPs » Outputting progress
- tpetrick
- 596 posts
- Offline
If this is a Python Script TOP, you can use work_item.frame to access the frame value associated with the work item, which will be set to the same frame as the parent work item that's doing the render.
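For example (just a sketch; the message text is arbitrary):

# Runs inside a Python Script TOP; work_item is provided by the node.
frame = work_item.frame
print("Rendering frame {}".format(int(frame)))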