You can point a ROP Fetch TOP node at a chain of ROPs, instead of a single ROP, and it'll render the whole chain and report the output files for each of the ROPs.
To export multiple textures you can create a chain of multiple Composite ROPs and point the ROP Fetch TOP at the last node in the chain.
PDG/TOPs » Export multiple COPs textures
- tpetrick
- 585 posts
- Offline
PDG/TOPs » TOPs ffmpeg audio
Yep, in a parameter that's configured to use Python for its expression language, you can use pdg.input(..) to do the same thing as @pdg_input. There's also pdg.workItem(), which will return the pdg.WorkItem instance that's being used to evaluate the parameter. Those functions are described at the very top of the pdg module doc: https://www.sidefx.com/docs/houdini/tops/pdg/#input
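For instance, a parameter whose expression language is set to Python could read from the active work item like the following sketch (the commented attribute name is purely hypothetical):

```python
# Python parameter expression -- evaluated by PDG when the work
# item cooks. pdg.input(0) mirrors the @pdg_input syntax and
# returns the first input file path of the active work item.
import pdg

return pdg.input(0)

# Or grab the full pdg.WorkItem for richer access:
# item = pdg.workItem()
# return item.attribValue("someattr")  # hypothetical attribute name
```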
Edited by tpetrick - February 27, 2023 19:32:24
PDG/TOPs » TOPs ffmpeg audio
@pdg_output refers to the output of the current work item itself. If you want to use the output path produced by an upstream work item, you should use @pdg_input.
PDG/TOPs » Understanding TopNode.cookOutputWorkItems()
If top_only is True then PDG does not cook at all. Only the TOP nodes cook -- this generates the underlying PDG nodes that correspond to the TOP nodes, but does not create any work items.
The main use is so that you can call top_node.getPDGNode() on a TOP node that has not yet cooked, without actually creating any work items. If you put down a new TOP and call that method you'll notice it returns nothing -- that's because the TOP node hasn't cooked, and therefore has no underlying PDG node yet. Triggering a cook with top_only on that graph will ensure that all TOP nodes have a PDG node.
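A minimal sketch of that workflow, assuming a TOP node at a hypothetical path (the exact keyword name for the TOPs-only cook may vary by Houdini version; check the hou.TopNode documentation for your build):

```python
import hou

# Hypothetical node path; adjust for your scene.
top = hou.node("/obj/topnet1/ropfetch1")

# A freshly placed, uncooked TOP node has no underlying PDG node:
print(top.getPDGNode())  # None before any cook

# Cook the TOP nodes only -- this syncs them to PDG without
# creating any work items:
top.cookWorkItems(tops_only=True)

print(top.getPDGNode())  # now returns a valid pdg.Node
```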
Edited by tpetrick - February 22, 2023 11:00:47
PDG/TOPs » Q: submit deadline job without needing Hython on the worker
The Python Script generates one work item for each input work item, and configures that work item to run the script code you specified on the node. The Python Processor node on the other hand allows you to actually define how work items are generated, e.g. you can generate 3 work items from each input, or a random number of work items for each input work item, or some custom logic based on parameters. It's useful if you want to e.g. fan out the number of work items you have, as opposed to the Python Script which is always just a one-to-one relationship with the input.
As for running with hython: the Python Script TOP has a Python Executable section that you can use to customize which Python executable is used to run the script, if the node is running work items out of process. By default it uses hython, but you can tell it to use plain Python or a custom path instead.
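As a sketch of that one-to-many behavior, a Python Processor's Generate callback might fan out three work items per input. Here `item_holder` and `upstream_items` are provided by the node's callback context, and the attribute name is purely illustrative:

```python
# Python Processor "Generate" code: decides how many work items to
# emit for the incoming upstream items. Each input fans out into
# three new work items here.
for upstream_item in upstream_items:
    for i in range(3):
        new_item = item_holder.addWorkItem(parent=upstream_item)
        new_item.setIntAttrib("copy", i)  # illustrative attribute
```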
PDG/TOPs » Simplest way to add a dependency in the Python TOP
Processor nodes have a Pre Cook callback which might do what you need, but it isn't exposed on the Python Processor TOP: https://www.sidefx.com/docs/houdini/tops/processors.html#onprecook-pdg-result
It's currently only available if you write a custom node, but we can easily promote it up onto the Python Processor if you think that would be useful. Please file an RFE for that.
I do have a follow up question to your use case though. What happens if you cancel the cook with half of the work items completed, and then cook the node again to trigger the remaining work items? It sounds like you'd probably NOT want to run the pre cook logic in that case, even though it's technically a new cook, otherwise it would delete the outputs produced by the work items that were already cooked. So what you'd actually need is a "pre cook, but only before any work items cook" operation.
It might be tempting to assume that's the same as running the pre cook as part of the first work item, but work items don't necessarily cook in order. The first logical work item in the node may in fact be the last one to start cooking, depending on when its input tasks actually finish. Or the first one might fail, and then be cooked a second time at a later point due to scheduler retry settings, etc.
Adding a dependency between the first item and the remaining items in the node would fix that, as you were trying to do, but it would mean that any change to the first work item's state would effectively dirty the whole node. And every other work item in the node would end up having to wait on the first work item, which could end up being the same thing as waiting for all inputs to cook, depending on the cook order of the input work items.
Generally speaking, PDG is designed to encourage work items that operate on local state only, i.e. the attributes and files associated with the work item. That way things in the same node can be run in parallel and in any order, once their dependencies are satisfied.
In other words, I'm not sure if there's a way to do exactly what you want, depending on what's expected in the case where a node is only partially cooked and resumed. I think adding pre/post hooks or a pre/post node is a valid RFE, but the exact behavior needs to be well defined.
Edited by tpetrick - January 25, 2023 10:59:35
PDG/TOPs » Simplest way to add a dependency in the Python TOP
In order to do that, you need the node to generate all of its work items in one go.
If you set "Generate When" to "All Upstream Items are Cooked", then your node will generate all its work items with a single onGenerate call, once all upstream work items are cooked. It will then invoke onAddInternalDependencies once as well, on the full list of generated work items, which will allow you to add the dependencies you need.
If your node only needs the inputs to be generated in order to create its work items, then you can use "All Upstream Items are Generated" instead. But you have to use one of those options to force the node to do its generation/internal dependencies for the whole node at once.
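For a custom node, that callback might look like the following sketch, which simply chains the generated items so each waits on the previous one. The logic is illustrative only; consult the PDG custom-node documentation for the exact holder API in your Houdini version:

```python
import pdg

def onAddInternalDependencies(self, dependency_holder, internal_items, is_static):
    # With "Generate When" set to "All Upstream Items are Cooked",
    # internal_items holds every work item the node generated, so
    # dependencies can be added across the full list in one pass.
    for prev_item, next_item in zip(internal_items, internal_items[1:]):
        dependency_holder.addDependency(next_item, prev_item)
    return pdg.result.Success
```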
Edited by tpetrick - January 24, 2023 17:37:54
PDG/TOPs » Simplest way to add a dependency in the Python TOP
Node callbacks are only permitted to access the local variables passed into them. In this case, the callback is only allowed to add dependencies between the work items from internal_items list.
This is especially important if your node is dynamic. A dynamic node generates work items each time an input work item cooks, and it might be doing that in parallel. Therefore, the list of work items on the node itself may be incomplete at any point in time.
It's important to note that onAddDependencies is called each time the node generates work items, not once per node. For example, if your node is dynamic and has 10 input items, onGenerate will be called 10 times (once for each cooked input work item). onAddDependencies will also be called 10 times, once for each of the lists of work items produced by the corresponding onGenerate call.
You will need to change your Generate When parameter to "All Upstream Items are Generated" or "All Upstream Items are Cooked" if you need to access all work items at the same time.
PDG/TOPs » Mantra is failing in PDG network
That issue is occurring because the script PDG uses to cook the Mantra ROP is trying to set parameters on the TOP node, but your TOP network is inside of a locked asset and therefore the TOP node cannot be edited.
However, that issue was fixed in H19.0.657 -- from your output log it looks like you're using 19.0.455.
PDG/TOPs » 19.5 longer wait time than 19.0 PDG cook time
Are you able to attach your scene file/hda? For me, a simple box .hda actually cooks faster in a clean 19.5 install vs 19.0 due to faster hython start-up time -- the same HDA takes ~3.1s to cook in 19.0 and ~1.6s in 19.5.
Note that work items in the HDA Processor node run out of process by default -- this means each work item starts its own hython session that loads the .hda, cooks it, and writes its outputs to disk. Anything that impacts Houdini's startup time, e.g. loading packages, will also impact work item cook times. You can use Services to create a pool of pre-started worker processes to avoid that, which is especially useful for lightweight HDAs: https://www.sidefx.com/docs/houdini/tops/services.html
Edited by tpetrick - December 9, 2022 13:53:32
PDG/TOPs » "While Loop", can we make TOP listen and act
That code should still work, but you won't be able to trigger a cook of a TOP network from within a Python Processor that's evaluating in the same network. You'll need to run it from e.g. a button callback, shelf tool or the Python shell.
What issues/errors are you running into?
PDG/TOPs » Command line run tops wont work with mantra or ropfetch
Your .hip file was last saved in H19.0, but you're loading it in an H19.5 version of Hython. It looks like there are some slight parameter differences on the Mantra node that are causing load warnings. By default, hou.hipFile.load(..) treats warnings as an error, but you can fix that by passing ignore_load_warnings=True into the load function, as described here: https://www.sidefx.com/docs/houdini/hom/hou/hipFile.html#load
If you load your file in H19.5 and resave it that'll also clear up the warnings on future loads.
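In a Hython script that looks something like this sketch (the file path is a placeholder for your own scene):

```python
import hou

# Tolerate version-difference warnings instead of raising
# hou.LoadWarning when opening a .hip saved in an older build:
hou.hipFile.load("myscene.hip", ignore_load_warnings=True)
```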
Edited by tpetrick - November 15, 2022 12:26:53
PDG/TOPs » Outputting progress
If this is a Python Script TOP, you can use work_item.frame to access the frame value associated with the work item, which will be set to the same frame as the parent work item that's doing the render.
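For example, a sketch of the script body (`work_item` is provided by the Python Script TOP's execution context):

```python
# Inside a Python Script TOP: report which frame this work item
# corresponds to. The frame is inherited from the parent item
# that performs the render.
print("Cooking frame {} for work item {}".format(
    work_item.frame, work_item.name))
```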
PDG/TOPs » Partition by Combination TOP
You should be able to use a Partition by Node, with the partitioning mode set to Input Work Item Combination. The Partition by Combination TOP treats all inputs as a flat list of work items; the Partition by Node, however, will group work items by the input node they originally came from.
PDG/TOPs » Unknown PDG scheduler type: tractorscheduler ??
This likely means that the Tractor Python library isn't loaded properly. You can verify that by opening a Python Shell in your Houdini session, and attempting to import one of the components of the Python module. For example, verify that import tractor.api.author succeeds.
Note that if you're using a Python 3 build of Houdini, you'll also need to use the Python 3 version of the Tractor Python API or it won't be possible for Houdini to import it.
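A small helper along these lines can be run in Houdini's Python Shell to check the import (a sketch, not part of the Tractor API itself):

```python
def tractor_api_available():
    """Return True if the Tractor Python API can be imported."""
    try:
        import tractor.api.author  # noqa: F401
        return True
    except ImportError:
        return False

print(tractor_api_available())
```

If this prints False in a Python 3 build of Houdini, make sure the Python 3 version of the Tractor API is on the interpreter's search path.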
Edited by tpetrick - October 17, 2022 12:37:14
PDG/TOPs » view code generated by the generic generator
If you want to see the command that's generated for a particular work item, you can ctrl + middle-mouse click on that work item in the TOPs UI. The command line string for the work item will be listed directly in the attribute panel in H19.0 and earlier, and under the Cook Info section in H19.5. That applies to all nodes, not just the generic generator.
3rd Party » Reshift slower when fetching from pdg
A few things to keep in mind:
- PDG work items cook out of process -- a ROP Fetch task will run a Hython process that loads the scene file and cooks the target ROP node. If you're not caching out your geometry to disk, then the render will have to trigger a cook of the geometry before rendering. In other words, ideally your scene file should be set up in such a way that the render tasks can just load the cached geometry from disk instead of having to recreate it.
- Check your scheduler settings to make sure you're not limiting the number of threads that jobs can use.
- Compare the performance with rendering your ROP using hbatch. This is a more accurate comparison than cooking in a live session, since the scene is loaded and cooked from scratch. For example, something like:
hbatch myscene.hip
>>> render /path/to/redshift_rop
- The ROP Fetch TOP has an option to enable Performance Monitor logging for the work items. Comparing the perf mon output between a PDG task and a regular render in a graphical session may help to determine where the extra time is being spent.
If those suggestions don't help you track down the issue, then please log a bug with a .hip file that reproduces the issue, the logs for your work item(s), and the specific version of Houdini and Redshift you're using.
PDG/TOPs » HDA Processor not pressing button parameter
I did a quick test in H19.5.368 with an asset that has a button that just prints a message, and it seems to be working as expected. A few things to note:
- The print statement will appear in the work item's log, e.g. the log that you can view by Ctrl+Middle Mouse clicking on a work item dot in the UI. It won't appear in the shell that started Houdini since the HDA Processor work item runs in its own process, and the log from that process is captured and displayed on the work item dot itself.
- If the work item cooks from cache files on disk, the print won't appear since the work item doesn't actually cook/do anything.
If neither of those is the issue, then please attach a standalone .hip file + .hda file that reproduces the problem. Also, let us know which version of Houdini you're using.
PDG/TOPs » why topnet cook time increase as number of complex node?
tamte
additionally to using services consider switching to H19.5
H19.5 has delayed syncing of HDA nodes, so upon loading it will not even try to load their content until the node needs to be cooked so it should be pretty fast to open scenes with tons of complex nodes that are not in a cook path
This is an excellent point. I was actually testing with 19.5 for the numbers I posted -- in 19.0 the .hip file takes almost 30 seconds to load for me.
PDG/TOPs » why topnet cook time increase as number of complex node?
Work items in the ROP Geometry TOP cook out of process -- that process loads the .hip file and cooks the ROP. That means the loading time of the .hip file affects the total cook time of the work item.
The output log for the work item -- which can be found by middle mouse clicking on it -- has a detailed break down of what the work item was doing, at what time.
For example, with no extra nodes in the scene it took ~0.4 seconds:
[10:05:27.352] Loading .hip file '/home/taylor/temp/test.hip'...
[10:05:27.769] .hip file done loading
And with all of the Erode nodes loading the .hip took ~0.8 seconds, which accounts for the difference in cook time as well:
[10:04:34.827] Loading .hip file '/home/taylor/temp/test.hip'...
[10:04:35.628] .hip file done loading
You can mitigate this by turning on batching on the ROP Geometry, which will cook multiple work items in the same process. Or use services to create long-running processes that load the .hip once and evaluate multiple tasks on the service process over the course of the graph cook: https://www.sidefx.com/docs/houdini/tops/services.html
Edited by tpetrick - August 18, 2022 10:12:23