Processor nodes have a Pre Cook callback which might do what you need, but it isn't exposed on the Python Processor TOP: https://www.sidefx.com/docs/houdini/tops/processors.html#onprecook-pdg-result. It's currently only available if you write a custom node, but we can easily promote it up onto the Python Processor if you think that would be useful. Please file an RFE for that.
I do have a follow up question to your use case though. What happens if you cancel the cook with half of the work items completed, and then cook the node again to trigger the remaining work items? It sounds like you'd probably NOT want to run the pre cook logic in that case, even though it's technically a new cook, otherwise it would delete the outputs produced by the work items that were already cooked. So what you'd actually need is a "pre cook, but only before any work items cook" operation.
It might be tempting to assume that's the same as running the pre cook as part of the first work item, but work items don't necessarily cook in order. The first logical work item in the node may in fact be the last one to start cooking, depending on when its input tasks actually finish. Or the first one might fail, and then be cooked a second time at a later point due to scheduler retry settings, etc.
Adding a dependency between the first item and the remaining items in the node would fix that, as you were trying to do, but it would mean that any change to the first work item's state would effectively dirty the whole node. And every other work item in the node would end up having to wait on the first work item, which could end up being the same thing as waiting for all inputs to cook depending on the cook order of the input work items.
Generally speaking, PDG is designed to encourage work items that operate on local state only, i.e. the attributes and files associated with the work item. That way things in the same node can be run in parallel and in any order, once their dependencies are satisfied.
In other words, I'm not sure if there's a way to do exactly what you want, depending on what's expected in the case where a node is only partially cooked and resumed. I think adding in pre/post hooks or a pre/post node is a valid RFE, but the exact behavior needs to be well defined.
PDG/TOPs » Simplest way to add a dependency in the Python TOP
-
- tpetrick
- 548 posts
- Offline
In order to do that, you need the node to generate all of its work items in one go.
If you set "Generate When" to "All Upstream Items are Cooked", then your node will generate all its work items with a single onGenerate call, once all upstream work items are cooked. It will then invoke onAddInternalDependencies once as well, on the full list of generated work items, which will allow you to add the dependencies you need.
If your node only needs the inputs to be generated in order to create its work items, then you can use "All Upstream Items are Generated" instead. But you have to use one of those options to force the node to do its generation/internal dependencies for the whole node at once.
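To make that concrete, here is a hedged sketch of what a single dependency pass might look like. The helper and function names are illustrative, and the exact callback signature (a dependency holder plus the list of generated work items) should be verified against the Python Processor documentation for your Houdini build:

```python
def chain_pairs(items):
    """Return (downstream, upstream) pairs that chain items in list order."""
    return list(zip(items[1:], items[:-1]))

def add_internal_dependencies(dependency_holder, internal_items):
    # With "Generate When" set to one of the "All Upstream Items ..." modes,
    # internal_items contains every work item the node generated, so one pass
    # can chain them end to end. addDependency(item, dependency) makes the
    # first argument wait on the second.
    for downstream, upstream in chain_pairs(list(internal_items)):
        dependency_holder.addDependency(downstream, upstream)
```

With the node generating everything at once, each work item then waits only on its predecessor rather than on the whole upstream node.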
Edited by tpetrick - Jan. 24, 2023 17:37:54
PDG/TOPs » Simplest way to add a dependency in the Python TOP
Node callbacks are only permitted to access the local variables passed into them. In this case, the callback is only allowed to add dependencies between the work items from the internal_items list.
This is especially important if your node is dynamic. A dynamic node generates work items each time an input work item cooks, and it might be doing that in parallel. Therefore, the list of work items on the node itself may be incomplete at any point in time.
It's important to note that onAddDependencies is called each time the node generates work items, not once per node. For example, if your node is dynamic and has 10 input items, onGenerate will be called 10 times (once for each cooked input work item). onAddDependencies will also be called 10 times, once for each of the lists of work items produced by the corresponding onGenerate call.
You will need to change your Generate When parameter to "All Upstream Items are Generated" or "All Upstream Items are Cooked" if you need to access all work items at the same time.
PDG/TOPs » Mantra is failing in PDG network
That issue is occurring because the script PDG uses to cook the Mantra ROP is trying to set parameters on the TOP node, but your TOP network is inside of a locked asset and therefore the TOP node cannot be edited.
However, that issue was fixed in H19.0.657 and newer builds -- from your output log it looks like you're using 19.0.455.
PDG/TOPs » 19.5 longer wait time than 19.0 PDG cook time
Are you able to attach your scene file/hda? For me, a simple box .hda actually cooks faster in a clean 19.5 install vs 19.0 due to faster hython start-up time -- the same HDA takes ~3.1s to cook in 19.0 and ~1.6s in 19.5.
Note that work items in the HDA Processor node run out of process by default -- this means each work item starts its own hython session that loads the .hda, cooks it, and writes its outputs to disk. Anything that impacts Houdini's startup time, e.g. loading packages, will also impact work item cook times. You can use Services to create a pool of pre-started worker processes to avoid that, which is especially useful for lightweight HDAs.
Edited by tpetrick - Dec. 9, 2022 13:53:32
PDG/TOPs » "While Loop", can we make TOP listen and act
That code should still work, but you won't be able to trigger a cook of a TOP network from within a Python Processor that's evaluating in the same network. You'll need to run it from e.g. a button callback, shelf tool or the Python shell.
What issues/errors are you running into?
PDG/TOPs » Command line run tops wont work with mantra or ropfetch
Your .hip file was last saved in H19.0, but you're loading it in an H19.5 version of Hython. It looks like there are some slight parameter differences on the Mantra node that are causing load warnings. By default, hou.hipFile.load(..) treats warnings as an error, but you can fix that by passing ignore_load_warnings=True into the load function as described here: https://www.sidefx.com/docs/houdini/hom/hou/hipFile.html#load
If you load your file in H19.5 and resave it that'll also clear up the warnings on future loads.
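As a minimal sketch of the fix (the wrapper name is made up for illustration; the ignore_load_warnings parameter comes from the hou.hipFile.load documentation linked above):

```python
def load_hip_tolerant(path):
    """Load a .hip file in hython, tolerating version-difference warnings."""
    import hou  # only available inside a Houdini/hython environment
    # Without ignore_load_warnings=True, load warnings (e.g. parameter
    # differences from a scene saved in an older version) are raised as errors.
    hou.hipFile.load(path, ignore_load_warnings=True)
```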
Edited by tpetrick - Nov. 15, 2022 12:26:53
PDG/TOPs » Outputting progress
If this is a Python Script TOP, you can use work_item.frame to access the frame value associated with the work item, which will be set to the same frame as the parent work item that's doing the render.
PDG/TOPs » Partition by Combination TOP
You should be able to use a Partition by Node, with the partitioning mode set to Input Work Item Combination. The Partition by Combination TOP treats all inputs as a flat list of work items, however the Partition by Node will group work items by the input node they originally came from.
PDG/TOPs » Unknown PDG scheduler type: tractorscheduler ??
This likely means that the Tractor Python library isn't loaded properly. You can verify that by opening a Python Shell in your Houdini session, and attempting to import one of the components of the Python module. For example, verify that import tractor.api.author succeeds.
Note that if you're using a Python 3 build of Houdini, you'll also need to use the Python 3 version of the Tractor Python API or it won't be possible for Houdini to import it.
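A quick check along these lines (the helper name is illustrative) can be pasted into the Houdini Python Shell:

```python
def tractor_available():
    """Return True if the Tractor Python API can be imported."""
    try:
        import tractor.api.author  # requires Tractor's library on sys.path
        return True
    except ImportError:
        return False

print("Tractor API importable:", tractor_available())
```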
Edited by tpetrick - Oct. 17, 2022 12:37:14
PDG/TOPs » view code generated by the generic generator
If you want to see the command that's generated for a particular work item, you can ctrl + middle-mouse click on that work item in the TOPs UI. The command line string for the work item will be listed directly in the attribute panel in H19.0 and earlier, and under the Cook Info section in H19.5. That applies to all nodes, not just the generic generator.
3rd Party » Redshift slower when fetching from pdg
A few things to keep in mind:
- PDG work items cook out of process -- a ROP Fetch task will run a Hython process that loads the scene file and cooks the target ROP node. If you're not caching out your geometry to disk, then the render will have to trigger a cook of the geometry before rendering. In other words, ideally your scene file should be set up in such a way that the render tasks can just load the cached geometry from disk instead of having to recreate it.
- Check your scheduler settings to make sure you're not limiting the number of threads that jobs can use.
- Compare the performance with rendering your ROP using hbatch. This is a more accurate comparison than cooking in a live session, since the scene is loaded and cooked from scratch. For example, something like:
hbatch myscene.hip
>>> render /path/to/redshift_rop
- The ROP Fetch TOP has an option to enable Performance Monitor logging for the work items. Comparing the perf mon output between a PDG task and a regular render in a graphical session may help determine where the extra time is being spent.
If those suggestions don't help you track down the issue, then please log a bug with a .hip file that reproduces the issue, the logs for your work item(s), and the specific version of Houdini and Redshift you're using.
PDG/TOPs » HDA Processor not pressing button parameter
I did a quick test in H19.5.368 with an asset that has a button that just prints a message, and it seems to be working as expected. A few things to note:
- The print statement will appear in the work item's log, e.g. the log that you can view by Ctrl+Middle Mouse clicking on a work item dot in the UI. It won't appear in the shell that started Houdini since the HDA Processor work item runs in its own process, and the log from that process is captured and displayed on the work item dot itself.
- If the work item cooks from cache files on disk, the print won't appear since the work item doesn't actually cook/do anything.
If neither of those is the issue, then please attach a standalone .hip file + .hda file that reproduces the problem. Also, let us know which version of Houdini you're using.
PDG/TOPs » why topnet cook time increase as number of complex node?
tamte
additionally to using services consider switching to H19.5
H19.5 has delayed syncing of HDA nodes, so upon loading it will not even try to load their content until the node needs to be cooked. It should therefore be pretty fast to open scenes with tons of complex nodes that are not in a cook path.
This is an excellent point. I was actually testing with 19.5 for the numbers I posted -- in 19.0 the .hip file takes almost 30 seconds to load for me.
PDG/TOPs » why topnet cook time increase as number of complex node?
Work items in the ROP Geometry TOP cook out of process -- that process loads the .hip file and cooks the ROP. That means the loading time of the .hip file affects the total cook time of the work item.
The output log for the work item -- which can be found by middle mouse clicking on it -- has a detailed breakdown of what the work item was doing, and when.
For example, with no extra nodes in the scene it took ~0.4 seconds:
[10:05:27.352] Loading .hip file '/home/taylor/temp/test.hip'...
[10:05:27.769] .hip file done loading
And with all of the Erode nodes loading the .hip took ~0.8 seconds, which accounts for the difference in cook time as well:
[10:04:34.827] Loading .hip file '/home/taylor/temp/test.hip'...
[10:04:35.628] .hip file done loading
You can mitigate this by turning on batching on the ROP Geometry, which will cook multiple work items in the same process. Or use services to create long-running processes that load the .hip once, and evaluate multiple tasks on the service process over the course of the graph cook: https://www.sidefx.com/docs/houdini/tops/services.html
Edited by tpetrick - Aug. 18, 2022 10:12:23
PDG/TOPs » why topnet cook time increase as number of complex node?
Can you please attach an example .hip file that demonstrates the issue you're seeing? What nodes are actually inside of your TOP net?
PDG/TOPs » Controlling the names of groups/pieces returned from TopGeom
The group names/piece attribute values are derived from the work item that the geometry came from. The number is part of the name -- it's the work item's unique ID.
In current builds there isn't a way to configure that, but I've added in an extra field to the node that can be used to set a custom name. Those changes are available in tomorrow's daily builds of H19.5 and H19.0. You can use work item @attribs in the custom name like any other expression on a TOP node. For example, the current default for the group name field is `@pdg_name`. This results in the same behavior as before, but it's now expressed via the parameter instead of hardcoded.
PDG/TOPs » Houdini 19.5 and Deadline Scheduler
This was a bug on our end. I checked in a fix for it yesterday evening, so it should be fixed in today's daily builds.
PDG/TOPs » PDG Work Items generating locally, not remotely
Those logs are printed to the standard output of the shell that launched the Houdini process, not to a file on disk. They also need to be set in the environment when the Houdini process starts up.
Edited by tpetrick - Aug. 9, 2022 14:34:00
PDG/TOPs » PDG Work Items generating locally, not remotely
Yep, it's documented in the list of env vars: https://www.sidefx.com/docs/houdini/ref/env#houdini_pdg_node_debug
There are a number of PDG-specific debug switches that can be enabled -- they should all be described on that page as well.
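For example, in a bash-style shell (the variable name is from the linked docs page; the level value shown here is an assumption -- see that page for the supported values):

```shell
# Enable PDG node debug output for any Houdini process started from this shell.
export HOUDINI_PDG_NODE_DEBUG=3
# Launch Houdini (or hython) from this same shell so it inherits the variable;
# the debug messages go to this shell's standard output.
# houdini myscene.hip
```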