You should be able to use a Partition by Node TOP with the partitioning mode set to Input Work Item Combination. The Partition by Combination TOP treats all of its inputs as a flat list of work items, whereas the Partition by Node TOP groups work items by the input node they originally came from.
PDG/TOPs » Partition by Combination TOP
- tpetrick
PDG/TOPs » Unknown PDG scheduler type: tractorscheduler ??
- tpetrick
This likely means that the Tractor Python library isn't loaded properly. You can verify that by opening a Python Shell in your Houdini session, and attempting to import one of the components of the Python module. For example, verify that import tractor.api.author succeeds.
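As a quick sanity check in a Houdini Python Shell, something like this should succeed and print where the module was loaded from:

import tractor.api.author
print(tractor.api.author.__file__)

If that raises an ImportError, the library isn't visible to Houdini's Python interpreter -- for example it isn't on PYTHONPATH, or it's built for the wrong Python version.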
Note that if you're using a Python 3 build of Houdini, you'll also need to use the Python 3 version of the Tractor Python API or it won't be possible for Houdini to import it.
Edited by tpetrick - Oct. 17, 2022 12:37:14
PDG/TOPs » view code generated by the generic generator
- tpetrick
If you want to see the command that's generated for a particular work item, you can ctrl + middle-mouse click on that work item in the TOPs UI. The command line string for the work item will be listed directly in the attribute panel in H19.0 and earlier, and under the Cook Info section in H19.5. That applies to all nodes, not just the generic generator.
3rd Party » Redshift slower when fetching from pdg
- tpetrick
A few things to keep in mind:
- PDG work items cook out of process -- a ROP Fetch task will run a Hython process that loads the scene file and cooks the target ROP node. If you're not caching out your geometry to disk, then the render will have to trigger a cook of the geometry before rendering. In other words, ideally your scene file should be set up in such a way that the render tasks can just load the cached geometry from disk instead of having to recreate it.
- Check your scheduler settings to make sure you're not limiting the number of threads that jobs can use.
- Compare the performance with rendering your ROP using hbatch. This is a more accurate comparison than cooking in a live session, since the scene is loaded and cooked from scratch. For example, something like:
hbatch myscene.hip
>>> render /path/to/redshift_rop
- The ROP Fetch TOP has an option to enable Performance Monitor logging for the work items. Comparing the perf mon output between a PDG task and a regular render in a graphical session may help to determine where the extra time is being spent.
If those suggestions don't help you track down the issue, then please log a bug with a .hip file that reproduces the issue, the logs for your work item(s), and the specific version of Houdini and Redshift you're using.
PDG/TOPs » HDA Processor not pressing button parameter
- tpetrick
I did a quick test in H19.5.368 with an asset that has a button that just prints a message, and it seems to be working as expected. A few things to note:
- The print statement will appear in the work item's log, e.g. the log that you can view by Ctrl+Middle Mouse clicking on a work item dot in the UI. It won't appear in the shell that started Houdini since the HDA Processor work item runs in its own process, and the log from that process is captured and displayed on the work item dot itself.
- If the work item cooks from cache files on disk, the print won't appear since the work item doesn't actually cook/do anything.
If neither of those is the issue, then please attach a standalone .hip file + .hda file that reproduces the problem. Also, let us know which version of Houdini you're using.
PDG/TOPs » why topnet cook time increase as number of complex node?
- tpetrick
tamte
In addition to using services, consider switching to H19.5.
H19.5 has delayed syncing of HDA nodes, so on load it won't even try to load their contents until a node needs to be cooked. It should therefore be pretty fast to open scenes with tons of complex nodes that are not in the cook path.
This is an excellent point. I was actually testing with 19.5 for the numbers I posted -- in 19.0 the .hip file takes almost 30 seconds to load for me.
PDG/TOPs » why topnet cook time increase as number of complex node?
- tpetrick
Work items in the ROP Geometry TOP cook out of process -- that process loads the .hip file and cooks the ROP. That means the loading time of the .hip file affects the total cook time of the work item.
The output log for the work item -- which can be found by middle mouse clicking on it -- has a detailed breakdown of what the work item was doing and when.
For example, with no extra nodes in the scene it took ~0.4 seconds:
[10:05:27.352] Loading .hip file '/home/taylor/temp/test.hip'...
[10:05:27.769] .hip file done loading
And with all of the Erode nodes in the scene, loading the .hip took ~0.8 seconds, which accounts for the difference in cook time as well:
[10:04:34.827] Loading .hip file '/home/taylor/temp/test.hip'...
[10:04:35.628] .hip file done loading
You can mitigate this by turning on batching on the ROP Geometry, which will cook multiple work items in the same process. Or use services to create long-running processes that load the .hip once, and evaluate multiple tasks on the service process over the course of the graph cook: https://www.sidefx.com/docs/houdini/tops/services.html [www.sidefx.com]
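If you'd rather flip batching on from a script than from the parameter interface, a rough sketch is below. The node path is hypothetical and the parameter name is an assumption based on the ROP Fetch interface, so verify it against your build:

import hou

rop_geo = hou.node('/obj/topnet1/ropgeometry1')  # hypothetical path to the ROP Geometry TOP
rop_geo.parm('batchall').set(1)  # assumed parm name for 'All Frames in One Batch'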
Edited by tpetrick - Aug. 18, 2022 10:12:23
PDG/TOPs » why topnet cook time increase as number of complex node?
- tpetrick
Can you please attach an example .hip file that demonstrates the issue you're seeing? What nodes are actually inside of your TOP net?
PDG/TOPs » Controlling the names of groups/pieces returned from TopGeom
- tpetrick
The group names/piece attribute values are named after the work item that the geometry came from. The number is part of that name -- it's the work item's unique ID.
In current builds there isn't a way to configure that, but I've added an extra field to the node that can be used to set a custom name. Those changes are available in tomorrow's daily builds of H19.5 and H19.0. You can use work item @attribs in the custom name like any other expression on a TOP node. For example, the current default for the group name field is `@pdg_name`. This results in the same behavior as before, but it's now expressed via the parameter instead of hardcoded.
PDG/TOPs » Houdini 19.5 and Deadline Scheduler
- tpetrick
This was a bug on our end. I checked in a fix for it yesterday evening, so it should be fixed in today's daily builds.
PDG/TOPs » PDG Work Items generating locally, not remotely
- tpetrick
Those logs are printed to the standard output of the shell that launched the Houdini process, not to a file on disk. The debug variables also need to be set in the environment when the Houdini process starts up.
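For example, when launching a headless cook from a Linux/macOS shell, the variable goes in front of the command so it's set before the process starts (the script name here is just a placeholder):

HOUDINI_PDG_NODE_DEBUG=2 hython my_cook_script.py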
Edited by tpetrick - Aug. 9, 2022 14:34:00
PDG/TOPs » PDG Work Items generating locally, not remotely
- tpetrick
Yep, it's documented in the list of env vars: https://www.sidefx.com/docs/houdini/ref/env#houdini_pdg_node_debug [www.sidefx.com]
There are a number of PDG-specific debug switches that can be enabled -- they should all be described on that page as well.
PDG/TOPs » PDG Work Items generating locally, not remotely
- tpetrick
If you're using the same graph and work items aren't being generated, it sounds like one of the nodes in your graph has errors. If you're cooking via a script in a headless session, the easiest way to check for that is to set HOUDINI_PDG_NODE_DEBUG=2 in your process environment, which will enable printouts for any node errors/warnings, as well as node cook status messages. For example, in a simple graph I created with an expression error in the first node:
[13:07:12] PDG: STATUS NODE ERROR (genericgenerator1)
Unable to evaluate expression (Expression stack error (/obj/topnet1/genericgenerator1/itemcount)).
[13:07:12] PDG: STATUS NODE GENERATED (genericgenerator1)
[13:07:12] PDG: STATUS NODE COOKED (genericgenerator1)
Without seeing your .hip file, it's hard to provide any additional suggestions.
Edited by tpetrick - Aug. 9, 2022 13:10:53
PDG/TOPs » hython TOPs progress reporting . logs etc
- tpetrick
It certainly could -- that's just not how it works currently.
I've logged an RFE to provide a work item API method that queries the full log data via the work item's scheduler. It'll likely have to be exposed as a new method when it gets added, since the pdg.WorkItem.logMessages property is directly bound to the in-process log buffer.
PDG/TOPs » hython TOPs progress reporting . logs etc
- tpetrick
pdg.WorkItem.logMessages is just a string buffer that in-process work items can use to write log messages, since an in-process work item doesn't have an external file to write to (and we don't want to create one).
For work items that cook out of process, the log is managed by the farm system. In order to make it show up in pdg.WorkItem.logMessages PDG would need to download the log data from the farm system for each work item as it finishes. This could end up being expensive and result in a lot of extra RPC/network calls to the farm system for something that may not even be used. In the UI the work item MMB panel downloads the logs as necessary when you click on work items -- only log data that's actually requested by the user is fetched.
For example, on Deadline the logs are archived and we actually have to submit a new Deadline task to query the log data for a completed job.
PDG/TOPs » hython TOPs progress reporting . logs etc
- tpetrick
pdg.WorkItem.logMessages only exists in H19.0 and newer, and will only have log data for work items that cooked in-process. The log files for any work items that run out of process are always managed by the scheduler, and will be a file on disk or a URL depending on which scheduler is being used.
Edited by tpetrick - July 14, 2022 18:20:09
PDG/TOPs » hython TOPs progress reporting . logs etc
- tpetrick
For a work item that cooks in-process, you can access its internal log buffer using the work_item.logMessages property (https://www.sidefx.com/docs/houdini/tops/pdg/WorkItem.html#logMessages).
For work items that cook out of process, the Scheduler provides an API method to query the URI of the log. The local scheduler stores the log files in the PDG_TEMP dir on disk -- farm schedulers store them somewhere on the farm itself, depending on which scheduler you're using: https://www.sidefx.com/docs/houdini/tops/pdg/Scheduler#getLogURI [www.sidefx.com]
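As a rough sketch of the in-process case -- the node path is hypothetical, so point it at whichever TOP node you're interested in:

import hou

pdg_node = hou.node('/obj/topnet1/pythonscript1').getPDGNode()  # hypothetical path
for work_item in pdg_node.workItems:
    print(work_item.name)
    print(work_item.logMessages)

For out-of-process work items you'd query the scheduler's getLogURI method instead, as described above.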
PDG/TOPs » multi GPU OPENGL ROP
- tpetrick
The Deadline parameters for setting GPU affinity only apply to OpenCL jobs, and to rendering ROPs that expose a way to configure which GPU device to use (such as the Redshift ROP). The final GPU selection is exported to the $HOUDINI_OCL_DEVICENUMBER variable when the job is assigned to a machine. That parm won't have any effect on OpenGL jobs since the ROP itself doesn't have a way to configure which GPU to use.
PDG/TOPs » Create intermediate directory & Overwrite existing .usd file
- tpetrick
Alright, the USD Render issue has been fixed, and the fix will be in the next daily build of H19.0. There's a new toggle (which defaults to on) that ensures that intermediate directories are created for the output image path.
PDG/TOPs » Create intermediate directory & Overwrite existing .usd file
- tpetrick
ikoon
ROP USD Output TOP node - does NOT overwrite an existing .usd file (after I "Dirty and Cook This Node")
This is because the file already exists on disk. PDG processor nodes typically have an option to enable caching of output files -- by default, if the file exists the work item cooks from cache on the next cook. If you Ctrl+MMB on a work item dot, you'll see the status is set to "Cooked from Cache" instead of "Cooked". The different cache mode options are documented here: https://www.sidefx.com/docs/houdini/nodes/top/ropfetch.html#pdg_cachemode [www.sidefx.com]
You can change the cache mode parameter to Write Files so that the node will always write outputs, and never cook from cache. Normally, cache files are invalidated if the relevant parts of the scene are changed, if they're manually deleted on disk, or if an upstream dependency invalidates its cache files.
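If you want to set that from a script instead of the parameter interface, a minimal sketch is below. The node path is hypothetical and the menu index for Write Files is an assumption -- check the parameter's menu in your build:

import hou

rop_usd = hou.node('/obj/topnet1/ropusdoutput1')  # hypothetical path to the ROP USD Output TOP
rop_usd.parm('pdg_cachemode').set(2)  # assumed menu order: 0 = Automatic, 1 = Read Files, 2 = Write Files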
ikoon
USD Render TOP node - does NOT create missing intermediate directories (after I "Dirty and Cook This Node")
This is likely just an oversight. We can expose that as an option on the node.