Yep, it's documented in the list of env vars: https://www.sidefx.com/docs/houdini/ref/env#houdini_pdg_node_debug [www.sidefx.com]
There are a number of PDG-specific debug switches that can be enabled -- they should all be described on that page as well.
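For example (not from the original post), one minimal way to enable it for a headless hython session is to set the variable in the child process environment before launching; the script name here is hypothetical:
import os
import subprocess

# Copy the current environment and turn on PDG node debug output.
env = dict(os.environ, HOUDINI_PDG_NODE_DEBUG="2")

# "cook_tops.py" is a placeholder for whatever script cooks your TOP network.
subprocess.run(["hython", "cook_tops.py"], env=env)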
PDG/TOPs » PDG Work Items generating locally, not remotely
- tpetrick
- 578 posts
- Offline
If you're using the same graph and work items aren't being generated, it sounds like one of the nodes in your graph has errors. If you're cooking via a script in a headless session, the easiest way to check for that is to set HOUDINI_PDG_NODE_DEBUG=2 in your process environment, which enables printouts for any node errors/warnings, as well as node cook status messages. For example, here's the output from a simple graph I created with an expression error in the first node:
[13:07:12] PDG: STATUS NODE ERROR (genericgenerator1)
Unable to evaluate expression (Expression stack error (/obj/topnet1/genericgenerator1/itemcount)).
[13:07:12] PDG: STATUS NODE GENERATED (genericgenerator1)
[13:07:12] PDG: STATUS NODE COOKED (genericgenerator1)
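As a rough sketch (the file and node paths are hypothetical, and this assumes HOUDINI_PDG_NODE_DEBUG=2 is already set in the environment), a headless cook script that produces output like the above could look like:
import hou

hou.hipFile.load("/path/to/scene.hip")       # hypothetical .hip file
top_node = hou.node("/obj/topnet1/output0")  # hypothetical TOP node to cook
top_node.cookWorkItems(block=True)           # status/error messages print as the graph cooks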
Without seeing your .hip file, it's hard to provide any additional suggestions.
Edited by tpetrick - Aug. 9, 2022 13:10:53
PDG/TOPs » hython TOPs progress reporting . logs etc
- tpetrick
- 578 posts
- Offline
It certainly could -- that's just not how it works currently.
I've logged an RFE to provide a work item API method that queries the full log data via the work item's scheduler. It'll likely have to be exposed as a new method when it gets added, since the pdg.WorkItem.logMessages property is directly bound to the in-process log buffer.
PDG/TOPs » hython TOPs progress reporting . logs etc
- tpetrick
- 578 posts
- Offline
pdg.WorkItem.logMessages is just a string buffer that in-process work items can use to write log messages, since an in-process work item doesn't have an external file to write to (and we don't want to create one).
For work items that cook out of process, the log is managed by the farm system. In order to make it show up in pdg.WorkItem.logMessages PDG would need to download the log data from the farm system for each work item as it finishes. This could end up being expensive and result in a lot of extra RPC/network calls to the farm system for something that may not even be used. In the UI the work item MMB panel downloads the logs as necessary when you click on work items -- only log data that's actually requested by the user is fetched.
For example, on Deadline the logs are archived and we actually have to submit a new Deadline task to query the log data for a completed job.
PDG/TOPs » hython TOPs progress reporting . logs etc
- tpetrick
- 578 posts
- Offline
pdg.WorkItem.logMessages only exists in H19.0 and newer, and will only have log data for work items that cooked in-process. The log files for any work items that run out of process are always managed by the scheduler, and will be a file on disk or a URL depending on which scheduler is being used.
Edited by tpetrick - July 14, 2022 18:20:09
PDG/TOPs » hython TOPs progress reporting . logs etc
- tpetrick
- 578 posts
- Offline
For a work item that cooks in-process, you can access its internal log buffer using the work_item.logMessages property (https://www.sidefx.com/docs/houdini/tops/pdg/WorkItem.html#logMessages)
For work items that cook out of process, the Scheduler provides an API method to query the URI to the log. The local scheduler stores the log files in the PDG_TEMP dir on disk -- farm schedulers store it somewhere on the farm itself, depending on which scheduler you're using: https://www.sidefx.com/docs/houdini/tops/pdg/Scheduler#getLogURI [www.sidefx.com]
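Putting the two together, a rough sketch for reading logs from hython might look like the following; the node paths are hypothetical, and this assumes the scheduler TOP node's getPDGNode() returns the pdg.Scheduler instance:
import hou

pdg_node = hou.node("/obj/topnet1/ropfetch1").getPDGNode()        # hypothetical TOP node
scheduler = hou.node("/obj/topnet1/localscheduler").getPDGNode()  # hypothetical scheduler node

for item in pdg_node.workItems:
    if item.logMessages:
        # In-process work items keep their log in an internal string buffer.
        print(item.logMessages)
    else:
        # Out-of-process work items: ask the scheduler where the log lives.
        # Depending on the scheduler this is a file path or a URL.
        print(scheduler.getLogURI(item))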
PDG/TOPs » multi GPU OPENGL ROP
- tpetrick
- 578 posts
- Offline
The Deadline parameters for setting GPU affinity only apply to OpenCL jobs, and when rendering ROPs that expose a way to configure which GPU device to use (such as the Redshift ROP). The final GPU selection is exported to the $HOUDINI_OCL_DEVICENUMBER variable when the job is assigned to a machine. That parm won't have any effect on OpenGL jobs since the ROP itself doesn't have a way to configure which GPU to use.
PDG/TOPs » Create intermediate directory & Overwrite existing .usd file
- tpetrick
- 578 posts
- Offline
Alright, the USD Render issue has been fixed in the next daily build of H19.0. There's a new toggle (which defaults to on) that ensures that intermediate directories are created for the output image path.
PDG/TOPs » Create intermediate directory & Overwrite existing .usd file
- tpetrick
- 578 posts
- Offline
ikoon
ROP USD Output TOP node - does NOT overwrite an existing .usd file (after I "Dirty and Cook This Node")
This is because the file already exists on disk. PDG processor nodes typically have an option to enable caching of output files -- by default, if the file exists the work item cooks from cache on the next cook. If you Ctrl+MMB on a work item dot, you'll see the status is set to "Cooked from Cache" instead of "Cooked". The different cache mode options are documented here: https://www.sidefx.com/docs/houdini/nodes/top/ropfetch.html#pdg_cachemode [www.sidefx.com]
You can change the cache mode parameter to Write Files so that the node will always write outputs, and never cook from cache. Normally, cache files are invalidated if the relevant parts of the scene are changed, if they're manually deleted on disk, or if an upstream dependency invalidates its cache files.
ikoon
USD Render TOP node - does NOT create missing intermediate directories (after I "Dirty and Cook This Node")
This is likely just an oversight. We can expose that as an option on the node.
PDG/TOPs » some errors on the ROP mantra node
- tpetrick
- 578 posts
- Offline
Your scene is set up such that the File SOPs attempt to load `@pdg_output`. That variable resolves to the output of the active task -- if you want to render the input geometry file, you should instead set it to `@pdg_input`. Right now, when the Mantra work items render, the File SOP tries to load the output .exr file instead of the input geometry from the upstream work item.
Edited by tpetrick - May 31, 2022 12:36:42
PDG/TOPs » TOPS render node not writing to disk when in a locked HDA
- tpetrick
- 578 posts
- Offline
Please attach an example .hip file demonstrating the issue, preferably by logging a bug or question with support.
PDG/TOPs » HDA Processor, What is the ideal workflow for nested HDA's?
- tpetrick
- 578 posts
- Offline
There were some changes to the HDA Processor for a different RFE that may address your issue with the operator type. It's now possible to set it with an expression, since it's no longer bound to the .hda file path. Those changes will be live in tomorrow's daily build of H19.0.
For the other items, please log an RFE or Bug with an example .hip/.hda file that demonstrates the issues at hand.
Edited by tpetrick - May 20, 2022 13:28:05
PDG/TOPs » Filter by Expression/Range to delete wedge nodes at Random
- tpetrick
- 578 posts
- Offline
It's pretty hard to say what the issue is without actually seeing the .hip file.
If you want to delete exactly 200 work items, then that approach probably won't work. The filter expression is evaluated independently for each work item, which means there's no shared state and no way for you to ensure that an exact number are deleted. You're better off using a Python Processor TOP, which has access to the full list of input work items. Something like:
import random

# Pick 200 input indices at random; those work items are skipped,
# so exactly 200 are deleted.
choices = random.sample(range(0, len(upstream_items)), k=200)
for index, upstream_item in enumerate(upstream_items):
    if index not in choices:
        item_holder.addWorkItem(parent=upstream_item)
PDG/TOPs » Filter by Expression/Range to delete wedge nodes at Random
- tpetrick
- 578 posts
- Offline
You can't mix HScript expression functions/variables like @pdg_frame and Python in the same expression. If you're using a Python expression, you'll need to use pdg.workItem() to access the work item that the parameter is being evaluated against. For example, pdg.workItem().frame in a Python expression is the same as @pdg_frame in an HScript expression.
pdg.workItem() will return a pdg.WorkItem instance that you can use to access attributes and intrinsic data: https://www.sidefx.com/docs/houdini/tops/pdg/WorkItem.html [www.sidefx.com]
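For instance, a multi-line Python parameter expression reading the work item might look like this (the attribute name is just an example, and multi-line expressions need an explicit return):
import pdg

item = pdg.workItem()
# item.frame is the same value as @pdg_frame in an HScript expression;
# "wedgeindex" is a hypothetical attribute name.
return item.intAttribValue("wedgeindex")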
Edited by tpetrick - May 17, 2022 12:10:09
PDG/TOPs » Flipbook wedge work items as sequence
- tpetrick
- 578 posts
- Offline
The ROP fetch's batching feature only works over a range of frames -- each batch will render the ROP by calling the ROP's render method over the full range specified on the node, instead of just for a single frame. If you're only rendering frame 1 for each work item, it's currently not possible to batch the work.
You could instead use Services though, which are a more general feature that creates a fixed pool of worker processes that get reused between work items: https://www.sidefx.com/docs/houdini/tops/services.html [www.sidefx.com]
Edited by tpetrick - May 17, 2022 11:25:28
PDG/TOPs » Ropfetch cooks workitems twice rewriting their results.
- tpetrick
- 578 posts
- Offline
Please attach a .hip file that demonstrates the issue -- it's hard to tell what's going on without one.
PDG/TOPs » Bulk processing fbx models and PDG FBX Export
- tpetrick
- 578 posts
- Offline
The ROP FBX Output TOP creates work items that cook an FBX ROP -- the docs for that node apply to the ROP FBX Output as well, and can be found here: https://www.sidefx.com/docs/houdini/nodes/out/filmboxfbx.html [www.sidefx.com]
The Export parameter is used to specify which Object nodes in your Houdini scene should be exported as FBX files. If you want to re-export the .bgeo.sc files produced by the HDA Processor, you'll probably need to create a File node in /obj that loads in the corresponding .bgeo.sc file.
It's a bit hard to tell what your network is doing though without actually seeing the .hip file. Note that you can also Ctrl+MMB on the failed work items that are visible in your screenshot, to see a more detailed log about what the work item was doing/why it failed.
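As a rough sketch of that File node approach (everything here is an assumption for illustration, not from the original post), you could build the loader with Python and point its File SOP at the work item's input:
import hou

# A hypothetical Object-level container whose File SOP reads whatever geometry
# file the current work item received as its input.
geo = hou.node("/obj").createNode("geo", "fbx_source")
file_sop = geo.createNode("file")
file_sop.parm("file").set("`@pdg_input`")  # backtick expression, evaluated per work item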
PDG/TOPs » cook time attribute?
- tpetrick
- 578 posts
- Offline
You can access the cook time of a work item using the Python API: https://www.sidefx.com/docs/houdini/tops/pdg/WorkItem.html#cookDuration [www.sidefx.com]
For example, you could use a Python Script TOP to store it to an attribute: work_item.setFloatAttrib("cooktime", parent_item.cookDuration)
PDG/TOPs » Task Graph Table showing work items from too many nodes
- tpetrick
- 578 posts
- Offline
Both of these sound like bugs. Please log them with support, including steps to reproduce and an example .hip file.
Edited by tpetrick - May 11, 2022 11:00:05
PDG/TOPs » array attribute to use in LOP for loop
- tpetrick
- 578 posts
- Offline
Using @attrib is the same as using @attrib.0 -- it accesses the first value in the attribute. You'll probably need to use one of the PDG expression functions instead of the @ shorthand form.
For example, you can use pdgattrib(..) to read an attribute value at a specific index, such as pdgattrib("variant", 2) to access the variant attrib at index 2. If you're accessing a string attrib you'll need to use pdgattribs instead, with an "s" on the end. Since the for each LOP creates an index variable for you, your nodes inside the loop can use pdgattrib("attribname", @ITERATION) to access an attribute for the current iteration.
Alternatively, you could also use pdgattribvals("variant"), which returns a space-separated string containing all values in the PDG attribute with that name (https://www.sidefx.com/docs/houdini/expressions/pdgattribvals.html). That should be the correct format for the Iterate Over Strings parm on the for each LOP.