In your original .hip file, the problem with batching for the OpenGL ROP is that the input work items all have the frame range (1, 1, 1). In order for the second ROP Fetch to generate a batch, it needs to know the frame range boundaries of the input work items. What happens now is that it assumes each input item is from a distinct sequence, and therefore ends up creating 10 batches, each with a single frame and a range of (1, 1, 1). Batching itself only works over a range of frames, because a "batch" is just a work item that calls rop.render(...) with a range of frames instead of a single frame.
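The grouping behavior described above can be sketched in plain Python. This is an illustrative model of the described behavior, not the actual PDG implementation; `split_into_batches` is a hypothetical helper:

```python
def split_into_batches(items):
    """Illustrative sketch (not the real PDG code): an item joins the
    previous batch only when it shares that batch's frame range and
    its frame is the next one in the sequence."""
    batches = []
    for item in items:
        prev = batches[-1][-1] if batches else None
        if prev and item["range"] == prev["range"] \
                and item["frame"] == prev["frame"] + 1:
            batches[-1].append(item)
        else:
            batches.append([item])
    return batches

# Ten items that all carry frame 1 and range (1, 1, 1) look like ten
# distinct one-frame sequences, so they yield ten batches:
ten_singles = [{"frame": 1, "range": (1, 1, 1)} for _ in range(10)]
print(len(split_into_batches(ten_singles)))   # 10

# Ten items numbered 1..10 over the range (1, 10, 1) form one sequence:
one_sequence = [{"frame": f, "range": (1, 10, 1)} for f in range(1, 11)]
print(len(split_into_batches(one_sequence)))  # 1
```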
One way to make the example file work is to enable batching on your ROP Geometry, and enable the "Automatically Set Missing Frames" toggle. Since the incoming wedges have no frame information, the ROP Geometry will fall back to assuming the desired frame range is (1, num_items, 1). You'll then end up with 10 work items with frames ranging from 1 to 10. Enabling batching on the downstream ROP Fetch will then just work, because the incoming items have correct frames/ranges set on them.
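As a rough sketch of that fallback (assumed behavior based on the description above, not the actual node code):

```python
def set_missing_frames(num_items):
    # Assumed fallback: with no incoming frame data, treat the inputs as
    # one sequence spanning (1, num_items, 1), numbered 1..num_items.
    rng = (1, num_items, 1)
    return [{"frame": i + 1, "range": rng} for i in range(num_items)]

items = set_missing_frames(10)
print(items[0]["frame"], items[-1]["frame"])  # 1 10
```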
If you want to avoid modifying the ROP Geometry node, you can also drop an Attribute Create in between the ROP Geometry and ROP Fetch, and use it to manually update the "range" attribute to (1, 10, 1). That way the ROP Fetch will once again treat the input work items as a sequence of 10 frames, instead of 10 sequences of 1 frame each. You can enable batching on the ROP Fetch without any other changes to that node.
A third option is to put a Wait for All after the ROP Geometry, then wire the second ROP Fetch into that with the frame range start/end parms set to 1 and @partitionsize respectively, and batching enabled.
In general, the ROP Fetch, and batching in particular, is closely tied to frame ranges. There's an existing RFE to add more options for batching, e.g. batching by attribute or by a fixed number of work items. In that case, the script cooking the ROP would have to manually run a loop and call rop.render(frame=1) for each entry in the batch, instead of invoking the ROP once over a frame range. That would also solve this issue -- we can look into adding those controls over batching sooner rather than later.
PDG/TOPs » Q: Running OpenGL ROP work items (not frames) single batch
- tpetrick
- 586 posts
- Offline
PDG/TOPs » Generate work items based on String Array?
It's hard to tell without seeing an actual .hip file, but my guess is your network is creating that attribute in a Python Script that runs when the work item cooks, instead of when the work item generates. If an attribute is added when a work item cooks, downstream nodes cannot access it when they generate because it doesn't exist yet. Generation happens before anything cooks.
You can see that by doing an RMB -> Generate Node on the TOP node instead of cooking it.
Edited by tpetrick - Dec. 2, 2021 14:46:28
PDG/TOPs » Generate work items based on String Array?
The Work Item Expand node can be configured to create a work item from each entry in an array attribute. It'll set the index of the work item to match the index of each value in the array as well. With the Python Processor, you need to set the index yourself by passing an index=<something> argument to the call to addWorkItem.
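The index rule can be illustrated in plain Python; the `item_holder.addWorkItem(index=...)` call only exists inside a Python Processor's generate script in Houdini, so it is shown here as a comment:

```python
# Hypothetical string array attribute values:
values = ["red", "green", "blue"]

# Each array entry becomes one work item whose index matches the
# entry's position in the array:
expanded = [{"index": i, "value": v} for i, v in enumerate(values)]
print(expanded[2])  # {'index': 2, 'value': 'blue'}

# In a Python Processor generate snippet you'd pass the index explicitly:
#   for i, v in enumerate(values):
#       item_holder.addWorkItem(index=i)
```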
PDG/TOPs » PDG: Rop Fetch doesn't generate mesh correctly
Are you able to attach an example .hip file that reproduces the issue? My guess would be that something may be wrong with the way the .json file is being loaded and the geometry is in fact empty, but only when cooked via the TOP network. It's hard to say without being able to cook the file though.
PDG/TOPs » PDG error with CSV input
That bug was fixed in H18.5.476. From your screenshots it looks like you're using an older version of Houdini than that, so you'll need to update in order to resolve the issue.
PDG/TOPs » PDG: Python to change the OBJ level with TOPs makes sense?
It was just an example I included to show how you can report an output file back to TOPs, which will show up in the work item's attribute panel. Result data is an older term for the output files listed on the work item, which you can view by pressing the middle-mouse button on that work item in the UI.
PDG/TOPs » PDG: Python to change the OBJ level with TOPs makes sense?
The Python Script TOP has an option to evaluate either in process or out of process. When it's set to out of process, it creates work items that run using Hython by default, but can also be configured to use plain Python or a custom Python executable that doesn't even match the Python version of Houdini. Your script could do something like:
import hou

hou.hipFile.load("/path/to/base.hip")
hou.node("/obj").loadItemsFromFile("some/items")
# .. etc ...
hou.hipFile.save()
The script has access to a work_item variable that it can use to report output files and attributes back to the PDG graph:
work_item.addResultData("/path/to/base.hip", "file/hip", 0)
work_item.setIntAttrib("some_int_attr", [1, 2, 3])
The difference between a Python Script and a Python Processor is that the latter is used to generate a custom number of work items, while the former always generates one work item per input and runs a script when that work item cooks. They have the same restrictions with regard to editing the scene file -- in-process work items can't edit the current scene.
PDG/TOPs » PDG / RopFetch does not want to cook!!!
The error messages are stored on the work items. You can use Ctrl+middle mouse button on a work item to open the work item attribute panel, which also includes the log output for that work item.
PDG/TOPs » need help with wedge
In the file you attached, the Render Product node's file path is set to $HIP/render/`@test`/light`@lighting`.exr. However there doesn't appear to be an @test attribute on your work items -- you can middle-mouse click on the items to see the full list of attributes.
It seems like something in your file has created a geo_name context option -- that's where the value "2" is coming from. In the Context Options editor there's a menu-type context option with the same name, which will take priority over PDG attributes. You can use a P prefix on the attribute access to force it to use the PDG attribute even if a context option exists; for example, $HIP/render/`P@geo_name`/light`@lighting`.exr for the file path works for me.
Edited by tpetrick - Nov. 23, 2021 11:12:44
PDG/TOPs » Load different assets, inject data, and process
1. Yes, each server instance is a new Hython process.
2. Other nodes have no way to run on the Houdini server instance right now -- only the script code from the Send Command TOP can run on the server. The ROP Fetch will behave the same as normal when it's inside a server block.
Edited by tpetrick - Nov. 23, 2021 10:55:19
PDG/TOPs » PDG: Python to change the OBJ level with TOPs makes sense?
The Python Script TOP cannot be used to edit the current scene file, since it runs on a background thread. You'll need to either run the Python Script out of process and have it operate on its own copy of the scene, or use a Houdini Command Chain block to create persistent out-of-process Houdini sessions that you can run multiple scripts with.
PDG/TOPs » "Delete File Outputs From Disk" deletes more than output
That bug is also fixed in the next daily builds -- your original file will work as intended, without accidentally including the File COP as an output.
PDG/TOPs » Function/Class that moves dependencies into working dir
Starting with H19.0, custom file transfer logic can be written as a standalone handler instead of in the scheduler: https://www.sidefx.com/docs/houdini/tops/custom_tags.html#custfiletransferhandlers [www.sidefx.com]
This is so that the custom logic can be used with built-in TOP schedulers, instead of needing a whole custom scheduler just to implement the file transfer logic. The old approach of supplying a custom transferFile method on the scheduler should still work the same as before, however. The scheduler.transferFiles(..) method will call into the custom handlers if they exist, fall back to the scheduler logic if it exists, and finally use the built-in defaults if no customizations are applied.
What's the exact issue you're running into? Are you using a custom scheduler, or a built-in scheduler? If it's a custom scheduler, that sounds like a bug on our end with it not properly calling the scheduler method.
There's also a debug variable you can set to get verbose output about file transfers: https://www.sidefx.com/docs/houdini/ref/env.html#houdini_pdg_transfer_debug [www.sidefx.com]
Setting HOUDINI_PDG_TRANSFER_DEBUG=4 in the environment prints the maximum debug information.
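For example, on Linux or macOS you could set it in the shell before launching Houdini (the scene path below is just a placeholder):

```shell
# Enable the most verbose PDG file-transfer logging for anything
# launched from this shell session:
export HOUDINI_PDG_TRANSFER_DEBUG=4

# Then start Houdini (or a farm job) from the same shell, e.g.:
#   houdini /path/to/scene.hip
echo "$HOUDINI_PDG_TRANSFER_DEBUG"
```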
Edited by tpetrick - Nov. 2, 2021 16:11:19
PDG/TOPs » PDG Feedback Loop to create Nested Wedging?
You can't directly access previously created work items. The feedback loop gives you access to the list of output file(s) from the previous loop iteration, or any attribute values that have feedback enabled via the parm on the begin node's parameter interface. Those values are copied from the partition in the end block onto the corresponding work item in the begin block for the next iteration. The topology of the work items in the previous iteration is not accessible, though, so it's not possible to use a feedback loop to re-create a linear chain of wedge nodes.
PDG/TOPs » Loading a Houdini File via TOPs
You can't load a .hip file into the same process that's actively cooking nodes. For the Python Script TOP, you'll need to have the Python Script running out of process in order to safely load the file, by setting the "Evaluate Script During" parameter to "Cook (Out of Process)".
PDG/TOPs » Bake texture ROP - different resolution UDIMS
Keep in mind that work items in the ROP Fetch TOP run out of process. The ROP cook occurs as part of a script run using a separate instance of Hython that loads the .hip file, cooks the ROP, and reports output files. Depending on your platform and environment it can take a few seconds to spawn that process, so if your ROP itself only takes a few seconds, a large percentage of the time will be spent just starting the child process.
Additionally, in the file you attached the geometry import TOP isn't actually doing anything useful. It's writing a SOP's geometry to disk, but nothing in the scene loads that cache file back in. That means when the ROP Fetch work item cooks the bake texture ROP, it'll have to cook the target geometry again anyway. There should probably be a File Cache or File SOP somewhere that loads in `@pdg_input` in order for that to work properly.
Using the same .hip file, I added a File SOP to load in the cached geometry and configured the bake texture to use it, then cooked it with the same ROP Fetch node. For a fair comparison, I also used the Render to Disk in Background button on the underlying ROP so that it cooks out of process in the same way as TOPs. The resulting cook times are basically the same:
Edited by tpetrick - Oct. 1, 2021 16:50:44
PDG/TOPs » Add more wedge count during pdg cook ?
That's currently not possible. Once the network begins to cook, external Python code can't make those kinds of modifications to the graph. It is possible to add work items to a node using the Python API (https://www.sidefx.com/docs/houdini/tops/pdg/Processor.html#injectStaticItems), however that can only be done when the graph is not cooking. Any requests to insert work items into a node that is part of an active cook will be queued up and processed once the cook finishes.
PDG/TOPs » Can't use attr created with Python TOP in ffmpegencodevideo?
Your Python Script node is set to evaluate when the work item cooks, which means that it's creating the attribute too late. The Output File Path parameter is evaluated when the FFMpeg node generates, so the attribute does not exist yet on the input work item. The data dependency is supposed to be detected automatically, but it seems like there's a bug. In the meantime, you can fix it by setting the "Evaluate Script During" parameter on the Python Script to "Generate" instead of "Cook".
The Python Script node also has "Copy Input to Outputs" disabled by default -- in your case you'll need to enable it to preserve the input image file list for the FFMpeg node.
Edited by tpetrick - Sept. 13, 2021 10:59:17
PDG/TOPs » PDG Partition Node Dirtying Mode
That option existed in H17.5 because dynamic partitioners would dirty all their partitions any time upstream work items were changed. That's no longer the case from H18.5 onward, so the parameters no longer exist. Partitions are only dirtied if a) one of their dependencies is dirtied, or b) the contents of the partition changes on a recook, e.g. because of changes to the partitioning settings or an upstream work item.
PDG/TOPs » Best way to write out metadata together with geo?
I've attached a simple example that writes out terrain geo and a heightfield using COPs as part of the same work item. For illustrative purposes, I also included a Python SOP in the chain that prints the frame number when it cooks. In the work item log there'll be a single print out for each frame, followed by PDG reporting both the geo and COP output for that frame. Each work item in the node ends up with two output files (one for each ROP in the chain). Batching is also enabled.
One limitation right now is that the "Output Parm Name" option on the ROP Fetch only accepts a single value. PDG uses that parameter to determine which parm on the target ROP node defines the output file path, so it can evaluate that path for cache checking and reporting cooked results. It has a list of parameter names that it knows about internally, but when using custom ROPs with their own output parm naming convention, it's necessary to explicitly inform PDG of the output parm name.
That's not an issue if you're using built-in ROPs, since PDG knows about the output file parms on all of the standard ROP nodes, but if the ROP network consists of multiple custom ROPs with unique output path parms, the work item likely won't be able to report all of the outputs properly. That's easy enough to fix on our end, however, so that the parm can accept a space-separated list of output parms. I noticed in your .hip file that you're using the Labs CSV exporter, so I think you'll need that fix. I should be able to get that in by early next week.
Regarding the question about cooking order -- ROPs can cook either frame by frame or node by node. The ROP Fetch TOP exposes a parameter to configure which cooking behavior is used when the work items evaluate the target ROP network. That setting only really matters if you're cooking a batch, since otherwise each work item will only cook one frame and both options will behave the same.
Edited by tpetrick - Sept. 10, 2021 12:06:12