Found 445 posts.
Search results
PDG/TOPs » some errors on the ROP mantra node
- tpetrick
- 596 posts
- Offline
Your scene is set up such that the File SOPs attempt to load `@pdg_output`. That variable resolves to the output of the active task -- if you want to render the input geometry file, you should instead set it to `@pdg_input`. Right now, when the Mantra work items render, the File SOP tries to load the output .exr file instead of the input geometry from the upstream work item.
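For example, on the File SOP's Geometry File parm (a string parm, so the attribute reference needs backticks), a minimal setup would be:

Geometry File: `@pdg_input`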
Edited by tpetrick - May 31, 2022 12:36:42
PDG/TOPs » TOPS render node not writing to disk when in a locked HDA
- tpetrick
- 596 posts
- Offline
Please attach an example .hip file demonstrating the issue, preferably by logging a bug or question with support.
PDG/TOPs » HDA Processor, What is the ideal workflow for nested HDA's?
- tpetrick
- 596 posts
- Offline
There were some changes to the HDA Processor for a different RFE that may address your issue with the operator type. It's now possible to set it with an expression, since it's no longer bound to the .hda file path. Those changes will be live in tomorrow's daily build of H19.0.
For the other items, please log an RFE or Bug with an example .hip/.hda file that demonstrates the issues at hand.
Edited by tpetrick - May 20, 2022 13:28:05
PDG/TOPs » Filter by Expression/Range to delete wedge nodes at Random
- tpetrick
- 596 posts
- Offline
It's pretty hard to say what the issue is without actually seeing the .hip file.
If you want to delete exactly 200 work items, that approach probably won't work. The filter expression is evaluated independently for each work item, which means there's no shared state and no way to ensure that an exact number are deleted. You're better off using a Python Processor TOP, which has access to the full list of input work items. Something like:
import random

# pick 200 random upstream indices to drop
choices = random.sample(range(0, len(upstream_items)), k=200)

# copy every upstream work item except the chosen ones
for index, upstream_item in enumerate(upstream_items):
    if index not in choices:
        item_holder.addWorkItem(parent=upstream_item)
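Here upstream_items and item_holder are the variables that the Python Processor TOP provides to its Generate snippet -- the list of input work items and the holder used to create new ones.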
PDG/TOPs » Filter by Expression/Range to delete wedge nodes at Random
- tpetrick
- 596 posts
- Offline
You can't mix HScript expression functions/variables like @pdg_frame and Python in the same expression. If you're using a Python expression, you'll need to use pdg.workItem() to access the work item that the parameter is being evaluated against. For example, pdg.workItem().frame in a Python expression is the same as @pdg_frame in an HScript expression.
pdg.workItem() will return a pdg.WorkItem instance that you can use to access attributes and intrinsic data: https://www.sidefx.com/docs/houdini/tops/pdg/WorkItem.html [www.sidefx.com]
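For example, on a parm whose expression language is set to Python (the "seed" attribute name here is just an illustration):

pdg.workItem().frame                    # same as @pdg_frame in HScript
pdg.workItem().intAttribValue("seed")   # read an integer attribute value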
Edited by tpetrick - May 17, 2022 12:10:09
PDG/TOPs » Flipbook wedge work items as sequence
- tpetrick
- 596 posts
- Offline
The ROP fetch's batching feature only works over a range of frames -- each batch will render the ROP by calling the ROP's render method over the full range specified on the node, instead of just for a single frame. If you're only rendering frame 1 for each work item, it's currently not possible to batch the work.
You could instead use Services though, which are a more general feature that creates a fixed pool of worker processes that get reused between work items: https://www.sidefx.com/docs/houdini/tops/services.html [www.sidefx.com]
Edited by tpetrick - May 17, 2022 11:25:28
PDG/TOPs » Ropfetch cooks workitems twice rewriting their results.
- tpetrick
- 596 posts
- Offline
Please attach a .hip file that demonstrates the issue -- it's hard to tell what's going on without one.
PDG/TOPs » Bulk processing fbx models and PDG FBX Export
- tpetrick
- 596 posts
- Offline
The ROP FBX Output TOP creates work items that cook an FBX ROP -- the docs for that node apply to the ROP FBX Output as well, and can be found here: https://www.sidefx.com/docs/houdini/nodes/out/filmboxfbx.html [www.sidefx.com]
The Export parameter is used to specify which Object nodes in your Houdini scene should be exported as FBX files. If you want to re-export the .bgeo.sc files produced by the HDA Processor, you'll probably need to create a File node in /obj that loads in the corresponding .bgeo.sc file.
It's a bit hard to tell what your network is doing though without actually seeing the .hip file. Note that you can also Ctrl+MMB on the failed work items that are visible in your screenshot, to see a more detailed log about what the work item was doing/why it failed.
PDG/TOPs » cook time attribute?
- tpetrick
- 596 posts
- Offline
You can access the cook time of a work item using the Python API: https://www.sidefx.com/docs/houdini/tops/pdg/WorkItem.html#cookDuration [www.sidefx.com]
For example, you could use a Python Script TOP to store it to an attribute:

work_item.setFloatAttrib("cooktime", parent_item.cookDuration)
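A slightly fuller sketch for that Python Script TOP, guarding against a work item that has no parent (the "cooktime" attribute name is just an example):

# copy the upstream (parent) work item's cook duration onto this work item
if parent_item is not None:
    work_item.setFloatAttrib("cooktime", parent_item.cookDuration)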
PDG/TOPs » Task Graph Table showing work items from too many nodes
- tpetrick
- 596 posts
- Offline
Both of these sound like bugs. Please log them with support, including steps to reproduce and an example .hip file.
Edited by tpetrick - May 11, 2022 11:00:05
PDG/TOPs » array attribute to use in LOP for loop
- tpetrick
- 596 posts
- Offline
Using @attrib is the same as using @attrib.0 -- it accesses the first value in the attribute. You'll probably need to use one of the PDG expression functions instead of the @ shorthand form.
For example, you can use pdgattrib(..) to read an attribute value at a specific index, such as pdgattrib("variant", 2) to access the variant attrib at index 2. If you're accessing a string attrib you'll need to use pdgattribs instead, with an "s" on the end. Since the for each LOP creates an index variable for you, the nodes inside the loop can use pdgattrib("attribname", @ITERATION) to access an attribute for the current iteration.
Alternatively, you could also use pdgattribvals("variant"), which returns a space-separated string containing all values in the PDG attribute with that name (https://www.sidefx.com/docs/houdini/expressions/pdgattribvals.html). That should be the correct format for the Iterate Over Strings parm on the for each LOP.
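Concretely, since both of those parms are strings, the expression calls need backticks when typed into them -- a sketch, assuming "variant" is a string attribute:

Iterate Over Strings:   `pdgattribvals("variant")`
Inside the loop:        `pdgattribs("variant", @ITERATION)`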
PDG/TOPs » ffmpeg -- how to improve gif export quality
- tpetrick
- 596 posts
- Offline
Editing the command line won't have any impact on the attributes on the node -- it'll just change the command line that gets executed for the work item.
Normally the command updates automatically based on changes to the parm interface, but once you enable explicit editing of the FFMPEG command line that'll no longer be the case. There are two portions of the command that you'll need to keep intact to ensure that it works as expected -- the input file list and the output file path.
Input image files are specified using a frame list file, rather than by directly listing the image files in the command. This is technically only required when the list of input images is longer than your platform's shell command length limit, but for consistency the frame list file is used in all cases. That file is specified as the input in the command line string using the following:
-i "$PDG_TEMP/$HIPNAME.$OS.`@pdg_index`_framelist.txt"
PDG writes the frame list file for you, so as long as you include that as your input to the ffmpeg executable it should find the images.
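If the command feeds that file to ffmpeg's concat demuxer (-f concat), the frame list is a plain text file of file directives, along these lines (paths made up for illustration):

file '/path/to/frames/render.0001.png'
file '/path/to/frames/render.0002.png'
file '/path/to/frames/render.0003.png'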
If you're using the "Convert" preset instead, e.g. to convert an existing file from .mp4 to .gif, then the input file is specified directly using an expression instead of a framelist file:
-i "`pdginput(0, file/video, 0)`"
The output movie file is the last argument in the command line:
"$HIP/video/$HIPNAME.$OS.`@pdg_index`.mp4"
Once you enable manual editing of the command, changes to other parms on the node will no longer affect the command line string. You'll need to manually add/remove flags as needed.
Edited by tpetrick - May 4, 2022 14:03:06
PDG/TOPs » How to include wedge's parameter information in the output.
- tpetrick
- 596 posts
- Offline
If you want to include the values in the file name, you can use the @attrib syntax to access the wedge values. For example, $HIPNAME.$OS.`@noise`.`@roughness`.$F4.bgeo.sc, assuming your wedge is creating "noise" and "roughness" attributes. If you want to pad the values, for example to 4 digits, you can use padzero(4, @roughness) or @roughness:4. Finally, if the values have multiple components they can be accessed with their component suffix: $HIPNAME.$OS.`@roughness.x:4`.`@roughness.y:4`.bgeo.sc.
TOPs also has a Text Output node which can be used to write the values to a text file. The text field on that node can contain expressions as well -- just like any other string parm, it needs backticks. You could for example wire it after the ROP Geometry that's writing out .bgeo files, and use it to write out a corresponding .txt file with the wedge variations. For example, assuming the attributes exist:
Wedge Parameters
Noise Type: `@noise`
Frequency: `@frequency`
Roughness: `@roughness`
Edited by tpetrick - April 22, 2022 13:11:25
PDG/TOPs » Geometry Import - Wedge Attribute
- tpetrick
- 596 posts
- Offline
There are a few issues with your file. The first is that your expression has a typo -- it uses @curveassset but it should be @curvesasset instead. The second is that you're pointing the "SOP Path" parameter at an Object node that contains another Object node. The path should point to a SOP instead -- `@curvesasset`/geo1, for example, works as expected.
PDG/TOPs » Python Scheduler's number of concurrent operations
- tpetrick
- 596 posts
- Offline
That's most likely because the HDA Processor relies on a number of environment variables, such as $PDG_ITEM_ID, which it uses to load the work item's JSON data file when running out of process. The example snippet I pasted is the bare minimum to run work items -- it doesn't set up the job environment, for example, so it can't run work items that depend on that. You'd need to do something like the following when spawning the process:
import os
import subprocess

# build the job environment expected by out-of-process work items
job_env = os.environ.copy()
job_env['PDG_RESULT_SERVER'] = str(self.workItemResultServerAddr())
job_env['PDG_ITEM_NAME'] = str(work_item.name)
job_env['PDG_ITEM_ID'] = str(work_item.id)
job_env['PDG_DIR'] = str(self.workingDir(False))
job_env['PDG_TEMP'] = temp_dir
job_env['PDG_SCRIPTDIR'] = str(self.scriptDir(False))

# run the given command in a shell
proc = subprocess.Popen(item_command, shell=True, env=job_env)
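In that snippet, temp_dir stands in for whatever temp directory your scheduler allocates for the job; the other values come from the scheduler's own API methods, the same ones the built-in local scheduler uses.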
PDG/TOPs » Python Scheduler's number of concurrent operations
- tpetrick
- 596 posts
- Offline
On the local scheduler, self.run_list contains the list of active work items. During onSchedule the work item is spawned using subprocess.Popen, which is non-blocking, and added to self.run_list. During onTick the scheduler iterates over all of the entries in the running item list, and calls poll(..) on each process to check the status of that process.
Here is a very basic example of how you might implement that using the Python Scheduler:
onSchedule:
import subprocess

# serialize the work item's data file and expand tokens in its command
self.createJobDirsAndSerializeWorkItems(work_item)
item_command = self.expandCommandTokens(work_item.command, work_item)

# spawn the process without blocking
proc = subprocess.Popen(item_command, shell=True)

# track the running process on the scheduler itself
if not hasattr(self, "__runlist"):
    self.__runlist = []
self.__runlist.append((proc, work_item.id))

self.workItemStartCook(work_item.id, -1)
print("Starting {}".format(work_item.id))

return pdg.scheduleResult.Succeeded
onTick:
if hasattr(self, "__runlist"):
    # iterate over a copy, since finished entries are removed from the list
    for entry in list(self.__runlist):
        exit_code = entry[0].poll()
        if exit_code is not None:
            self.__runlist.remove(entry)
            print("Done {} with status {}".format(entry[1], exit_code))
            if exit_code == 0:
                self.workItemSucceeded(entry[1], -1, 0)
            else:
                self.workItemFailed(entry[1], -1, 0)
This simple example has no limit on the number of work items that can run at a time. If you want to limit them, your scheduling code needs to check the number of active work items and return pdg.scheduleResult.Deferred or pdg.scheduleResult.FullDeferred if it wishes to defer available work items until later. It also does not handle batch work items, configuring the job environment, etc.
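As a sketch of that limit check, onSchedule could start with something like the following before spawning the process (the cap of 4 is an arbitrary example value):

# defer scheduling if too many work items are already running
max_items = 4
if hasattr(self, "__runlist") and len(self.__runlist) >= max_items:
    return pdg.scheduleResult.Deferred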
Edited by tpetrick - April 12, 2022 11:32:23
PDG/TOPs » ffmpeg gif compiling gif frames out of order
- tpetrick
- 596 posts
- Offline
There are a few issues with the TOP network in that file.
The first issue is that the Wedge node creates three wedges, with integer "seed" values of 0, 0, and 1. That means the second set of 30 frames is a duplicate of the first set of 30. Since the Wait for All is configured to split by the "seed" attribute, the first partition in that node ends up with 60 work items in it. The partition will therefore contain two frame 1 work items, two frame 2s, and so on. To fix that, either a) change the Wedge so it doesn't create duplicate values for "seed", or b) configure the Wait for All to split by the "wedgeindex" attribute instead.
The second issue, which is also the main problem, is that your Wait for All has the "Sort Contents" parameter set to "None". That means the partitions in that node will be completely unsorted/in random order. The output files on the partition match the sort order of the work items in that partition, so the file order will also be jumbled. The FFMPEG node uses the files in the order that they're specified on the input, so that also affects the .gif. Setting "Sort Contents" back to the default "Work Item Index", along with the change mentioned in the previous paragraph, should fix your .gifs.
Edited by tpetrick - April 11, 2022 18:25:15
PDG/TOPs » Geometry Import - Wedge Attribute
- tpetrick
- 596 posts
- Offline
It's hard to say from screenshots what the problem is. Can you attach a .hip file that demonstrates the issue?
PDG/TOPs » Python Scheduler's number of concurrent operations
- tpetrick
- 596 posts
- Offline
Your onSchedule implementation needs to return immediately -- it can't wait for the item to cook. It should submit the work item to whatever will be cooking it, and then return.
The way it works for the local scheduler, for example, is the onSchedule function spawns a child process for the work item, and then returns as soon as the API call to start the process completes. The scheduler's onTick callback then checks the status of all actively running work item processes. For ones that have completed it marks the work item as succeeded/failed based on the return code of the process. It ignores processes that are still running, unless they've hit the run limit in which case it kills them.
The Python code for the local scheduler is available in $HFS/houdini/pdg/types/schedulers/local.py for reference. Note that the local scheduler stores a list of actively running processes as a member variable on itself. Also note that it uses subprocess.Popen to spawn the process, without waiting on it to complete, and process.poll(..) to check the status of a running process at a later point.
With farm schedulers like HQ or Tractor, the onSchedule method works in the same way. It makes the appropriate farm scheduler API call to create a new job, and returns as soon as that job is accepted by the farm system. It doesn't wait for the job to cook. When the job finishes, typically the farm job itself notifies the scheduler over RPC that the work item has finished -- for example with HQ's per-job Success or Fail callbacks.
Edited by tpetrick - April 7, 2022 18:23:33