This node generates work items that cook an HDA using Hython. The HDA's parameter values can be specified in the HDA Parameters tab of the node; they are then applied as attributes on the work item and used in the job environment to configure the HDA before cooking.
HDA Processor currently supports Object, Sop, Cop2, and Lop type HDAs.
HDA Processor will create attributes on work items for all spare parms added to the HDA Parameters folder on the node. Additionally, it will create the following built-in attributes:
Indicates the Batch Mode selection (0 = Off, 1 = All Items in One Batch, 2 = Custom Batch Size).
Indicates the connection timeout limit (ms) when using the HDA Processor service.
Indicates the Cook Batch When selection (0 = All Items are Ready, 1 = First Item is Ready).
The path to the digital asset that will be cooked in the work item’s job.
The operator type within the digital asset. If an operator type is not specified using the Operator Type parameter, this attribute will be set to the empty string, and the work item will use the first operator in the .hda.
Indicates the Create File Inputs choice (0 = Do not create file inputs, 1 = Create file inputs).
Indicates the Missing Input choice (0 = Raise Error, 2 = Ignore).
Specifies the number of file inputs that will be provided to the HDA.
Indicates the Write Outputs selection (0 = Do not write outputs, 1 = Write outputs).
Indicates whether to save a debug .hip file (0 = Do not save .hip file, 1 = Save .hip file).
A list of paths to the file inputs that will be provided to the HDA.
Specifies the number of output files that will be written from the HDA.
When cooking an Object level HDA, this specifies the name of the SOP that is the source of the output file(s) to write out. If this is empty, the HDA Processor job will attempt to find a valid node to output and will issue a warning in the job log.
A list of paths that specify the files to be written out.
A list of file tags that specify the result tag for each output file.
A list of names of all the float attributes that specify an HDA Parameter float value.
A list of names of all the integer attributes that specify an HDA Parameter integer value.
A list of names of all the string attributes that specify an HDA Parameter string value.
A list of names of all the integer attributes that specify an HDA Parameter button value. Note that buttons get converted to a toggle within the HDA Parameters tab. When that toggle is enabled, it means that HDA Processor will press the button before cooking.
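The list attributes above act as an index into the per-parameter attributes: each name they contain is itself an attribute holding that parameter's value. The following is a pure-Python sketch of that pairing; it does not require Houdini, and all attribute and parameter names here (`float_parms`, `scale`, and so on) are illustrative, not the node's actual attribute names.

```python
# Simulated work item attribute dictionary. The *_parms lists name the
# attributes that carry HDA parameter values of each type.
work_item_attrs = {
    "float_parms": ["scale"],         # names of float HDA parameters
    "int_parms": ["iterations"],      # names of integer HDA parameters
    "string_parms": ["outputname"],   # names of string HDA parameters
    "scale": 2.5,
    "iterations": 4,
    "outputname": "piece",
}

def collect_hda_parms(attrs):
    """Gather the HDA parameter values referenced by the *_parms lists."""
    parms = {}
    for list_attr in ("float_parms", "int_parms", "string_parms"):
        for name in attrs.get(list_attr, []):
            parms[name] = attrs[name]
    return parms
```

A job script following this pattern would apply the collected name/value pairs to the instantiated asset's parameters before cooking.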
This parameter can be used to select an instance of a digital asset within the current Houdini session. All of the template node’s non-default parameter values will be applied to the HDA Processor’s HDA Parameters.
The path of the HDA to cook. This path can be absolute, or it can be relative to HOUDINI_PATH. For example, if an asset myasset.hda is in a sub-directory of hou.homeHoudiniDirectory(), you can specify the HDA using a path relative to that directory.
Any HDAs that are added to this node’s File Dependencies will automatically be copied to the
$PDG_TEMP/otls directory. Any nested HDAs that are required by the HDA being cooked should be added as a File Dependency if they are not otherwise available on the user’s HOUDINI_PATH.
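The lookup described above can be sketched in pure Python: an absolute path is used directly, while a relative path is tried against each search directory in order. This is an illustrative stand-in for a HOUDINI_PATH-style search, not the node's actual resolver.

```python
import os

def resolve_hda_path(hda_path, search_dirs):
    """Resolve an HDA path: absolute paths are used as-is, relative
    paths are tried against each search directory in order.
    Returns None when no matching file exists."""
    if os.path.isabs(hda_path):
        return hda_path if os.path.isfile(hda_path) else None
    for base in search_dirs:
        candidate = os.path.join(base, hda_path)
        if os.path.isfile(candidate):
            return candidate
    return None
```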
The operator type to select from within the digital asset. If no operator type is specified, HDA Processor will pick the first type it finds when cooking the asset.
Update HDA Parameters
When pressed, this button updates the HDA Parameters section.
Filter HDA Parameters
Opens a dialog that can be used to configure which parameters from the asset should be included in the HDA Parameters tab of this node. Any parameters that are included in the HDA Parameters section of the node will have their values set when HDA Processor cooks the HDA.
Work Item Generation
Whether this node generates static or dynamic work items. You should generally leave this set to "Automatic" unless you know the node’s work items can be computed statically, or that they need to be generated dynamically.
This node always creates dynamic work items: it waits until the upstream work items are known, and generates new work items from the upstream work items.
This node always creates static work items: it creates the number of work items it thinks it needs based on the parameters (and any upstream static items) before the network runs.
If the input is static (a static processor, or a partitioner with only static inputs, or a mapper), this node generates static work items, otherwise it generates dynamic work items.
How the processor node handles work items that report expected file results.
If the expected result file exists on disk, the work item is marked as cooked without being scheduled. If the file does not exist, the item is scheduled as normal.
If the expected result file exists on disk, the work item is marked as cooked without being scheduled. Otherwise the work item is marked as failed.
Work items are always scheduled and the expected result file is ignored, even if it exists on disk.
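The three behaviors above can be summarized as a small decision function. This is a pure-Python sketch; the mode names are illustrative labels for the three behaviors, not the parameter's actual token values.

```python
import os

def dispatch_work_item(mode, expected_output):
    """Return what happens to a work item under each cache behavior."""
    exists = os.path.isfile(expected_output)
    if mode == "cook_if_missing":      # mark cooked when the file exists
        return "cooked" if exists else "scheduled"
    if mode == "fail_if_missing":      # never cook; fail when missing
        return "cooked" if exists else "failed"
    return "scheduled"                 # always schedule, ignore the file
```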
No batching is performed. Each work item is submitted as its own individual job.
All Items in One Batch
All work items will be submitted as a single batch job and cook in a single Hython session. This option can significantly increase the speed at which the node finishes cooking, especially if the HDA cooks quickly enough that starting a Hython process takes longer than cooking the HDA itself.
Custom Batch Size
Work items will be submitted as batch jobs that are the size specified by the Batch Size parameter. This option can significantly increase the performance of the node since it allows for various optimizations to be performed (such as only having to instantiate the HDA once per batch job).
When Batch Mode is set to Custom Batch Size, this specifies the size of each batch. If the total number of work items is not an exact multiple of the batch size, the final batch contains the remaining work items.
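The batching rule is simple to state in code: consecutive slices of the requested size, with the final slice holding the remainder. A minimal sketch:

```python
def make_batches(work_items, batch_size):
    """Split work items into consecutive batches of batch_size; the
    final batch holds the remainder when the count is not an exact
    multiple of the batch size."""
    return [work_items[i:i + batch_size]
            for i in range(0, len(work_items), batch_size)]
```

For example, seven work items with a batch size of three produce batches of sizes 3, 3, and 1.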
Cook Batch When
Determines when batches of work items are scheduled. By default, the batch will be scheduled once the dependencies for all work items are cooked. However, it is possible to schedule the batch as soon as the first work item can run.
All Items are Ready
The batch will only be scheduled once all dependencies on all work items in the batch are satisfied.
First Item is Ready
The batch will be scheduled as soon as the dependencies for its first work item are ready. The HDA Processor wrapper script used by PDG will communicate back to PDG as the job is running to check the status of the dependencies before cooking each work item in the batch. This incurs slightly more network overhead, and requires ongoing communication between the job and PDG.
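The scheduling difference between the two choices reduces to which dependency flags must be satisfied before the batch is submitted. A pure-Python sketch, with illustrative mode names:

```python
def batch_ready(dep_satisfied, when):
    """dep_satisfied: per-work-item dependency flags, in cook order.
    when: 'all_items' waits for every item's dependencies;
    'first_item' only needs the first item's dependencies."""
    if when == "first_item":
        return bool(dep_satisfied) and dep_satisfied[0]
    return all(dep_satisfied)
```

With First Item is Ready, the remaining items' dependencies are then checked from inside the running job, as described above.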
Create File Inputs
When this parameter is enabled, one or more file nodes will be instantiated and wired as inputs to the asset. This can be used to chain multiple HDA Processor nodes together more easily. When this option is enabled, the HDA type must be able to support file inputs (for example, OBJ level HDAs will not work with this option).
Input File Source
When Create File Inputs is enabled, this selects the source of the input files.
Upstream Output Files
The upstream output files with the tag specified in the File Tag parameter will be used as inputs to the HDA. This option is particularly useful when the number of inputs may vary.
Custom File Paths
When this mode is selected, the input files to the HDA must be manually specified using the Number of Inputs multiparameter.
Selects how HDA Processor behaves when one or more of its inputs is missing.
If an input is missing on an input File node, the File node will error out and cause the cook to fail.
The File nodes will be set to produce "No Geometry" if an input file is missing from disk and no error will be raised.
When the file source is set to Upstream Output Files, this specifies the file tag of the upstream output files to use.
Number of Inputs
The number of input file nodes to create.
Input File #
The path to the input file that will be loaded in by the created file node.
This parameter is purely for UI purposes. When an HDA is selected, HDA Processor attempts to set the HDA type, which gives the user more information about the type of asset they are working with and deactivates any parameters that do not need to be filled in for that type of asset.
When this toggle is turned on, HDA Processor will output geometry to disk from the display node in the asset. If the asset contains a COP network or a ROP network, it may be useful to turn this off.
When the HDA is an Object level asset, an operator path or SOP name must be provided so that HDA Processor knows what geometry to output. It is recommended to use an operator path specified relative to the top level node of the HDA. For example, if you have an Object level asset with a geometry node contained within it named geometry_to_export (and you want to write out the geometry of that node to a file), the path should be specified as geometry_to_export.
Number of Outputs
Since SOPs can have more than 1 output, this parameter is used to specify the number of outputs.
Output File Name #
When writing outputs, this is the name of the output file.
Output Tag #
When writing outputs, this is the result tag assigned to the result file.
Save Debug .hip File
When this toggle is enabled, a debug .hip file containing the instantiated asset, the file nodes and all parameter values will be written to disk. This is useful for tracking down problems with the asset. Be careful not to leave this on because it will significantly slow down execution.
Use HDA Processor Service
With this enabled, HDA Processor will cook using the HDA Processor service. Note that there must be an active HDA Processor service available. See the PDG Service Manager documentation for more information on using services. If the node is unable to connect to the service, HDA Processor will cook in regular mode and add a warning to the node.
Connection Timeout (ms)
Specifies the connection timeout length (in milliseconds) for HDA Processor jobs. If this time is exceeded during the cook of the work item, the connection will time out and the job will fail.
TOP Scheduler Override
This parameter overrides the TOP scheduler for this node.
Work Item Priority
This parameter determines how the current scheduler prioritizes the work items in this node.
Inherit From Upstream Item
The work items inherit their priority from their parent items. If a work item has no parent, its priority is set to 0.
The work item priority is set to the value of Priority Expression.
Node Defines Priority
This parameter is only available when Work Item Priority is set to Custom Expression.
This parameter specifies an expression for work item priority. The expression is evaluated for each work item in the node.
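Per-item evaluation can be pictured as mapping an expression over the node's work items. In this pure-Python sketch the expression is a plain callable; on the node it is written in the Priority Expression parameter, and the work item dictionaries here are illustrative.

```python
def assign_priorities(work_items, priority_expr):
    """Evaluate a priority expression once per work item, returning the
    resulting priorities in work item order."""
    return [priority_expr(item) for item in work_items]
```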
This section of the node is filled in automatically based on the parm interface of the selected digital asset. Parameters set here will be copied to work item attributes and applied to the asset when it is cooked.
A list of files that should be copied to the PDG working directory before the first work item in this node is executed. This can be used to ensure that supporting files like digital assets and custom scripts are available for the work item job.
The specified paths can be absolute or relative to HOUDINI_PATH.