Houdini 20.5 Nodes TOP nodes

Local Scheduler TOP node

Schedules work items on the local machine.

Since 17.5

This node is the default scheduler. It executes work items on your local machine, running them in parallel.

The scheduling parameters on this node are not related to the multi-threading of individual processes. To control the multi-threading of Houdini work items, use the Houdini Max Threads parameter.

Tip

To execute work items on a farm or on remote machines, use a different scheduler like the HQueue Scheduler.

Parameters

Scheduler

Global parameters for all work items using this scheduler.

Scheduling

Total Slots

Specifies the number of slots available to this scheduler for executing work items. The menu also provides default slot counts based on the detected CPU. A higher slot count means more work items can run at once.

For more information, see limiting resource usage.

Equal to 1/4 of Total CPU Count

Use the number of logical cores divided by four.

Equal to CPU Count Less One

Use the number of logical cores less 1.

Custom Slot Count

Use the number specified in the Custom Slot Count field.

Custom Slot Count

The number of slots available to execute work items. A positive number sets the slot count. A negative number sets the slot count as the total logical CPU cores minus the number specified in this field.
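The slot-count presets and the positive/negative Custom Slot Count semantics can be sketched as follows. This is an illustrative sketch only; the mode names are not the node's internal token values.

```python
import os

def resolve_slot_count(mode, custom=0, cores=None):
    """Sketch of the Total Slots presets and Custom Slot Count rules.

    mode: "quarter"  -> Equal to 1/4 of Total CPU Count
          "less_one" -> Equal to CPU Count Less One
          "custom"   -> Custom Slot Count field semantics
    """
    if cores is None:
        cores = os.cpu_count() or 1
    if mode == "quarter":
        return max(1, cores // 4)
    if mode == "less_one":
        return max(1, cores - 1)
    # Custom: a positive number sets the slot count directly; a negative
    # number means "total logical cores minus abs(custom)".
    if custom > 0:
        return custom
    return max(1, cores + custom)
```

For example, on an 8-core machine a custom value of -2 yields 6 slots.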

Verbose Logging

Print extra debugging information in job output logs.

Limit Jobs

When enabled, sets the maximum number of jobs that can be submitted by the scheduler at the same time.

For farm schedulers like Tractor or HQueue, this parameter can be used to limit the total number of jobs submitted to the render farm itself. Setting this parameter can help limit the load on the render farm, especially when the PDG graph has a large number of small tasks.

Block on Failed Work Items

When on, if there are any failed work items on the scheduler, the PDG graph cook is blocked from completing. This allows you to manually retry your failed work items. You can cancel the blocked cook by pressing the ESC key, clicking the Cancels the current cook button in the TOP tasks bar, or by using the cancel API method.

Paths

Working Directory

Specifies the directory where the cook generates intermediate files and output. The intermediate files are placed in a subdirectory. For the Local Scheduler or HQueue, this is typically $HIP. For other schedulers, this should be a directory relative to the Local Shared Root Path and Remote Shared Root Path; the path is appended to those root paths.

Hython

Determines which Houdini Python interpreter (hython) is used for your Houdini jobs. You can also specify this hython in a command using the PDG_HYTHON token.

Default

Use the default hython interpreter that is installed with Houdini.

Custom

Use the executable path specified by the Hython Executable parameter.

Hython Executable

This parameter is only available when Hython is set to Custom.

The full path to the hython executable to use for your Houdini jobs.

Load Item Data From

Determines how jobs processed by this scheduler should load work item attributes and data.

Temporary JSON File

The scheduler writes out a .json file for each work item to the PDG temporary file directory. This option is selected by default.

RPC Message

The scheduler’s running work items request attributes and data over RPC. If the scheduler is a farm scheduler, then the job scripts running on the farm will also request item data from the submitter when creating their out-of-process work item objects.

This parameter option removes the need to write data files to disk and is useful when your local and remote machines do not share a file system.

Compress Work Item Data

When on, PDG compresses the work item .json files when writing them to disk.

This parameter is only available when Load Item Data From is set to Temporary JSON File.
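The trade-off here is disk space versus a small (de)compression cost per item. A minimal sketch of writing and reading compressed work item data with Python's standard library (PDG's own file naming and schema are internal details not reproduced here):

```python
import gzip
import json

def write_item_data(attrs, path, compress=False):
    """Write a work item's attribute dict as JSON, optionally gzipped."""
    data = json.dumps(attrs).encode("utf-8")
    opener = gzip.open if compress else open
    with opener(path, "wb") as f:
        f.write(data)

def read_item_data(path, compress=False):
    """Read the attribute dict back, decompressing if needed."""
    opener = gzip.open if compress else open
    with opener(path, "rb") as f:
        return json.loads(f.read().decode("utf-8"))
```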

Path Mapping

Global

If the PDG Path Map exists, then it is applied to file paths.

None

Delocalizes paths using the PDG_DIR token.

Path Map Zone

When on, specifies a custom mapping zone that applies to all jobs executed by this scheduler. When off, the zone is determined by the local platform: LINUX, MAC, or WIN.

Validate Outputs When Recooking

When on, PDG validates the output files of the scheduler’s cooked work items when the graph is recooked to see if the files still exist on disk. Work items that are missing output files are then automatically dirtied and cooked again. If any work items are dirtied by parameter changes, then their cache files are also automatically invalidated. Validate Outputs When Recooking is on by default.
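The validation pass amounts to an existence check over each cooked item's reported outputs. A sketch, assuming a simple mapping of work item names to output paths (this is not PDG's internal API):

```python
import os

def items_to_dirty(cooked_items):
    """Return the names of work items whose output files are missing
    on disk, so they can be dirtied and cooked again.

    cooked_items: dict of work item name -> list of output file paths.
    """
    dirty = []
    for name, outputs in cooked_items.items():
        if any(not os.path.isfile(p) for p in outputs):
            dirty.append(name)
    return dirty
```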

Check Expected Outputs on Disk

When on, PDG looks for any unexpected outputs (for example, outputs produced by custom internal output-handling logic) that were not explicitly reported when the scheduler’s work items finished cooking. This check occurs immediately after the scheduler marks work items as cooked; expected outputs that were reported normally are not checked. If PDG finds any files that differ from the expected outputs, they are automatically added as real output files.

Temp Directory

Location

Determines where the local temporary files are written to. Files that are written to this location are needed for the PDG cook, but are not typically the end product and can be removed when the cook completes.

For example, log files and Python scripts are some of the files usually written during the cook.

Working Directory

Use the pdgtemp subdirectory of the directory specified in the Working Directory field.

Houdini Temp

Use the pdgtemp subdirectory of $HOUDINI_TEMP_DIR.

Custom

Use the custom directory specified in the Custom field.

Append PID

When on, a subdirectory is added to the location specified by the Location parameter and is named after the value of your Houdini session’s PID (Process Identifier). The PID is typically a 3-5 digit number.

This is necessary when multiple sessions of Houdini are cooking TOP graphs at the same time.
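The resulting layout can be sketched as a simple path composition; the pdgtemp subdirectory name follows the Location presets above, and the helper name is illustrative:

```python
import os

def temp_dir(location, append_pid=True):
    """Compose the PDG temporary directory: <location>/pdgtemp, with an
    optional per-session PID subdirectory so concurrent Houdini
    sessions do not collide."""
    path = os.path.join(location, "pdgtemp")
    if append_pid:
        path = os.path.join(path, str(os.getpid()))
    return path
```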

Custom

The full path to the custom temporary directory.

This parameter is only available when Location is set to Custom.

Delete Temp Dir

Determines when PDG should automatically delete the temporary file directory associated with the scheduler.

Never

PDG never automatically deletes the temp file directory.

When Scheduler is Deleted

PDG automatically deletes the temp file directory when the scheduler is deleted or when Houdini is closed.

When Cook Completes

PDG automatically deletes the temp file directory each time a cook completes.

RPC Server

Parameters for configuring the behavior of RPC connections from out-of-process jobs back to a scheduler instance.

Ignore RPC Errors

Determines whether RPC errors should cause out-of-process jobs to fail.

Never

RPC connection errors will cause work items to fail.

When Cooking Batches

RPC connection errors are ignored for batch work items, which typically make a per-frame RPC back to PDG to report output files and communicate sub item status. This option prevents long-running simulations from being killed on the farm, if the submitter Houdini session crashes or becomes unresponsive.

Always

RPC connection errors will never cause a work item to fail. Note that if a work item can’t communicate with the scheduler, it will be unable to report output files, attributes or its cook status back to the PDG graph.

Max RPC Errors

The maximum number of RPC failures that can occur before RPC is disabled in an out-of-process job.

Connection Timeout

The number of seconds an out-of-process job waits when making an RPC connection to the main PDG graph before assuming the connection has failed.

Connection Retries

The number of times to retry a failed RPC call made by an out-of-process job.

Retry Backoff

When Connection Retries is greater than 0, this parameter determines how much time to wait between consecutive retries.
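Taken together, the retry parameters describe a policy like the following sketch. The callable stands in for PDG's RPC layer, which is not shown; a fixed delay between attempts is an assumption here.

```python
import time

def call_with_retries(rpc_call, retries=3, backoff=0.5):
    """Invoke rpc_call, retrying up to `retries` times on connection
    errors, sleeping `backoff` seconds between consecutive attempts."""
    last_error = None
    for attempt in range(retries + 1):
        try:
            return rpc_call()
        except ConnectionError as e:
            last_error = e
            if attempt < retries:
                time.sleep(backoff)  # wait before the next attempt
    raise last_error
```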

Batch Poll Rate

Determines how quickly an out-of-process batch work item polls the main Houdini session for dependency status updates, if the batch is configured to cook when its first frame of work is ready. This has no impact on other types of batch work items.

Release Job Slot When Polling

Determines whether or not the scheduler should decrement the number of active workers when a batch is polling for dependency updates.

Job Parms

Job-specific parameters.

Tip

You can override these parameters per node with the Edit Parameter Interface. For more information, see Scheduler Job Parms / Properties.

Scheduling

Single

When on, only one work item is executed at a time.

Slots Per Work Item

When on, sets the number of slots consumed by each work item. Work items are only run by the scheduler if at least this number of slots are available.

Note

The total number of slots that are available to the scheduler is determined by the Total Slots parameter setting.

If some of your tasks consume a lot of computational or memory resources, you can use the Slots Per Work Item parameter to change the maximum number of processes that are run in parallel. For example, if there are 8 slots available as determined by Total Slots, then a maximum of 8 processes will be executed in parallel. However, if Slots Per Work Item is set to 2 on the processor node, then a maximum of 4 processes will be executed in parallel with each task consuming 2 slots worth of resources in the scheduler.
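The arithmetic in the example above reduces to integer division, as in this sketch:

```python
def max_parallel(total_slots, slots_per_item=1):
    """Maximum number of work items that can run concurrently:
    e.g. 8 total slots with 2 slots per item allows 4 processes."""
    return total_slots // max(1, slots_per_item)
```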

Minimum Available Memory

Specifies the amount of available memory that is required to start a job. This allows you to delay a job from starting until a specific amount of memory is available.

Rule

No Minimum

No check is performed for available memory.

MB Available

Check for the specified Minimum MB.

Percent Available

Check for the specified Minimum Percent.

Minimum MB

Sets the minimum amount of available memory in Megabytes (MBs). Available memory is the amount of memory that can be used by a process without going into swap.

The parameter is only available when Rule is set to MB Available.

Minimum Percent

Sets the minimum amount of available memory as a percentage of the system’s total memory. Available memory is the amount of memory that can be used by a process without going into swap.

The parameter is only available when Rule is set to Percent Available.
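The three Rule settings amount to the following gate. This is a sketch of the decision only; how the scheduler measures available memory (memory usable without swapping) is an internal detail, and the rule names here are illustrative.

```python
def can_start(rule, available_mb, total_mb, min_mb=0, min_percent=0.0):
    """Decide whether a job may start under the Minimum Available
    Memory rules: no check, an absolute MB floor, or a percentage of
    the system's total memory."""
    if rule == "none":          # No Minimum
        return True
    if rule == "mb":            # MB Available
        return available_mb >= min_mb
    if rule == "percent":       # Percent Available
        return available_mb >= total_mb * (min_percent / 100.0)
    raise ValueError("unknown rule: %r" % rule)
```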

Tasks

When a work item process terminates with a non-zero exit code, it is marked as failed by default. These parameters change that behavior.

On Task Failure

Determines what happens when a work item fails.

Report Error

The work item fails and an error message is added to the node.

Report Warning

The work item succeeds and a warning message is added to the node.

Retry Task

The work item restarts immediately, according to the Maximum Retries and Retry Count Attribute parameter settings.

Ignore

The work item succeeds and no message is issued.

Handle All Non Zero

When off, lets you specify a particular exit code with the Exit Code field. All other non-zero exit codes are regarded as failures.

Exit Code

Specifies the exit code that is handled by the On Task Failure parameter setting. All other non-zero exit codes are treated as failures.

This parameter is only available when Handle All Non Zero is off.
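The interaction between On Task Failure, Handle All Non Zero, and Exit Code can be sketched as below. The mode strings are illustrative, not the node's token values.

```python
def classify_exit(exit_code, on_failure="fail", handle_all_nonzero=True,
                  handled_code=1):
    """Map a process exit code to a work item status.

    on_failure: "fail" (Report Error), "warn" (Report Warning),
                "retry" (Retry Task), or "ignore" (Ignore).
    When handle_all_nonzero is off, only `handled_code` is handled by
    on_failure; every other non-zero code is a plain failure.
    """
    if exit_code == 0:
        return "success"
    if not handle_all_nonzero and exit_code != handled_code:
        return "fail"  # unhandled non-zero codes are always failures
    return {"fail": "fail",
            "warn": "success_with_warning",
            "retry": "retry",
            "ignore": "success"}[on_failure]
```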

Maximum Retries

Sets the maximum number of times the work item will be restarted.

Retry Count Attribute

When on, adds an integer attribute to the work item, set to the number of times the task was restarted.

Maximum Run Time

When on, this parameter determines the maximum time in seconds that a work item can run. When the time limit is exceeded, the work item’s process is terminated.

On Task Timeout

Determines what status to set on the work items that timed out.

This parameter is only available when Maximum Run Time is enabled.

Mark as Failed

Sets the work item’s status to Failed.

Mark as Succeeded

Sets the work item’s status to Succeeded, and writes a message to the work item’s log indicating that it was killed due to the time limit.
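The run-time limit behaves like a subprocess timeout: the process is killed when the limit elapses, and On Task Timeout decides the resulting status. A sketch using Python's standard library (PDG's own process supervision is internal):

```python
import subprocess
import sys

def run_with_limit(cmd, max_seconds, timeout_status="fail"):
    """Run cmd, killing it after max_seconds. A timed-out item gets
    `timeout_status` ("fail" for Mark as Failed, "success" for Mark
    as Succeeded); otherwise the exit code decides."""
    try:
        proc = subprocess.run(cmd, timeout=max_seconds)
    except subprocess.TimeoutExpired:
        return timeout_status
    return "success" if proc.returncode == 0 else "fail"
```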

Maximum Memory

When on, this parameter determines the maximum memory in megabytes that a work item can use before its process is terminated.

On Memory Exceeded

Determines what status to set on the work items that exceed the Maximum Memory limit.

Mark as Failed

Sets the work item’s status to Failed.

Mark as Succeeded

Sets the work item’s status to Succeeded, and writes a message to the work item’s log indicating that it was killed due to the memory limit.

Task Environment

Houdini Max Threads

When on, sets the maximum number of threads each work item can use. This also sets the HOUDINI_MAXTHREADS environment variable, which is used by Houdini-based programs like Mantra, Karma, Hython, and HBatch.

Requires GUI Window

Normally, processes are started such that they do not pop up command windows on the desktop when they run. However, some Windows applications require a GUI window.

Windows

When on, your work items can run GUI applications in pop-up windows.

Skip Loading Packages

When on, packages are not loaded into processes created by the local scheduler. Processes spawned by the local scheduler inherit the environment of the main Houdini session and usually do not need to load in Houdini packages again.

Unset Variables

Specifies a space-separated list of environment variables that should be unset in or removed from the scheduler’s task environment.

Environment File

Specifies an environment file whose variables are added to the job’s environment. An environment variable from the file overwrites an existing environment variable if they share identical names.

Environment Variables

Additional work item environment variables can be specified here. These are added to the job’s environment. If the value of a variable is empty, it is removed from the job’s environment.

Name

Name of the work item environment variable.

Value

Value of the work item environment variable.
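The merge behavior above, where a set value overrides and an empty value removes, can be sketched as a dictionary merge (the helper name is illustrative):

```python
def build_job_env(base_env, extra_vars):
    """Merge per-item environment variables into a job environment.
    A non-empty value overrides any existing variable of the same
    name; an empty value removes the variable entirely."""
    env = dict(base_env)
    for name, value in extra_vars.items():
        if value == "":
            env.pop(name, None)   # empty value -> unset the variable
        else:
            env[name] = value
    return env
```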

See also

TOP nodes