Houdini 18.0 Nodes TOP nodes

Deadline Scheduler TOP node

The PDG scheduler for Thinkbox’s Deadline software.

Overview

This scheduler node utilizes Thinkbox’s Deadline to schedule and execute PDG work items on a Deadline farm.

To use this scheduler, you need the Deadline client installed and working on your local machine. You must also set up Deadline on your farm machines to receive and execute jobs.

Deadline 10.0.16.6 is the most recent version we have tested.

Scheduling

This node can do two types of scheduling using a Deadline farm:

  • The first type of scheduling runs an instance of Houdini on a local submission machine that coordinates the PDG cook on the farm and waits until the cook has completed. This allows you to cook the entire TOP graph, only part of the TOP graph, or a specific node.

    With this scheduling type, the node also schedules one main job with a task for each work item generated, and if necessary, schedules a second Message Queue (MQ) job to run the MQ server that receives the work item results over the network. For more information on MQ, see the Message Queue section below.

  • The second type of scheduling uses Submit Graph As Job.

    With this scheduling type, the node schedules a hython session that opens up the current .hip file and cooks the entire TOP network. If the TOP network uses the Deadline scheduler node(s), then another job might also be scheduled on the farm, and that job will follow the scheduling behavior mentioned above.

PDGDeadline plug-in

By default, this Deadline scheduler requires and uses the custom PDGDeadline plugin (located in $HFS/houdini/pdg/plugins/PDGDeadline) that ships with Houdini. You do not have to set up the plugin; it works out of the box. The rest of this documentation page assumes that you are using the PDGDeadline plugin.

On Windows, Deadline processes require executables to have the .exe suffix. To meet this requirement, you can append \$PDG_EXE to executables.

The PDGDeadline plugin evaluates the \$PDG_EXE specified in work item executables as follows:

Windows

$PDG_EXE is replaced by .exe.

For example, an executable written as \$HFS/bin/hython\$PDG_EXE evaluates on Windows as:

C:/Program Files/Side Effects Software/Houdini 17.5.173/bin/hython.exe

Mac

$PDG_EXE is removed.

Linux

$PDG_EXE is removed.
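The per-platform rules above amount to a simple token substitution. A minimal sketch (the function name is illustrative; the real logic lives in the PDGDeadline plugin):

```python
def resolve_pdg_exe(command, platform):
    """Resolve the $PDG_EXE token: '.exe' on Windows, removed on Mac and Linux."""
    suffix = ".exe" if platform == "windows" else ""
    return command.replace("$PDG_EXE", suffix)

# A mapped Windows hython path, after $HFS has been path-mapped:
resolve_pdg_exe(
    "C:/Program Files/Side Effects Software/Houdini 17.5.173/bin/hython$PDG_EXE",
    "windows")
```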

To support cross-platform farms, if you set the PATH environment variable on a work item, its path separators are replaced with __PDG_PATHSEP__ in the task specification file. When the task runs on the farm, the PDGDeadline plugin converts __PDG_PATHSEP__ back to the local OS path separator. Note that the work item's PATH, if set, overrides the local machine's PATH environment.
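The round trip can be sketched as follows (hypothetical helper names; the actual conversion happens in the task specification file and the PDGDeadline plugin):

```python
PATHSEP_TOKEN = "__PDG_PATHSEP__"

def encode_pathsep(path, submit_sep):
    """On submission: replace the submitting OS's separator with a neutral token."""
    return path.replace(submit_sep, PATHSEP_TOKEN)

def decode_pathsep(path, local_sep):
    """On the farm: restore the executing OS's local separator."""
    return path.replace(PATHSEP_TOKEN, local_sep)

# Submitted from Linux (':'), executed on Windows (';'):
encoded = encode_pathsep("/usr/bin:/opt/bin", ":")
decode_pathsep(encoded, ";")
```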

Installation

  1. Install the Deadline client on the machine from which you will cook the TOP network. Refer to Thinkbox’s instructions for how to install Deadline on each platform.

  2. Make sure of the following:

    • The deadlinecommand executable is working. See the $DEADLINE_PATH note below.

    • The Deadline repository is accessible on the machine where the TOPs network will cook (either the repository is local, or the network mount/share it’s on is available locally).

    • If you are using a mixed farm set-up (for example, a farm with any combination of Linux, macOS, and Windows machines), then set the following path mapping for each OS:

      • Navigate to Deadline’s Configure Repository Options > Mapped Paths.

      • On Windows:

        • Set up path mapping for $HFS to the Houdini install directory on Deadline Worker machines.

        • Set up path mapping for $PYTHON to <expanded HFS path>/bin/python.exe or your local Python installation.

      • On macOS and Linux:

        • Set up path mapping for $HFS to the Houdini install directory, or override the default parm values for Hython and Python in the Job Parms interface.

          Use \ in front of $HFS to escape Houdini’s local evaluation. Using \$HFS makes sure that the node evaluates the variable on the farm machine running the job.

        • Repeat for any other variable that needs to be evaluated on the farm machine.

  3. Set the $DEADLINE_PATH variable to point to the Deadline installation directory.

    • If DEADLINE_PATH is not set:

      • You can add the Deadline installation directory to the system path.

      • On macOS, the node falls back to checking the standard Deadline install directory.

TOP Attributes

deadline_job_task_id

string

When the scheduler submits a work item to Deadline, it adds this attribute to the work item in order to track the Deadline job and task IDs.

Parameters

Scheduler

Global parameters for all work items.

Submit Graph

Allows you to schedule a standalone job (a job that does not require Houdini to be running) on your Deadline farm that cooks the current TOP network.

The parameters below determine the behavior of this job.

Job Name

Specifies the name to use for the job.

Use MQ Job Options

When on, the node uses the job settings specified by the MQ Job Options parameters.

Submit Graph As Job

Cooks the current TOP network as a standalone job in a hython session on your Deadline farm.

Data Layer Server

Enable Server

When on, turns on the data layer server for the TOP job that cooks on the farm. This allows PilotPDG or other WebSocket clients to connect to the cooking job remotely to view the state of PDG.

Server Port

This parameter is only available when Enable Server is on.

Determines which server port to use for the data layer server.

Automatic

A free TCP port selected by the node.

Custom

A custom TCP port specified by the user.

This is useful when there is a firewall between the farm machine and the monitoring machine.

Auto Connect

This parameter is only available when Enable Server is on.

When on, the node will try to send a command to create a remote visualizer when the job starts.

If successful, a remote graph is created and it automatically connects to the server executing the job. The client submitting the job must be visible to the server running the job or the connection will fail.

Working Directory

Local Shared Path

Specifies the root path on your local machine that points to the directory where the job generates intermediate files and output. The intermediate files are placed in a subdirectory.

Use this parameter if your local and farm machines have the same path configuration.

Remote Shared Path

When on, lets you specify the path to the mounted directory on the Deadline Worker machines at which the working directory is rooted, overriding the Local Shared Path.

This path can include variables that allow the node to resolve to platform-specific paths (if using a multi-platform farm). If using the default value of $PDG_DIR, then $PDG_DIR should be mapped to the actual mounted value for each operating system used by Workers in Deadline’s path mapping. You can also set this path to a value that Workers are already using in an existing farm.

Use this parameter if your farm machines have different path configurations than your local submission machine. This is required for mixed farms where the mounted paths are different due to OS differences.
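For example, a mixed-farm setup might add Mapped Paths entries along these lines (the mount points shown are purely illustrative):

```
$PDG_DIR  (Windows)  ->  P:/pdg_work
$PDG_DIR  (Linux)    ->  /mnt/pdg_work
$PDG_DIR  (macOS)    ->  /Volumes/pdg_work
```

With these mappings in place, paths written into task files as $PDG_DIR resolve correctly on whichever OS picks up the task.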

Job Description

Job description properties that will be written to a Deadline job file.

Batch Name

(Optional) Specifies the batch name under which to group the job.

Job Name

(Required) Specifies the name of the job.

Comment

(Optional) Specifies the comment to add to all jobs.

Job Department

(Optional) Specifies the default department (for example, Lighting) for all jobs. This allows you to group jobs together and provide information to the farm operator.

Job Options

Pool

Specifies a named pool to use to execute the job.

By default, Pool is none.

Group

Specifies a named group to use to execute the job.

By default, Group is none.

Priority

Specifies the priority for all new jobs.

The minimum value is 0 and the maximum value is a setting from Deadline’s repository options (usually 100).

By default, Priority is 50.

Concurrent Tasks

Specifies the number of tasks to run simultaneously for each Deadline Worker.

By default, Concurrent Tasks is 1 (one task at a time).

Limit Concurrent Tasks to CPUs

When on, limits the number of concurrent tasks to the number of CPUs on the Deadline Worker or the current CPU Affinity settings in Deadline.

Pre Job Script

Specifies the path to the Python script to run when the job starts.

Post Job Script

Specifies the path to the Python script to run after the job finishes.

Machine Limit

Specifies the maximum number of Deadline Worker machines that can execute this job.

By default, Machine Limit is 0 (no limit).

Machine List

Specifies a restricted list of Deadline Workers that can execute this job. The kind of list that is written out is determined by the Machine List is A Blacklist parameter below.

If the Machine List is A Blacklist parameter is off, the list is written out as Whitelist in the job info file. If Machine List is A Blacklist is on, it is written out as Blacklist.

Machine List is a Blacklist

When on, the Machine List is written out as Blacklist. This means that the listed machines are not allowed to execute this job. When off, only the machines in the list are allowed to execute this job.

Limits

Specifies the Deadline Limits (Resource or License type) required for the scheduled job. The limits are created and managed through the Deadline Monitor in Deadline.

On Job Complete

Determines what happens to the job's information when it finishes.

By default, On Job Complete is Nothing.

For more information, see Deadline’s documentation.

Job File Key-Values

Job Key-Values

Lets you add custom key-value options for this job.

These are written out to the job file required by the Deadline plugin.
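Deadline job files are plain Key=Value text, so a custom entry added here is simply appended to the generated job file. An illustrative fragment (all values hypothetical):

```
Plugin=PDGDeadline
Name=my_top_cook
Priority=50
Pool=none
MyCustomKey=my_custom_value
```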

Plugin File Key-Values

Plugin File Key-Values

Lets you add custom key-value options for the plug-in.

These are written out to the plugin file required by the Deadline plugin.

Deadline

Verbose Logging

When on, prints information to the console that can be useful for debugging problems during cooking.

Ignore Command Exit Code

When on, Deadline ignores the exit codes of tasks so that they always succeed, even if the tasks return non-zero exit codes (like error codes).

Force Reload Plugin

When on, Deadline reloads the plugin between frames of a job. This can help you deal with memory leaks or applications that do not unload all job aspects properly.

By default, Force Reload Plugin is off.

Monitor Machine Name

Specifies the name of the machine to launch the Deadline Monitor on when jobs are scheduled.

Advanced

Task Submit Batch Max

Sets the maximum number of tasks that can be submitted at a time to Deadline.

Increase this value to submit more tasks per batch, or decrease it if the number of tasks is affecting UI performance in Houdini.

Tasks Check Batch Max

Sets the maximum number of tasks whose status can be checked at a time in Deadline.

Increase this value to check more tasks per batch, or decrease it if the number of tasks is affecting UI performance in Houdini.
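Both batch limits amount to processing the task list in fixed-size chunks. A generic sketch of that behavior (not the scheduler's actual code; submit_batch is a hypothetical stand-in):

```python
def chunked(items, batch_max):
    """Yield successive batches of at most batch_max items."""
    for start in range(0, len(items), batch_max):
        yield items[start:start + batch_max]

# With 10 tasks and a batch max of 4, the scheduler would process
# batches of sizes 4, 4, and 2 rather than all 10 at once.
for batch in chunked(list(range(10)), 4):
    pass  # submit_batch(batch) or check_batch(batch) would go here
```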

Repository

Repository Path

When on, overrides the system default Deadline repository with the one you specify.

If you have a single Deadline repository or you want to use your system’s default Deadline Repository, you should leave this field empty. Otherwise, you can specify another Deadline repository to use, along with SSL credentials if required.

Connection Type

The type of connection to the repository.

Direct

Lets you specify the path to the mounted directory. For example: //testserver.com/DeadlineRepository.

Proxy

Lets you specify the URL to the repository along with the port and login information.

PDG Deadline Plugin

Plugin

When on, allows you to specify a custom Deadline plugin to execute each PDG work item. When off, the PDGDeadline plugin that ships with Houdini is used.

Do not turn on this parameter unless you have written a custom Deadline plugin that supports the PDG cooking process. The other plugins shipped with Deadline will not work out-of-the-box.

If you want to control the execution of the PDG work items' processes, you can write a custom Deadline plugin and specify it here along with the Plugin Directory below. The custom plugin must use the task files written out for each work item and set the evaluated environment variables in the process. For reference, see PDGDeadline.py.

Plugin Directory

This parameter is only available when Plugin is turned on.

Specifies the path to the custom Deadline plugin listed in the Plugin parameter field.

Copy Plugin to Working Directory

When on, copies the Deadline plugin files from the local Houdini installation or specified custom path to the PDG working directory so that farm machines can access them.

Do not turn on this parameter if you are using an override path and the plugin is already available on the farm.

Message Queue

The Message Queue (MQ) server is required to get work item results from the jobs running on the farm. Several types of MQ are provided to work around networking issues such as firewalls.

Type

The type of Message Queue (MQ) server to use.

Local

Starts or shares the MQ server on the local machine.

If another Deadline scheduler node (in the current Houdini session) already started an MQ server locally, then this scheduler node uses that MQ server automatically.

If there are not any firewalls between your local machine and the farm machines, then we recommend you use this parameter.

Farm

Starts or shares the MQ server on the farm as a separate job.

The MQ Job Options allow you to specify the job settings.

If another Deadline scheduler node (in the current Houdini session) already started an MQ server on the farm, then this scheduler node uses that MQ server automatically.

If there are firewalls between your local machine and the farm machines, then we recommend you use this parameter.

Connect

Connects to an already running MQ server.

The MQ server needs to have been started manually. This is the manual option for managing the MQ, and is useful for running MQ as a service on a single machine to serve all PDG Deadline jobs.

Address

This parameter is only available when Type is set to Connect.

Specifies the IP address to use when connecting to the MQ server.

Task Callback Port

This parameter is only available when Type is set to Connect.

Sets the TCP Port used by the Message Queue Server for the XMLRPC callback API. This port must be accessible between farm blades.

Relay Port

This parameter is only available when Type is set to Connect.

Sets the TCP Port used by the Message Queue Server connection between PDG and the blade that is running the Message Queue Command. This port must be reachable on farm blades by the PDG/user machine.

MQ Job Options

These parameters are only available when Type is set to Farm.

Batch Name

When on, specifies a custom Deadline batch name for the MQ job. When off, the MQ job uses the job batch name.

Job Name

(Required) Specifies the name for the MQ job.

Comment

(Optional) Specifies a comment to add to the MQ job.

Department

(Optional) Specifies the default department (for example, Lighting) for the MQ job. This allows you to group jobs together and provide information to the farm operator.

Pool

Specifies the pool to use to execute the MQ job.

By default, Pool is none.

Group

Specifies the group to use to execute the MQ job.

By default, Group is none.

Priority

Sets the priority for the MQ job.

The minimum priority is 0. The maximum priority comes from a setting in Deadline's repository options (usually 100).

By default, Priority is 50.

Machine Limit

Specifies the maximum number of Deadline Worker machines that can execute this MQ job.

By default, Machine Limit is 0 (no limit).

Machine List

Specifies a restricted list of Deadline Workers that can execute this MQ job. The kind of list that is written out is determined by the Machine List is A Blacklist parameter below.

If the Machine List is A Blacklist parameter is off, the list is written out as Whitelist in the job info file. If Machine List is A Blacklist is on, it is written out as Blacklist.

Machine List is a Blacklist

When on, the Machine List is written out as Blacklist. This means that the listed machines are not allowed to execute this MQ job. When off, only the machines in the list are allowed to execute this MQ job.

Limits

Specifies the Deadline Limits (Resource or License type) required for the scheduled MQ job. The limits are created and managed through the Deadline Monitor in Deadline.

On Job Complete

Determines what happens to the MQ job's information when it finishes.

By default, On Job Complete is Nothing.

For more information, see Deadline’s documentation.

Job Parms

Job-specific parameters that affect all submitted jobs, but can be overridden on a node-by-node basis.

For more information, see Scheduler Job Parms / Properties.

Paths

HFS

Specifies the rooted path to the Houdini installation on all Deadline Worker machines.

If you are using variables, they are evaluated locally unless escaped with \. For example, $HFS is evaluated on the local machine, and the resulting value is sent to the farm.

To force evaluation on the Worker instead (for example, in a mixed farm set-up), use \$HFS and then set a mapping like the following in Deadline's Path Mapping:

$HFS = C:/Program Files/Side Effects Software/Houdini 17.5.173

Python

Specifies the rooted path to Python that should point to the required Python version installed on all Worker machines (for example, $HFS/bin/python).

If you are using variables, you should map them in Deadline's Path Mapping. For example, if you are using the default values, you would path map $HFS, and on Windows you would also append .exe or \$PDG_EXE (for a mixed farm setup). With that mapping, $HFS/bin/python\$PDG_EXE would evaluate to C:/Program Files/Side Effects Software/Houdini 17.5.173/bin/python.exe.

Scripts

Pre Task Script

Specifies the Python script to run before executing the task.

Post Task Script

Specifies the Python script to run after executing the task.

Task Environment

Inherit Local Environment

When on, the environment variables in the current session of PDG are copied into the task’s environment.

Houdini Max Threads

When on, sets the HOUDINI_MAXTHREADS environment variable to the specified value. By default, Houdini Max Threads is set to 0 (all available processors).

Positive values limit the number of threads that can be used, and those values are clamped to the number of available CPU cores. A value of 1 disables multi-threading entirely, as it limits the scheduler to only one thread.

For negative values, the value is added to the maximum number of processors to determine the threading limit. For example, a value of -1 would use all CPU cores except 1.
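The rules above can be written out as a small helper (a sketch of the documented behavior, not Houdini's implementation):

```python
def effective_max_threads(setting, available_cores):
    """Interpret a HOUDINI_MAXTHREADS-style value per the rules above."""
    if setting == 0:
        return available_cores                # 0: use all available processors
    if setting > 0:
        return min(setting, available_cores)  # positive: clamped to core count
    return max(1, available_cores + setting)  # negative: leave -setting cores free
```

On an 8-core Worker, a setting of -1 yields 7 threads, while a setting of 16 is clamped down to 8.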

Environment Key-Values

Allows you to add custom key-value environment variables to each task.

GPU Affinity Overrides

OpenCL Force GPU Rendering

For OpenCL nodes only.

Sets the GPU affinity based on the current Worker's GPU setting and user-specified GPUs.

GPUs Per Task

For Redshift and OpenCL nodes.

Specifies the number of GPUs to use per task. This value must be a subset of the Worker’s GPU affinity settings in Deadline.

Select GPU Devices

For Redshift and OpenCL nodes.

Specifies a comma-separated list of GPU IDs to use. The GPU IDs specified here must be a subset of the Worker’s GPU affinity settings in Deadline.

See also

TOP nodes