This scheduler executes work items on a farm managed by Pixar's Tractor.
Requires the Tractor client to be installed and set up on the local machine. The installed
site-packages directory containing Python scripts must be specified in the
This scheduler can operate in two different cook modes. The normal cook mode is used when you cook from any of the menus or buttons in the TOP UI. It connects to your Tractor engine and creates jobs for work items as they become ready to execute. The jobs then communicate status changes back to the submitting machine, which means the submitting Houdini session must remain open for the duration of the cook.
Alternatively, you can use the Submit Graph As Job button to cook the entire TOP network as a standalone job. In this mode, the submitting Houdini session is detached from the cooking of the TOP network. The HIP file is copied if necessary, and a
hython task executes the TOP network as normal, using whatever default scheduler is set for that topnet. In this mode, you will not see any updates in your current Houdini session; instead, check the progress of your job using the Tractor web portal.
As part of the cook, a message queue (MQ) job is submitted. This job is used to communicate information from executing jobs back to the submitting machine. For this reason, your farm machines must be able to resolve the hostnames of other farm machines.
This is as simple as editing the
/etc/hosts file (Linux/macOS) or the C:\Windows\System32\drivers\etc\hosts file (Windows).
In addition, farm machines must not have firewalls between them, or you need to use the Task Callback Port parameter to specify the open port to use.
When the cook starts, the submitting machine connects to the farm machine that is running the MQ job. So farm machines also must not have firewalls between them and the submitting machine, or you need to use the Relay Port parameter to specify the open port to use.
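Before cooking, it can save time to confirm that the relevant callback and relay ports are actually reachable. The following is a minimal sketch (the function name and defaults are illustrative, not part of the Tractor or PDG API) that resolves a hostname and attempts a TCP connection, mirroring the two requirements above: farm machines must resolve each other's hostnames, and the chosen ports must not be blocked by a firewall.

```python
import socket

def can_reach(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds.

    Useful for checking that an MQ callback or relay port is
    reachable from another machine before starting a cook.
    """
    try:
        # Resolving the hostname first mirrors the requirement that
        # farm machines can resolve each other's names.
        addr = socket.gethostbyname(host)
        with socket.create_connection((addr, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running this from a farm blade against the submitting machine (and vice versa) with the ports you set in Task Callback Port and Relay Port gives a quick pass/fail answer.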
The artist submitting work to Tractor may need to supply PDG with login information. If the environment variables $TRACTOR_USER and $TRACTOR_PASSWORD are present, they are used to authenticate with the Tractor API. The Job Owner parameter sets the owner of the job; however, it is overridden by the environment variable $PDG_TRACTOR_USER if present. This is useful with the Submit Graph As Job workflow, because in that case PDG needs to log in to the Tractor API from the blade that is actually executing the TOP Cook job, so $PDG_TRACTOR_USER should be set in the Tractor client environment. The Tractor Password parameter should only be used for debugging and never saved in the HIP file, because the parameter value is not encrypted.
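The owner precedence described above can be summarized in a few lines. This is a sketch of the documented rule, not PDG's actual implementation; the function name is hypothetical.

```python
import os

def resolve_job_owner(job_owner_parm, env=None):
    """Resolve the effective Tractor job owner.

    Mirrors the documented precedence: the $PDG_TRACTOR_USER
    environment variable, when present, overrides the Job Owner
    parameter value.
    """
    env = os.environ if env is None else env
    return env.get("PDG_TRACTOR_USER") or job_owner_parm
```

For example, `resolve_job_owner("artist", {"PDG_TRACTOR_USER": "farmuser"})` yields `"farmuser"`, while an empty environment falls back to the parameter value.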
When the scheduler submits a work item to Tractor, it adds this attribute to the work item in order to track the Tractor Job and Task IDs. The first element is the Job ID and the second element is the Task ID.
These are global parameters for all work items using this scheduler.
Cooks the entire TOP Network as a standalone job and displays the status URI for the submitted job.
By default, the submitted job uses the Tractor login and passes it to the job environment with $TRACTOR_USER and
$TRACTOR_PASSWORD. If these are not present, then the Tractor User and Tractor Password parameter values are used.
Specifies the title of the job that is submitted.
Job Service Keys
Specifies the Tractor Service Key expression for the job that will execute the TOP graph on the farm.
You may want to use a cheaper slot for this because although executing the TOP graph requires a separate task, it does not consume much memory or CPU.
When on, turns on the data layer server for the TOP job that will cook on the farm. This allows PilotPDG or other WebSocket clients to connect to the cooking job remotely to view the state of PDG.
Determines which server port to use for the data layer server.
This parameter is only available when Enable Server is on.
A free TCP port to use for the data layer server chosen by the node.
A custom TCP port to use for the data layer server specified by the user.
This is useful when there is a firewall between the farm machine and the monitoring machine.
Specifies the Tractor server address.
Specifies the Tractor server port.
Specifies the username for the Tractor server login. This user must have permission to submit and query job status. You can override this with the $TRACTOR_USER environment variable.
Specifies the password for the Tractor server login.
This is for convenience only. When saved, the password is stored in the HIP file with no encryption.
Alternatively, you can set
$TRACTOR_PASSWORD in the Houdini environment. For more information, see how to set environment variables.
Specifies the username of the owner of the job.
Specifies the title of the top-level job for the submitted cooks.
Specifies the priority of the cook jobs.
Specifies a list of valid site-wide tiers, where each tier represents a particular global job priority and scheduling discipline.
Specifies the names of project affiliations for this job.
Max Active Tasks
When on, sets the maximum number of tasks that the PDG cook job is allowed to run concurrently.
Once spooled, delays the start of job processing until the specified jobs have completed. Multiple job IDs must be space-separated.
When on, detailed messages from the scheduler binding are printed to the console.
Use Session File
When on, the Tractor API will create a temporary file to avoid the need to authenticate the local user multiple times in one session of Houdini. The file will be created as
Customizes what to do when the command fails (returns a non-zero exit code).
The work item fails.
The work item succeeds and a warning is added to the node.
The work item is retried by Tractor for the number of Retries remaining.
Ignoring Exit Code
The work item succeeds.
Handle All Non Zero
When off, you can specify a particular exit code.
Specifies the exit code that you want to handle using Handle By. All other non-zero exit codes will be treated as a failure as normal.
This parameter is only available when Handle All Non Zero is off.
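The interaction between Handle By, Handle All Non Zero, and the specific exit code can be sketched as a small decision function. The outcome names and the `handle_by` tokens below are illustrative labels for the menu choices above, not the real parameter values.

```python
def classify_exit(exit_code, handle_by, handle_all_nonzero=True,
                  handled_code=None):
    """Sketch of the documented failure-handling choices.

    handle_by is one of: "fail", "succeed_with_warning",
    "retry", "ignore" (illustrative names for Reporting Error,
    Reporting Warning, Retrying Task, Ignoring Exit Code).
    """
    if exit_code == 0:
        return "success"
    # When Handle All Non Zero is off, only the one specified exit
    # code gets the special handling; all others fail as normal.
    if not handle_all_nonzero and exit_code != handled_code:
        return "fail"
    return {
        "fail": "fail",
        "succeed_with_warning": "success_with_warning",
        "retry": "retry",
        "ignore": "success",
    }[handle_by]
```

For example, with Handle All Non Zero off and the handled code set to 3, an exit code of 2 still fails, while an exit code of 3 takes the chosen handling path.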
Number of times to retry the job when the command fails.
This parameter is only available when Handle By is set to Retrying Task.
Specifies the relative directory where work items generate intermediate files and output. The intermediate files are placed in a subdirectory. For the Local Scheduler or HQueue, typically
$HIP is used. For other schedulers, this should be a directory relative to the
Local Shared Root Path and
Remote Shared Root Path; this path is then appended to those root paths.
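The path composition described above amounts to joining the relative working directory onto the shared root. A minimal sketch, assuming POSIX-style farm paths (the function name is illustrative):

```python
import posixpath

def remote_working_dir(shared_root, working_dir):
    """Append the relative Working Directory to a shared root path.

    A leading slash would make posixpath.join discard the root,
    so it is stripped first.
    """
    return posixpath.join(shared_root, working_dir.lstrip("/"))
```

So a root of `/mnt/farm` and a working directory of `pdg/project1` resolve to `/mnt/farm/pdg/project1` on the farm side.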
Specifies the full path to the Python executable on the farm machines. This is used to execute the job wrapper script for PDG work items.
If the PDG Path Map exists, then it is applied to file paths.
Delocalizes paths using the
Path Map Zone
When on, specifies a custom mapping zone to apply to all jobs executed by this scheduler. Otherwise, the zone matching the local platform is used.
Validate Output Files
When on, PDG checks the output files of cooked work items to see if they still exist on disk. Work items with missing output files are dirtied and cook again.
Specifies the path to the Houdini installation for farm machines in the NFS zone.
Specifies the path to the Houdini installation for farm machines in the UNC zone.
When on, a subdirectory is added to the location specified by the Location parameter and is named after the value of your Houdini session’s PID (Process Identifier). The PID is typically a 3-5 digit number.
This is necessary when multiple sessions of Houdini are cooking TOP graphs at the same time.
The full path to the custom temporary directory, which needs to be accessible by all blades involved in executing the Job.
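The per-session subdirectory behavior described above can be sketched as follows; this is an illustration of the documented naming scheme, not PDG's actual code:

```python
import os

def session_temp_dir(location, append_pid=True):
    """Build the temp directory path for a cook session.

    When append_pid is on, a subdirectory named after the Houdini
    session's PID is added, which keeps simultaneous Houdini
    sessions from clobbering each other's intermediate files.
    """
    if append_pid:
        return os.path.join(location, str(os.getpid()))
    return location
```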
Specifies the Tractor Service Key expression for the task that will execute the Message Queue Server.
You may want to use a cheaper slot for this because although the Message Queue Process requires a separate task, it does not consume much memory or CPU.
Note that a Message Queue task is not created when a graph is cooked via Submit Graph As Job.
Task Callback Port
When on, sets the TCP Port used by the Message Queue Server for the job callback API. The port must be accessible between farm blades.
When on, sets the TCP port used for the Message Queue Server connection between PDG and the blade that is running the Message Queue command. The port on the farm blade must be reachable from the PDG/user machine.
These job-specific parameters affect all submitted jobs, but can be overridden on a node-by-node basis. See Scheduler Job Parms / Properties.
Service Key Expression
Specifies the job service key expression. This determines the type of blade that can run this job.
At Least Slots
Sets the minimum number of free slots that must be available on a Tractor blade in order to execute this command.
At Most Slots
When on, sets the maximum number of free slots that this command can use when launched. This is used as the default Houdini Max Threads value unless that parameter is explicitly set.
Houdini Max Threads
When on, sets the
HOUDINI_MAXTHREADS environment variable to the given value. By default,
HOUDINI_MAXTHREADS is set to the value of At Most Slots when that parameter is enabled.
The default value of 0 means to use all available processors.
If the value is positive, the value limits the number of threads that can be used. A value of 1 disables multithreading entirely by limiting it to only one thread. Positive values are clamped to the number of available CPU cores.
If the value is negative, the value is added to the maximum number of processors to determine the threading limit. For example, a value of -1 uses all CPU cores except 1.
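The three cases above (zero, positive, negative) can be expressed as a short function. This is a sketch of the documented semantics, under the assumption that the result is never allowed to drop below one thread:

```python
import os

def effective_thread_count(value, num_cores=None):
    """Compute the thread limit implied by a HOUDINI_MAXTHREADS value.

    0        -> use all available cores
    positive -> clamped to the number of available cores
    negative -> added to the core count (e.g. -1 leaves one core free)
    """
    if num_cores is None:
        num_cores = os.cpu_count() or 1
    if value == 0:
        return num_cores
    if value > 0:
        return min(value, num_cores)
    # Assumption: the result is floored at 1 thread.
    return max(1, num_cores + value)
```

On an 8-core blade, a value of 0 yields 8 threads, 16 is clamped to 8, 1 disables multithreading, and -1 yields 7.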
Specifies a space-separated list of environment keys which are defined in the blade profiles.
A custom Task Name prefix. By default, the corresponding work item name is used. The name suffix is a value used internally by PDG for bookkeeping.
Maximum Run Time
Tasks that run past the maximum time limit will be killed. The default value 0 indicates no limit.
Specifies an arbitrary string that is attached to the task definition.
Specifies a launch expression to run an external application from the Tractor UI to view the in-progress result. TOPs has its own internal viewer registry.
Inherit Local Environment
When on, environment variables in the current session of PDG are copied into the job’s environment.
Specifies a shell script to be executed/sourced before the command is executed.
Specifies a shell script to be executed/sourced after the command is executed.
Specifies the Python script to be executed in the wrapper script before the command process is spawned.
Specifies the Python script to be executed in the wrapper script after the command process exits.
Multiparm that lets you add custom key-value environment variables for each task.
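Taken together, Inherit Local Environment and this multiparm determine the task's environment. A minimal sketch of one plausible composition, assuming the custom key-value pairs take precedence over inherited variables (the function name and that precedence are assumptions, not confirmed behavior):

```python
import os

def build_job_env(custom_vars, inherit_local=False, base_env=None):
    """Assemble the environment for a task.

    When inherit_local is on, variables from the current session
    are copied in first; the per-task custom key-value pairs from
    the multiparm are then applied on top.
    """
    env = {}
    if inherit_local:
        env.update(os.environ if base_env is None else base_env)
    env.update(custom_vars)
    return env
```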