Houdini 18.0 Nodes TOP nodes

Tractor Scheduler TOP node

Schedules work items using Pixar’s Tractor.

Installation

This scheduler requires Pixar's Tractor Python API to be importable by Houdini. The API's site-packages directory is typically found at:

Windows

C:\Program Files\Pixar\Tractor-2.3\lib\python2.7\Lib\site-packages

Mac

/Applications/Pixar/Tractor-2.3/...

Linux

/opt/pixar/Tractor-2.3/lib/python2.7/Lib/site-packages
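Before the scheduler can talk to Tractor, the Tractor Python API must be on Houdini's Python search path. A minimal sketch (the paths mirror the default locations listed above; adjust for your install — the macOS path is elided above because it varies, so it is omitted here too):

```python
import sys
import platform

# Default Tractor 2.3 site-packages locations (from the platform list above).
# The macOS location is omitted because it depends on the install.
TRACTOR_API_PATHS = {
    "Windows": r"C:\Program Files\Pixar\Tractor-2.3\lib\python2.7\Lib\site-packages",
    "Linux": "/opt/pixar/Tractor-2.3/lib/python2.7/Lib/site-packages",
}

api_path = TRACTOR_API_PATHS.get(platform.system())
if api_path and api_path not in sys.path:
    sys.path.append(api_path)

# With the path in place, the Tractor author API can be imported:
# import tractor.api.author as author
```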

Cook Modes

This scheduler can operate in two different cook modes. The normal cook mode is used when you cook from any of the menus or buttons in the TOP UI. It connects to your Tractor engine and creates jobs for work items as they become ready to execute. The jobs then communicate status changes back to the submitting machine, which means the submitting Houdini session must remain open for the duration of the cook.

Alternatively, the Submit button under Submit Graph As Job can be used to cook the entire TOP network as a standalone job. In this mode, the submitting Houdini session is detached from the cooking of the TOP network. The hip file is copied if necessary, and a Hython Task executes the TOP network as normal, using whatever the default scheduler is for that topnet. Because the session is detached, you will not see any updates in your current Houdini session; instead, check the progress of your job using the Tractor web portal.

Network Requirements

As part of the cook, a message queue (MQ) job will be submitted, which is used to communicate information from executing jobs back to the submitting machine. For this reason the farm machines must be able to resolve the hostnames of other farm machines.

Tip

This may be as simple as editing the hosts file: /etc/hosts (Linux/macOS) or C:\Windows\System32\Drivers\etc\hosts (Windows).
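For example, hypothetical hosts entries that let blades and the submitting machine resolve each other by name might look like:

```
192.168.1.10    blade01
192.168.1.11    blade02
192.168.1.50    workstation05    # submitting machine
```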

In addition, farm machines must either not have firewalls between them, or you must use the Task Callback Port to specify the open port to use.

When the cook starts, the submitting machine will connect to the farm machine that is running the MQ job, so farm machines must either not have firewalls between themselves and the submitting machine, or you must use the Relay Port to specify the open port.

Authentication

The artist submitting work to Tractor may need to supply PDG with login information. The environment variables $TRACTOR_USER and $TRACTOR_PASSWORD are used to authenticate with the Tractor API if they are present. The Job Owner parm sets the owner of the job; however, it is overridden by the environment variable $PDG_TRACTOR_USER if present. This is useful with the Submit Graph As Job workflow, because in that case PDG needs to log in to the Tractor API from the blade that is actually executing the TOP Cook job, so $PDG_TRACTOR_USER should be set in the Tractor client environment. The Tractor Password parm should only be used for debugging and never saved in the HIP file, because the parm value is not encrypted.
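As a sketch, the variables described above could be set in a hython session before cooking (the user name and the way the password is obtained are placeholders, not values from this page; never hard-code a real password):

```python
import os

# Authenticate the Tractor API for this process (placeholder user name).
os.environ["TRACTOR_USER"] = "artist01"
# Read the password from a secure source in practice; empty here on purpose.
os.environ.setdefault("TRACTOR_PASSWORD", "")

# For the Submit Graph As Job workflow, the blade running the TOP Cook job
# reads PDG_TRACTOR_USER instead, so mirror the user name there:
os.environ["PDG_TRACTOR_USER"] = os.environ["TRACTOR_USER"]
```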

TOP Attributes

tractor_id

integer

When the scheduler submits a work item to Tractor, it adds this attribute to the work item in order to track the Tractor Job and Task IDs. The first element is the Job jid and the second element is the Task tid.
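Since the attribute holds two elements, splitting it into its Job and Task IDs is straightforward once the values are read from the work item (the attribute-reading call itself depends on the PDG Python API and is not shown; the IDs below are hypothetical):

```python
def split_tractor_id(values):
    """Split a two-element tractor_id attribute into (jid, tid)."""
    jid, tid = values  # element 0: Job jid, element 1: Task tid
    return jid, tid

jid, tid = split_tractor_id([1204, 7])  # hypothetical Job/Task IDs
```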

Parameters

These are global parameters for all work items using this scheduler.

Submit Graph As Job

Submit

Cooks the entire TOP network as a standalone job and displays the status URI for the submitted job.

Job Title

The Title of the Job that is submitted.

Job Service Keys

The Tractor Service Key expression for the Job that will execute the TOP graph on the farm. You may want to use a cheaper slot for this because although executing the TOP graph requires a separate Task, it does not consume much memory or CPU.

Enable Server

Turns on the data layer server for the TOP job that cooks on the farm. This allows PilotPDG or other websocket clients to connect to the cooking job remotely to view the state of PDG.

Server Port

Automatic: Chooses a free TCP port for the data layer server.

Custom: Specify a TCP port to use for the data layer server. Useful if the farm machine is firewalled relative to the monitoring machine.

Scheduler

Tractor Server Hostname

The Tractor server address.

Tractor Server Port

The Tractor server port.

Tractor User

User name for Tractor server login.

Tractor Password

Password for Tractor server login.

Job Title

The title of the top-level Job for submitted cooks.

Job Priority

The priority of cook Jobs.

Tier

List of valid site-wide tiers, where each tier represents a particular global job priority and scheduling discipline.

Projects

Names of project affiliations for this job.

Max Active Tasks

Maximum number of Tasks that the PDG Cook Job is allowed to run concurrently.

Verbose Logging

When on, detailed messages from scheduler binding are printed to console.

Use Session File

When on, the Tractor API will create a temporary file to avoid the need to authenticate the local user multiple times in one session of Houdini. The file will be created as $TEMP/.pdgtractor.{user}.{host}.session.

Working Directory

The relative directory where the job generates intermediate files and output. The intermediate files are placed in a subdirectory. For the Local Scheduler or HQueue, $HIP is typically used. For other schedulers, this should be a directory relative to the Local Shared Root Path and Remote Shared Root Path; this path is then appended to those root paths.
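As a sketch of the path composition described above (the root and working-directory values are hypothetical, and the simple join is an assumption about how the paths combine):

```python
import posixpath

def resolve_working_dir(shared_root, working_dir):
    """Append the relative working directory to a shared file root."""
    return posixpath.join(shared_root, working_dir)

# Hypothetical local and remote shared roots with the same relative dir:
local = resolve_working_dir("/mnt/farm_share", "projects/shot010/pdg")
remote = resolve_working_dir("/opt/shared", "projects/shot010/pdg")
```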

Python Executable

The full path to the python executable on the farm machines. This is used to execute the job wrapper script for PDG work items.

Shared File Root Path

NFS

The path to the shared file root for farm machines in the NFS zone.

UNC (Windows)

The path to the shared file root for farm machines in the UNC zone.

$HFS Path

NFS

The path to the Houdini installation for farm machines in the NFS zone.

UNC (Windows)

The path to the Houdini installation for farm machines in the UNC zone.

Message Queue

Service Keys

The Tractor Service Key expression for the Task that will execute the Message Queue Server. You may want to use a cheaper slot for this because although the Message Queue Process requires a separate Task, it does not consume much memory or CPU. Note that a Message Queue Task is not created when a graph is cooked via Submit Graph As Job.

Task Callback Port

Set the TCP Port used by the Message Queue Server for the Job callback API. The port must be accessible between farm blades.

Relay Port

Set the TCP Port used by the Message Queue Server connection between PDG and the blade that is running the Message Queue Command. The port must be reachable on farm blades by the PDG/user machine.

Job Parms

These job specific parameters affect all submitted jobs, but can be overridden on a node-by-node basis. See Scheduler Job Parms / Properties.

Service Key Expression

The job service key expression. Used to specify the type of blade that supports running this job.

Limit Tags

The job limit tags. This is a space-separated list of strings representing the tags to be associated with every command of the job.

At Least Slots

The minimum number of free slots that must be available on a Tractor blade in order to execute this command.

At Most Slots

If enabled, the maximum number of free slots that this command will use when launched. This will be used as the default value of Houdini Max Threads unless explicitly set.

Houdini Max Threads

Sets the HOUDINI_MAXTHREADS environment variable to the given value. By default, HOUDINI_MAXTHREADS is set to the value of At Most Slots, if enabled.

The default of 0 means to use all available processors.

Positive values limit the number of threads that can be used, and are clamped to the number of CPU cores available. A value of 1 disables multithreading entirely.

If the value is negative, the value is added to the maximum number of processors to determine the threading limit. For example, a value of -1 will use all CPU cores except 1.
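The rules above can be sketched as a small helper (the floor of one thread for large negative values is an assumption, not stated on this page):

```python
import os

def effective_max_threads(value, num_cores=None):
    """Interpret a HOUDINI_MAXTHREADS-style value per the rules above."""
    cores = num_cores if num_cores is not None else os.cpu_count()
    if value == 0:
        return cores                  # 0: use all available processors
    if value > 0:
        return min(value, cores)      # positive: clamped to core count
    return max(1, cores + value)      # negative: added to core count (floored at 1)

effective_max_threads(0, num_cores=8)   # → 8
effective_max_threads(16, num_cores=8)  # → 8 (clamped)
effective_max_threads(-1, num_cores=8)  # → 7 (all cores except 1)
```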

Inherit Local Environment

When enabled, environment variables in the current session of PDG will be copied into the Job’s environment.

Tractor Env Keys

Space separated list of environment keys which are defined in the blade profiles.

Metadata

An arbitrary string that will be attached to the task definition.

Non-Zero Exit Code Handling

Handle By

Customize what to do when the command fails (returns a non-zero exit code).

Reporting Error

The work item fails.

Reporting Warning

The work item succeeds and a warning is added to the node.

Retrying Task

The work item is retried by Tractor for the number of Retries remaining.

Ignoring Exit Code

The work item succeeds.

Handle All Non Zero

When off, you can specify a particular exit code.

Exit Code

Specifies the exit code that you want to handle using Handle By. All other non-zero exit codes will be treated as a failure as normal.

This parameter is only available when Handle All Non Zero is off.

Retries

Number of times to retry the job when the command fails.

This parameter is only available when Handle By is set to Retrying Task.

Job Environment

Additional work item environment variables can be specified here.

Job Scripts

Pre Shell

Shell script to be executed/sourced before the command is executed.

Post Shell

Shell script to be executed/sourced after the command is executed.

Pre Python

Python script to be exec'd in the wrapper script before the command process is spawned.

Post Python

Python script to be exec'd in the wrapper script after the command process exits.
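For instance, a minimal Pre Python script might prepare a scratch directory and record a start time before the command process is spawned (the JOB_SCRATCH variable and the fallback path are hypothetical):

```python
import os
import time

# Runs inside the job wrapper before the command is spawned.
start_time = time.time()

# Prepare a scratch directory for the command (hypothetical variable/path).
scratch = os.environ.get("JOB_SCRATCH", "/tmp/pdg_scratch")
os.makedirs(scratch, exist_ok=True)

print("pre-python: scratch ready at %s" % scratch)
```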

See also

TOP nodes