
HQueue Scheduler TOP node

Schedules work items using HQueue.


This node schedules work items using HQueue so they can be executed on remote machines.

Cook Modes

This scheduler can operate in two different cook modes. The normal cook mode is used when you cook from any of the menus or buttons in the TOP UI. It connects to your HQueue server and creates jobs for work items as they become ready to execute. The jobs then communicate status changes back to the submitting machine, which means the submitting Houdini session must remain open for the duration of the cook.
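For example, a normal cook can also be triggered from a Houdini or hython session using the HOM TOPs API. This is a minimal sketch; the node path is a placeholder for a TOP node in your own scene whose network uses this scheduler.

    import hou

    # Placeholder path: point this at a TOP node in a network that uses the
    # HQueue Scheduler (for example the network's output or display node).
    top_node = hou.node("/obj/topnet1/output0")

    # Normal cook mode: the session stays connected to HQueue and receives
    # status updates from the farm jobs, so keep the session open until done.
    top_node.cookWorkItems(block=True)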

Alternatively, the Submit Graph As Job button can be used to cook the entire TOP network as a standalone job. In this mode, the submitting Houdini session is detached from the cook of the TOP network. The hip file is copied if necessary, and a Hython process executes the TOP network as normal, using whatever the default scheduler is for that topnet. Because the session is detached, you will not see any updates in your current Houdini session; instead, check the progress of your job using the HQueue web portal.

Network Requirements

As part of the cook, a message queue (MQ) job is submitted, which is used to communicate information from executing jobs back to the submitting machine. For this reason, farm machines must be able to resolve each other's hostnames.

Tip

This may be as simple as editing the /etc/hosts file (Linux/macOS) or C:\Windows\System32\Drivers\etc\hosts file (Windows).
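For example, hosts entries along these lines let each machine resolve the others by name (the hostnames and addresses below are placeholders for your own farm):

    192.168.1.10    hqserver
    192.168.1.21    farm01
    192.168.1.22    farm02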

In addition, farm machines must either not have firewalls between them, or you must use the Task Callback Port to specify the open port to use.

When the cook starts, the submitting machine will connect to the farm machine that is running the MQ job, so farm machines must either not have firewalls between themselves and the submitting machine, or you must use the Relay Port to specify the open port.

TOP Attributes

hqueue_jobid

integer

When the scheduler submits a work item to HQueue, it adds this attribute to the work item in order to track the HQueue job ID.
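As a minimal sketch, the attribute can be read with the PDG Python API, for example from a Python Script TOP downstream of the scheduled work; this assumes the standard work_item object that PDG provides to callbacks.

    # Read the HQueue job id recorded on a work item by this scheduler.
    # Assumes `work_item` is the pdg.WorkItem provided to the callback.
    job_id = work_item.attribValue("hqueue_jobid")
    if job_id is not None:
        print("Scheduled as HQueue job %s" % job_id)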

Parameters

These are global parameters for all work items using this scheduler.

Submit Graph As Job

Submit

Cooks the entire TOP network as a standalone job and displays the status URI for the submitted job. The submitting Houdini session is detached from the cook of the TOP network: the hip file is copied if necessary, and a Hython process executes the TOP network as normal, using whatever the default scheduler is for that topnet.
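Since this is a button parameter, the same submission can also be triggered from Python by pressing it. The node path and parameter name below are hypothetical; check the actual parameter name on your HQueue Scheduler node before using this.

    import hou

    # Hypothetical path and parameter name; substitute the real ones from
    # your scene and from the scheduler's parameter interface.
    scheduler = hou.node("/obj/topnet1/hqueuescheduler1")
    scheduler.parm("submitgraph").pressButton()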

Enable Server

Turns on the data layer server for the TOP job that cooks on the farm. This allows PilotPDG or other websocket clients to connect to the cooking job remotely to view the state of PDG.

Server Port

Automatic: Chooses a free TCP port for the data layer server.

Custom: Specify a TCP port to use for the data layer server. Useful if the farm machine is firewalled relative to the monitoring machine.

Scheduler

Working Directory

The directory where the cook will be generating intermediate files and output. The intermediate files will be placed in a subdirectory named pdgtemp.

Tip

If you are opening your .hip file in Houdini from the shared network path (for example H:/myproj/myhip.hip), you can use $HIP here (the default). However, if you are opening your .hip file from a local directory (for example C:/temp/myhip.hip), it will have to be copied to the shared network location before it can be accessed by farm machines. In this case, the Working Directory should be an absolute or relative path to that shared network location (for example //MYPC/Shared/myproj).

Job Name

The name of the top-level HQueue Job for submitted cooks.

Job Description

The description of the top-level HQueue Job. This can be seen in the Job Properties for the Job.

Override Local Shared Root

Enables overriding the location of the local shared root directory.

Local Shared Root Paths

The HQueue farm should be configured with a shared network filesystem; specify the mount point of that shared filesystem for each platform below.

Load from HQueue

Queries the HQueue server to retrieve the local shared root paths for each platform and fill the parameters below.

Windows

The local shared root path on Windows machines. For example I:/.

macOS

The local shared root path on macOS machines. For example /Volumes/hq.

Linux

The local shared root path on Linux machines. For example /mnt/hq.

HQueue Server

URL of the HQueue server. Example: http://localhost:5000
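As a quick sanity check, the server can be queried over its XML-RPC interface from the submitting machine. This is a minimal sketch, assuming the default URL shown above and a standard HQueue installation (on Python 2, which Houdini 18.0's hython uses by default, import xmlrpclib instead).

    import xmlrpc.client

    # Placeholder URL; use the value entered in the HQueue Server parameter.
    hq = xmlrpc.client.ServerProxy("http://localhost:5000")

    # ping() is a simple liveness call in HQueue's XML-RPC API; if your
    # version differs, any listed API method works as a connectivity test.
    print(hq.ping())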

Universal HFS

A single path to the $HFS directory (the Houdini install directory) to be used by all platforms. You can use $HQROOT and $HQCLIENTARCH to help specify the directory path.
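For example, with a typical HQueue installation where the Houdini builds live under the shared root, a value along the lines of $HQROOT/houdini_distros/hfs.$HQCLIENTARCH is common; the exact layout depends on how Houdini was installed on your farm.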

HFS Per Platform

Linux HFS Path

$HFS path for Linux. (The Houdini install directory)

macOS HFS Path

$HFS path for macOS. (The Houdini install directory)

Windows HFS Path

$HFS path for Windows. (The Houdini install directory)

Message Queue

Task Callback Port

Set the TCP Port used by the Message Queue Server for the XMLRPC callback API. The port must be accessible between farm clients.

Relay Port

Set the TCP Port used by the Message Queue Server connection between PDG and the client that is running the Message Queue Command. The port must be reachable on farm clients by the PDG/user machine.

Job Timing

Max Items Per Tick

The maximum number of onSchedule callbacks for ready work items to process between ticks.

Tick Period

The minimum time in seconds between calls to the onTick callback.

Job Parms

These job-specific parameters affect all submitted jobs, but can be overridden on a node-by-node basis. See Scheduler Overrides. Many of these parameters correspond directly to HQueue Job Properties.

Scheduling

Job Priority

The job’s HQueue priority. Jobs with higher priorities are scheduled and processed before jobs with lower priorities. 0 is the lowest priority.

Assign To

Specify clients to assign to.

Any Client: Assign to any client.

Listed Clients: Assign to specified clients.

Clients from Listed Groups: Assign to specified client groups.

Clients

Names of clients to assign jobs to, separated by spaces.

Select Clients

Select clients from HQueue to populate the Clients list.

Client Groups

Names of client groups to assign jobs to, separated by spaces.

Select Groups

Select client groups from HQueue to populate the Client Groups list.

CPUs per Job

The maximum number of CPUs that will be consumed by the job. If the number exceeds a client machine’s number of free CPUs, the client machine will not be assigned the job. Note that multithreading of some jobs can be controlled with Houdini Max Threads. If this is not set, and Houdini Max Threads is also not set, the job will have the 'single' tag applied to ensure that only one such job runs on a given client at a time.

Job Description

Description property for the job.

Tags

Space-separated job tags.

Allowed Host

The hostname of the machine that the job should execute on.

Non-Zero Exit Code Handling

Handle By

Reporting Error: The work item will fail.

Reporting Warning: The work item will succeed and a warning will be added to the node.

Retrying Task: The work item will be retried by HQueue according to the Retries remaining.

Ignoring Exit Code: The work item will succeed.

Handle All Non Zero

Disable this to handle only a specific exit code, set with the Exit Code parameter below.

Exit Code

Set this to the exit code that you wish to handle using Handle By. All other non-zero exit codes will be treated as failures as normal.

Retries

Number of times to retry the job when the command fails.

Task Environment

Houdini Max Threads

Set the HOUDINI_MAXTHREADS environment variable to the given value. By default, HOUDINI_MAXTHREADS is set to the value of CPUs per Job, if that parameter is enabled.

The default of 0 means to use all available processors.

Positive values will limit the number of threads that can be used. A value of 1 will disable multithreading entirely (limiting to only one thread). Positive values will be clamped to the number of CPU cores available.

If the value is negative, the value is added to the maximum number of processors to determine the threading limit. For example, a value of -1 will use all CPU cores except 1.

See limiting resource usage.
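The following sketch only illustrates the rules described above; it is not Houdini's actual implementation, and the floor of 1 on the negative case is just a guard for the sketch.

    import os

    def effective_max_threads(value, num_cores=None):
        # Map a HOUDINI_MAXTHREADS value to a thread count per the rules above.
        if num_cores is None:
            num_cores = os.cpu_count()
        if value == 0:
            return num_cores                # 0: use all available processors
        if value > 0:
            return min(value, num_cores)    # positive: clamped to core count
        return max(1, num_cores + value)    # negative: added to the core count

    # On an 8-core machine: 0 -> 8, 4 -> 4, 1 -> 1 (no multithreading), -1 -> 7.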

Environment Variables

Lets you add custom key-value environment variables for each task.
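Because these variables are set in each task's environment, task code can read them through the standard environment. This is a minimal sketch; the variable name RENDER_PROFILE is a placeholder for whatever key you add here.

    import os

    # Placeholder variable name; use the key added in the multiparm above.
    profile = os.environ.get("RENDER_PROFILE", "default")
    print("Running with render profile: " + profile)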

See also

TOP nodes