This scheduler uses HQueue to schedule work items and execute them on remote machines.
This scheduler can operate in two different cook modes. The normal cook mode is used when selecting cook from any of the menus or buttons in the TOP UI. It will connect to your HQueue scheduler and create jobs for work items as they become ready to execute. The jobs will then communicate back to the submitting machine with status changes. This means the submitting Houdini session must remain open for the duration of the cook.
Alternatively the Submit Graph As Job button can be used to cook the entire TOP Network as a standalone job. In this mode, the submitting Houdini session is detached from the cooking of the TOP Network. The hip file will be copied if necessary, and a Hython process will execute the TOP network as normal, using whatever the default scheduler is for that topnet. In this mode you will not see any updates in your current Houdini session. You should instead check the progress of your job using the HQueue web portal.
As part of the cook, a message queue (MQ) job will be submitted, which is used to communicate information from executing jobs back to the submitting machine. For this reason the farm machines must be able to resolve the hostnames of other farm machines.
This may be as simple as editing the /etc/hosts file (Linux / macOS) or the C:/Windows/System32/drivers/etc/hosts file (Windows) on each machine.
In addition, farm machines must either not have firewalls between them, or you must use the Task Callback Port to specify the open port to use.
When the cook starts, the submitting machine will connect to the farm machine that is running the MQ job, so farm machines must either not have firewalls between themselves and the submitting machine, or you must use the Relay Port to specify the open port.
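If you suspect a firewall or DNS problem, you can sanity-check connectivity before cooking. The following is a minimal Python sketch; the host and port arguments stand in for whatever you have configured as the Relay Port or Task Callback Port, and are not read from HQueue:

```python
import socket

def mq_reachable(host, port, timeout=2.0):
    """Return True if `host` resolves and accepts a TCP connection on `port`.

    Useful for diagnosing whether farm machines can reach the machine
    running the MQ job (or the submitting machine) before starting a cook.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # DNS failure, connection refused, timeout, ...
        return False
```

Run this from a farm machine against the submitting machine (and vice versa) to confirm the relevant port is open in both directions.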
Windows Services cannot use mapped drives. Because HQueue jobs on Windows are executed by a Windows Service, you should use UNC paths: for example, //myserver/hq/project/myhip.hip instead of H:/project/myhip.hip. Be careful with backslashes in paths, because they will be interpreted as escape sequences when evaluated by Houdini or the command shell.
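As an illustration, a small Python helper can rewrite mapped-drive paths into forward-slash UNC paths before they are handed to the farm. The drive-to-share mapping below is hypothetical; substitute your own:

```python
from pathlib import PureWindowsPath

# Hypothetical mapping of drive letters to UNC shares; adjust for your site.
DRIVE_TO_UNC = {"H:": "//myserver/hq"}

def to_unc(path):
    """Rewrite a mapped-drive path to its UNC equivalent with forward slashes.

    Forward slashes avoid backslash escape-sequence problems when the path
    is later evaluated by Houdini or a command shell.
    """
    p = PureWindowsPath(path)
    drive = p.drive.upper()
    if drive in DRIVE_TO_UNC:
        rest = "/".join(p.parts[1:])  # parts[0] is the drive anchor
        return f"{DRIVE_TO_UNC[drive]}/{rest}"
    return path.replace("\\", "/")
```

For example, `to_unc("H:/project/myhip.hip")` yields `//myserver/hq/project/myhip.hip`, matching the form recommended above.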
When the scheduler submits a work item to HQueue, it adds this attribute to the work item in order to track the HQueue job ID.
These are global parameters for all work items using this scheduler.
Called when the scheduler should cook the entire TOP Network as a standalone job. Displays the status URI for the submitted job. The submitting Houdini session is detached from the cooking of the TOP Network. The hip file will be copied if necessary, and a Hython process will execute the TOP network as normal, using whatever the default scheduler is for that topnet.
You can restart a finished standalone job using the HQueue Web UI. However, to do so you should restart the child job named "TOP Cook" rather than the parent job.
Turns on the data layer server for the TOP job that cooks on the farm. This allows PilotPDG or other websocket clients to connect to the cooking job remotely to view the state of PDG.
Automatic: Chooses a free TCP port for the data layer server.
Custom: Specify a TCP port to use for the data layer server. Useful if the farm machine is firewalled relative to the monitoring machine.
When the job starts, it will try to send a command to create a remote visualizer. If this succeeds, a remote graph is created and automatically connects to the server executing the job. The client submitting the job must be visible to the server running the job, or the connection will fail.
The directory where the cook will be generating intermediate files and output. The intermediate files will be placed in a subdirectory named pdgtemp.
If you are opening your .hip file in Houdini from the shared network path (for example H:/myproj/myhip.hip), you can use $HIP here (the default). However, if you are opening your .hip file from a local directory (for example C:/temp/myhip.hip), it will have to be copied to the shared network location before it can be accessed by farm machines. In this case, the Working Directory should be an absolute or relative path to that shared network location (for example //MYPC/Shared/myproj).
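A quick way to check which case applies is to test whether the hip file's path already lies under the shared network root. This Python sketch assumes a hypothetical shared root of //MYPC/Shared:

```python
# Hypothetical shared network root; substitute your farm's mount point.
SHARED_ROOT = "//MYPC/Shared"

def needs_copy(hip_path):
    """Return True if the hip file lives outside the shared root and must
    be copied before farm machines can see it."""
    norm = hip_path.replace("\\", "/")
    return not norm.lower().startswith(SHARED_ROOT.lower())
```

A path like C:/temp/myhip.hip needs copying; //MYPC/Shared/myproj/myhip.hip does not, so $HIP can be used directly.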
The name of the top-level HQueue Job for submitted cooks.
The description of the top-level HQueue Job. This can be seen in the Job Properties for the Job.
The HQueue farm should be configured with a shared network filesystem and the mount point of this shared filesystem is specified for each platform.
Load from HQueue
Queries the HQueue server to retrieve the local shared root paths for each platform and fill the parameters below.
URL of the HQueue server. Example: http://localhost:5000
A single path to the $HFS directory to be used by all platforms (The Houdini install directory). You can use $HQROOT and $HQCLIENTARCH to help specify the directory path.
Linux HFS Path
$HFS path for Linux. (The Houdini install directory)
macOS HFS Path
$HFS path for macOS. (The Houdini install directory)
Windows HFS Path
$HFS path for Windows. (The Houdini install directory)
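The $HQROOT and $HQCLIENTARCH tokens expand like ordinary shell-style variables. A Python sketch of that expansion; the template path shown is only an example layout, not necessarily how your farm is arranged:

```python
from string import Template

def expand_hfs(template, hqroot, arch):
    """Expand $HQROOT and $HQCLIENTARCH tokens in an HFS path template.

    safe_substitute leaves any other $-tokens untouched rather than
    raising, which is convenient for partially-specified paths.
    """
    return Template(template).safe_substitute(HQROOT=hqroot, HQCLIENTARCH=arch)
```

For instance, with a shared root of /mnt/hq and a linux-x86_64 client, a template such as `$HQROOT/houdini_distros/hfs.$HQCLIENTARCH` (a hypothetical layout) expands to `/mnt/hq/houdini_distros/hfs.linux-x86_64`.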
Determines if the Message Queue Job is created for interactive cooks. The Message Queue should be used when the submitting Houdini session is not reachable by farm machines due to networking restrictions or firewalls.
Automatically: The Message Queue will be created for interactive cooks. Jobs on the farm will communicate their status changes to the Message Queue, which is connected to the submitting Houdini session.
Never: The Message Queue will not be created. In this case jobs on the farm will communicate their status changes directly back to the submitting Houdini session.
Task Callback Port
Set the TCP Port used by the Message Queue Server for the XMLRPC callback API. The port must be accessible between farm clients.
Set the TCP Port used by the Message Queue Server connection between PDG and the client that is running the Message Queue Command. The port must be reachable on farm clients by the PDG/user machine.
Max Items Per Tick
The maximum number of onSchedule callbacks issued for ready work items between ticks.
The minimum time in seconds between calls to the onTick callback.
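Together, these two parameters throttle scheduling: at most Max Items Per Tick items are submitted per tick, and ticks are spaced at least the minimum interval apart. A simplified Python sketch of that loop; the callback and item list are stand-ins, not the real PDG API:

```python
import time

def run_ticks(ready_items, on_schedule, max_items_per_tick=30, min_tick=0.5):
    """Drain `ready_items`, invoking `on_schedule` for at most
    `max_items_per_tick` items per tick, and sleep so that consecutive
    ticks are at least `min_tick` seconds apart."""
    items = list(ready_items)
    while items:
        start = time.monotonic()
        batch, items = items[:max_items_per_tick], items[max_items_per_tick:]
        for item in batch:
            on_schedule(item)
        elapsed = time.monotonic() - start
        if items and elapsed < min_tick:
            time.sleep(min_tick - elapsed)
```

Raising Max Items Per Tick or lowering the tick interval submits work faster at the cost of more load on the scheduler and the HQueue server.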
These job-specific parameters affect all submitted jobs, but can be overridden on a node-by-node basis. See Scheduler Job Parms / Properties. Many of these parameters correspond directly to HQueue Job Properties.
The job’s HQueue priority. Jobs with higher priorities are scheduled and processed before jobs with lower priorities. 0 is the lowest priority.
Specify clients to assign to.
Any Client: Assign to any client.
Listed Clients: Assign to specified clients.
Clients from Listed Groups: Assign to specified client groups.
Names of clients to assign jobs to, separated by space.
Select clients from HQueue to populate the Clients list.
Names of client groups to assign jobs to, separated by space.
Select client groups from HQueue to populate the Client Groups list.
CPUs per Job
The maximum number of CPUs the job will consume. If this number exceeds a client machine’s number of free CPUs, that machine will not be assigned the job. Note that multithreading of some jobs can be controlled with Houdini Max Threads. If neither this parameter nor Houdini Max Threads is set, the job has the 'single' tag applied to ensure that only one such job runs on a given client at a time.
Description property for the job.
The hostname of the machine that the job should execute on.
A space-separated list of HQueue Resources that the job consumes. For example sidefx.license.render.
Reporting Error: The work item will fail.
Reporting Warning: The work item will succeed and a warning will be added to the node.
Retrying Task: The work item will be retried by HQueue according to the Retries remaining.
Ignoring Exit Code: The work item will succeed.
Handle All Non Zero
Turn this off to handle only a particular exit code, specified below.
Set this to the exit code that you wish to handle using Handle By. All other non-zero exit codes will be treated as a failure as normal.
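The exit-code handling rules above can be summarized as a small decision function. This is an illustrative Python sketch, not HQueue's actual implementation; the outcome strings are placeholders:

```python
def classify_exit(code, handle_by, handle_all_nonzero=True, handled_code=None):
    """Map a job exit code to a work-item outcome, mirroring the
    Handle By / Handle All Non Zero parameters described above.

    handle_by is one of: "error", "warning", "retry", "ignore".
    """
    if code == 0:
        return "success"
    if not handle_all_nonzero and code != handled_code:
        return "error"  # unhandled non-zero codes fail as normal
    return {"error": "error",      # Reporting Error: work item fails
            "warning": "warning",  # Reporting Warning: succeeds with warning
            "retry": "retry",      # Retrying Task: retried per Retries
            "ignore": "success"}[handle_by]  # Ignoring Exit Code: succeeds
```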
Number of times to retry the job when the command fails.
Houdini Max Threads
Sets the HOUDINI_MAXTHREADS environment variable to the given value. By default, HOUDINI_MAXTHREADS is set to the value of CPUs per Job, if that parameter is enabled.
The default of 0 means to use all available processors.
Positive values will limit the number of threads that can be used. A value of 1 will disable multithreading entirely (limiting to only one thread). Positive values will be clamped to the number of CPU cores available.
If the value is negative, the value is added to the maximum number of processors to determine the threading limit. For example, a value of -1 will use all CPU cores except 1.
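The interpretation of the value can be sketched as a small Python function. Clamping a negative result up to 1 is an assumption here, not documented behavior:

```python
import os

def effective_max_threads(value, cores=None):
    """Compute the thread limit implied by a HOUDINI_MAXTHREADS value.

    0 means all cores; positive values are clamped to the core count;
    negative values are added to the core count (clamped to at least 1,
    which is an assumption for the degenerate case).
    """
    if cores is None:
        cores = os.cpu_count() or 1
    if value == 0:
        return cores
    if value > 0:
        return min(value, cores)
    return max(1, cores + value)
```

On an 8-core client, a value of -1 yields a limit of 7 threads, and a value of 16 is clamped to 8.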
Inherit Local Environment
When enabled, environment variables in the current session of Houdini will be copied into the Job’s environment.
Lets you add custom key-value environment variables for each task.
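Combined with Inherit Local Environment, the per-task environment can be thought of as a merge in which the custom variables win on conflict. A minimal Python sketch:

```python
import os

def build_job_env(inherit_local, custom_vars):
    """Assemble the environment for a task: optionally start from the
    current session's environment, then apply per-task key-value pairs.
    Custom values override inherited ones on conflict."""
    env = dict(os.environ) if inherit_local else {}
    env.update(custom_vars)
    return env
```

This mirrors the precedence described above: a custom variable with the same name as a local one replaces the inherited value for that task.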