Houdini 19.5 Executing tasks with PDG/TOPs

Command servers

Command blocks let you start up remote processes (such as Houdini or Maya instances), send the server commands, and shut down the server.

A “command server” is a long-running process. You can start a command server and interact with it sequentially across multiple nodes/work items within a session (between starting the server and shutting it down).

This lets you start up a remote application, such as an instance of Houdini or Maya, and send it a series of commands. For example, you could open a starting geometry file, perform some operations on it, and write out the result.

(See also the HDA Processor and Geometry Import nodes, which can run Houdini geometry-generation networks as “one-shots” rather than as a command server.)

Command blocks are similar to feedback loop blocks: work items inside the block run serially, in order, within the session. The block's parameters let you start up multiple serial sessions.

Houdini ships with two built-in command chain tools: one for sending Python scripts to a persistent Houdini instance, and one for sending MEL scripts to a persistent Maya instance. There is also a Python-based command chain that can start, stop, and send commands to an arbitrary server using XML-RPC.
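The XML-RPC pattern can be sketched with Python's standard library. This is an illustrative toy, not PDG's actual protocol: the class and the `run_command`/`shutdown` method names are assumptions, and a real server would execute the received script inside its host application rather than echoing it back.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

class CommandServer:
    """A long-running process that executes commands serially until told to quit."""

    def __init__(self, host="127.0.0.1", port=0):
        # Port 0 asks the OS for any free port.
        self._server = SimpleXMLRPCServer((host, port), logRequests=False,
                                          allow_none=True)
        self.port = self._server.server_address[1]
        self._running = True
        self._server.register_function(self.run_command, "run_command")
        self._server.register_function(self.shutdown, "shutdown")

    def run_command(self, script):
        # A real server would execute the script in its host application
        # (a Houdini or Maya session); here we just echo it back.
        return "ran: " + script

    def shutdown(self):
        # Flag the serve loop to exit after this request completes.
        self._running = False
        return True

    def serve(self):
        # handle_request() processes one call at a time, mirroring how work
        # items inside a command block are cooked serially.
        while self._running:
            self._server.handle_request()

# Start the "remote" server in a background thread for this demo.
server = CommandServer()
thread = threading.Thread(target=server.serve, daemon=True)
thread.start()

# A client (standing in for Command Send work items) drives the session.
proxy = ServerProxy("http://127.0.0.1:%d" % server.port)
result = proxy.run_command("print('hello')")
proxy.shutdown()   # the "Server End" step
thread.join(timeout=5)
print(result)  # → ran: print('hello')
```

The key property this sketch shares with a command block is state: the same server object handles every call in the session, so later commands can build on the results of earlier ones.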


One important advantage of using Houdini as a command server over calling networks or assets is that you can use Houdini as a content creation tool. For example, you could use a Houdini server to generate a new HIP file in a branch of the TOP network, and then use the generated HIP file later in the network.
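As a rough sketch of that workflow, the script below is the kind of payload a controller might send to a persistent Houdini server session. The node names and output path are hypothetical, and `hou` (Houdini's Python module) only exists inside a Houdini process, so the script is shown here as the string that would be transmitted:

```python
# Script to send to a persistent Houdini server session. The node names and
# output path are hypothetical; `hou` only exists inside a Houdini process.
generate_hip = """
import hou

# Build some content: a Geometry object with a box inside it.
geo = hou.node("/obj").createNode("geo", "generated_geo")
geo.createNode("box")

# Save the result as a new HIP file for downstream nodes to use.
hou.hipFile.save("generated.hip")
"""

# A Command Send node would transmit this string to the running server,
# for example via the XML-RPC command chain.
print(generate_hip.strip().splitlines()[0])  # → import hou
```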

How to

  1. In a TOP network editor, press ⇥ Tab and choose a “command chain” tool, such as “Maya Command Chain”.

    The tool puts down a “Server Begin” node and a “Server End” node specific to the service type.

  2. Select the Begin node. In the parameter editor, choose how to specify the number of sessions:

    • The default is to run the number of sessions specified in the Number of sessions parameter. If the Begin node has upstream work items, the command chain runs that many sessions for each incoming item.

    • If the Begin node has static work items, you can turn on Session count from upstream items. This sets the number of sessions to be the number of upstream items. This repeats the command chain within the block once for each input item.

    Multiple sessions are cooked serially: the block will cook the first session from top to bottom before starting the second session.

  3. If your Begin node generates items dynamically, you must also turn on Use dynamic partitioning on the End node.

  4. Wire Command Send nodes between the Begin and End nodes to make them part of the block.

Houdini draws a border around the nodes in the block to help you visualize it.
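The session-count rules in step 2 can be sketched as a small function. The parameter names below are descriptive stand-ins, not the actual parameter identifiers:

```python
def total_sessions(num_upstream_items, session_count, count_from_upstream):
    """How many serial sessions a command block runs, per the rules above."""
    if count_from_upstream:
        # "Session count from upstream items": one session per input item.
        return num_upstream_items
    if num_upstream_items > 0:
        # With upstream items, the chain runs session_count times per item.
        return num_upstream_items * session_count
    # No inputs: just the Number of sessions parameter.
    return session_count

print(total_sessions(0, 3, False))  # → 3
print(total_sessions(4, 3, False))  # → 12
print(total_sessions(4, 3, True))   # → 4
```

Whatever the total, the sessions cook one after another: each runs the block top to bottom before the next begins.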


  • The Begin node generates work items that start one or more server sessions. The server process keeps running after the “startup” item completes.

  • The End node runs the server shutdown code after the last upstream command in each session completes. (If the graph stops cooking before then, the scheduler automatically terminates all known server processes.)

  • If you are using a render farm scheduler, each server session is locked to one machine.

  • Color the begin and end nodes of a block the same to make their relationship clear. The nodes put down by the built-in command chain tools already have distinct colors, but you can change them. This is especially useful for distinguishing nested blocks.

    The border around the block takes on the color of the end node.

  • Newly created servers are registered with the active scheduler. You can get a reference to a server by name using the scheduler API. The server itself is started by a work item, but runs much longer than the cook duration of that work item.


The following directories contain example HIP files demonstrating how to use Houdini and Maya command server blocks.

  • $HFS/houdini/help/files/pdg_examples/top_houdinipipeline

  • $HFS/houdini/help/files/pdg_examples/top_mayapipeline


Next steps

  • Running external programs

    How to wrap external functionality in a TOP node.

  • File tags

    Work items track the results created by their work. Each result is tagged with a type.

  • PDG Path Map

    The PDG Path Map manages the mapping of paths between file systems.

  • Feedback loops

    You can use for-each blocks to process looping, sequential chains of operations on work items.

  • PDG Services

    PDG services manage pools of persistent Houdini sessions that can be used to reduce work item cooking time.

  • Integrating PDG with render farm schedulers

    How to use different schedulers to schedule and execute work.

  • Visualizing work item performance

    How to visualize the relative cook times (or file output sizes) of work items in the network.

  • Event handling

    You can register a Python function to handle events from a PDG node or graph.

  • Tips and tricks

    Useful general information and best practices for working with TOPs.

  • Troubleshooting PDG scheduler issues on the farm

    Useful information to help you troubleshoot scheduling PDG work items on the farm.

  • PilotPDG

    Standalone application or limited license for working with PDG-specific workflows.


  • All TOPs nodes

    TOP nodes define a workflow where data is fed into the network, turned into work items and manipulated by different nodes. Many nodes represent external processes that can be run on the local machine or a server farm.

  • Processor Node Callbacks

    Processor nodes generate work items that can be executed by a scheduler.

  • Partitioner Node Callbacks

    Partitioner nodes group multiple upstream work items into single partitions.

  • Scheduler Node Callbacks

    Scheduler nodes execute work items.

  • Custom File Tags and Handlers

    PDG uses file tags to determine the type of an output file.

  • Python API

    The classes and functions in the Python pdg package for working with dependency graphs.

  • Job API

    Python API used by job scripts.

  • Utility API

    The classes and functions in the Python pdgutils package are intended for use in PDG nodes and scripts as well as in out-of-process job scripts.