Houdini 18.5 TOP nodes

Python Server Begin TOP node

Since 18.0

Command chains

Starts a Python command server.

This node creates a work item that starts up a Python server. It is the start of a block which ends with a Command Server End node.

Note that this is a simple Python process, not a hython process, so the hou module is not available and no license is required. If you want to use a hython server, see the Houdini Server Begin node.

  • For the duration of the block, the server stays running and ready to accept commands.

  • You can use Command Send nodes within the block to communicate with the server. The work items within the block execute serially (instead of in parallel), one after another.

  • After the last work item in a session finishes, the Command Server End node shuts down the server.

  • (Optional) You can schedule multiple parallel servers or multiple serial session loops through the same server. See the Session Count from Upstream Items and Number of Sessions parameters.

For more information about using command chains, see command servers.

Implementing a server

The Python server is designed to allow you to implement your own simple servers. The Python server nodes support an XML-RPC-based protocol that provides a remote function call interface by passing snippets of XML over HTTP. You can implement an XML-RPC server using a library included in the Python standard library (SimpleXMLRPCServer in Python 2.7, xmlrpc.server in Python 3). If you plan to do this, you can use the pdgjob.genericrpc module as a model.
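As a rough sketch of what such a server involves (the function names here are hypothetical examples; the real pdgjob.genericrpc module additionally handles reporting its port back to PDG and shutdown handling), a minimal XML-RPC server and a client call might look like:

```python
# Minimal XML-RPC server sketch using the Python standard library.
# The registered functions (ping, add) are hypothetical examples, not
# part of the PDG protocol.
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Port 0 asks the OS to dynamically select an unused port, mirroring
# the default behavior of the Server Port parameter.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
server.register_function(lambda: "ok", "ping")
server.register_function(lambda a, b: a + b, "add")

host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

# A Command Send work item communicates with the server by making
# remote calls like these, passing snippets of XML over HTTP:
proxy = ServerProxy("http://%s:%s" % (host, port))
ping_result = proxy.ping()
add_result = proxy.add(2, 3)
server.shutdown()
print(ping_result, add_result)
```

Running the server loop on a background thread keeps the example self-contained; in a real deployment the script's main thread would run serve_forever until the Command Server End node shuts it down.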

The Server Command parameter specifies the command line that starts the server. The command line runs a Python interpreter, which may be your system Python or the Python bundled with Houdini; you specify which Python to use with the __PDG_PYTHON__ token in the command. By default, the command line runs __PDG_SCRIPTDIR__/genericrpc.py. To get started, you can base your own specialized server on that file.
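For example, a customized Server Command might run your own script (the script path below is hypothetical) with the interpreter PDG selects:

```shell
# __PDG_PYTHON__ and __PDG_SCRIPTDIR__ are expanded by PDG when the
# work item cooks. The default command runs the bundled genericrpc.py:
__PDG_PYTHON__ __PDG_SCRIPTDIR__/genericrpc.py

# A customized server based on that file, placed on a shared file
# system so remote machines can reach it (hypothetical path):
__PDG_PYTHON__ /shared/pdg/scripts/myserver.py
```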

TOP Attributes



sharedserver

The name of the shared server instance associated with the work item. In the case of the begin item, it is the name of the server that the work item will eventually create.



loopiter

This attribute is inherited from the Block Begin Feedback node.

The loop iteration number within the set of work items associated with the loop. Since the iteration number at each level is preserved, this attribute can be an array of values when using nested feedback loops. The loop iteration value for the outermost loop is stored in loopiter[0], the next level is stored in loopiter[1], and so on.



loopnum

This attribute is inherited from the Block Begin Feedback node.

Tracks which loop the work item is associated with. This attribute is relevant when generating multiple independent loops from the same Block Begin Feedback node, for example when driving the Block Begin Feedback node with a Wedge node.



loopsize

This attribute is inherited from the Block Begin Feedback node.

The total number of iterations in the loop.



Generate When

Determines when this node generates work items. You should generally leave this set to "Automatic" unless you know that the node requires a specific generation mode, or that its work items need to be generated dynamically.

All Upstream Items are Generated

This node will generate work items once all of the input nodes have generated their work items.

All Upstream Items are Cooked

This node will generate work items once all of the input nodes have cooked their work items.

Each Upstream Item is Cooked

This node will generate work items each time a work item in an input node is cooked.


Automatic

The generation mode is selected based on the generation mode of the input nodes. If any of the input nodes are generating work items when their inputs cook, this node will be set to Each Upstream Item is Cooked. Otherwise, it will be set to All Upstream Items are Generated.

Session Count from Upstream Items

When on, the node creates a single server instance and serially loops through the block once for each incoming work item. When off, the node creates parallel server instances for each incoming work item. You can specify the number of session loops per instance with the Number of Sessions parameter below.

Number of Sessions

Specifies the number of times to loop through the session block for each server instance. These session loops execute serially one after the other.

This parameter is only available when Session Count from Upstream Items is off.

Copy Inputs For

Determines how input files are copied onto loop items. By default, upstream files are copied onto all loop iterations. However, you can also copy input files onto only the first iteration, or onto none of the loop iterations.

No Iterations

Upstream input files are not copied to the outputs of any loop iteration items.

First Iteration

Upstream input files are copied to the output file list for only the first loop iteration.

All Iterations

Upstream input files are copied to the output file list of all iterations.

Connect To Existing Server

When on, the work item connects to an existing server on a host address, rather than starting up a new server process. This can be useful when you want to connect to a central server such as an asset management system. However, you must make sure to serialize access to the server.

Server Name

Specifies the name for the server instance. This name must be unique across the TOP network.

Since this node can start up multiple servers at once, it usually appends the incoming work item index to the name. Whether or not this occurs is determined by the Append Index to Server Name parameter below.

If you have multiple nested command server blocks, the Command Send node can use this parameter to choose which server to interact with.

Append Index to Server Name

When on, appends the current work item index to the Server Name for each parallel instance of the server. For example, a server named sharedserver becomes sharedserver0, sharedserver1, and so on. This prevents any name conflicts between parallel instances.

Server Address

When Connect To Existing Server is on, the host address of the server to connect to. Set the Server Port parameter to the port used by the existing server.

Server Port

When Connect To Existing Server is on, specifies the port to use to connect to an existing server. When Connect To Existing Server is off, specifies the TCP port number the server should bind to.

The default value 0 tells the system to dynamically select an unused port. This is the behavior you usually want. If you want to keep the ports in a certain range (and can guarantee the port numbers will be available), you can also use an expression like 9000 + @pdg_index. For information on how to pass this value to the server startup script, see the Server Command parameter.


On UNIX systems, ports 0-1023 can only be bound by a superuser. As such, you most likely need to choose a port above 1023.
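The dynamic selection works like ordinary socket binding: asking the operating system for port 0 yields an unused port. A plain-socket sketch, independent of PDG:

```python
# Binding to port 0 asks the OS to pick an unused port; reading the
# socket name back reveals which port was actually chosen.
import socket

sock = socket.socket()
sock.bind(("127.0.0.1", 0))
chosen_port = sock.getsockname()[1]
sock.close()
print(chosen_port)
```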

Load Timeout

Specifies the timeout used when performing an initial verification that the shared server instance can be reached. When this timeout passes without a successful communication, the work item for that server is marked as failed.

Server Command

Specifies the command to run to start the server. This command line should work when running on a remote server. As such, you should put the start script in a known location on your shared network file system.

Loop Attribute Names

These parameters customize the names of the work item attributes created by this node.


File Dependencies

A list of files that should be copied to the PDG working directory before the first work item in this node is executed. This can be used to ensure that supporting files like digital assets and custom scripts are available for the work item job.

The specified paths can be absolute or relative to HOUDINI_PATH.


TOP Scheduler Override

This parameter overrides the TOP scheduler for this node.

Work Item Priority

This parameter determines how the current scheduler prioritizes the work items in this node.

Inherit From Upstream Item

The work items inherit their priority from their parent items. If a work item has no parent, its priority is set to 0.

Custom Expression

The work item priority is set to the value of Priority Expression.

Node Defines Priority

The work item priority is set based on the node’s own internal priority calculations.

This option is only available on the Python Processor TOP, ROP Fetch TOP, and ROP Output TOP nodes. These nodes define their own prioritization schemes that are implemented in their node logic.

Priority Expression

This parameter specifies an expression for work item priority. The expression is evaluated for each work item in the node.

This parameter is only available when Work Item Priority is set to Custom Expression.

See also

TOP nodes