Houdini 18.0 Nodes TOP nodes

Python Server Begin TOP node

Starts a python command server.

Command chains

This node creates a work item that starts up a python server. It is the start of a block which ends with a Command Server End node. Note that this is a simple python process, not a hython process, so the hou module is not available, and no license is required. See Houdini Server Begin for a hython server.

  • For the duration of the block, the server stays running and ready to accept commands.

  • Within the block you can use Command Send nodes to communicate with the server. The work items within the block execute serially (instead of in parallel), one after another.

  • After the last work item in a session finishes, the Command Server End node shuts down the server.

  • You can optionally schedule multiple parallel servers, or multiple serial session "loops" through the same server. See the Session count from upstream items and Number of sessions parameters.

See command servers for additional details on the use of command chains.

Implementing a server

The "python" server is designed to allow you to implement your own simple servers. The python server nodes support an XML/RPC based protocol that provides a "remote function call" interface by passing snippets of XML over HTTP. You can implement an XML/RPC server using a library included in the Python standard library (SimpleXMLRPCServer in Python 2.7, xmlrpc.server in Python 3). The pdgjob.genericrpc module can be used as a model for this purpose.
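As a sketch of what such a server can look like using only the Python standard library (the registered method names `ping` and `add` below are purely illustrative, not part of any PDG API):

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Minimal XML-RPC server sketch. The registered method names
# ("ping", "add") are illustrative; a real command server exposes
# whatever functions your Command Send work items need to call.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda: "pong", "ping")
server.register_function(lambda a, b: a + b, "add")

host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client (playing the role of a Command Send work item) invokes the
# registered functions by passing XML over HTTP.
proxy = xmlrpc.client.ServerProxy(f"http://{host}:{port}")
pong = proxy.ping()
total = proxy.add(2, 3)
print(pong, total)  # -> pong 5

server.shutdown()
```

A production script would typically also register a shutdown-style method so the end of the block can terminate the server cleanly, which is the pattern pdgjob.genericrpc demonstrates.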

The Server command parameter specifies the command line that starts the server. See the parameter help for information on passing the hostname and port number. By default it runs python, which may be your system python or the python bundled with Houdini; this "default" python is indicated in the command with __PDG_PYTHON__. The default command runs "PDG_SCRIPTDIR/genericrpc.py". You can base your own specialized server on that file to get started.

TOP Attributes



sharedserver

The sharedserver attribute specifies the name of the shared server instance associated with the work item. In the case of the begin item, it’s the name of the server that the work item will eventually create.



loopiter

This attribute is inherited from the Feedback Begin node.

The loop iteration number, within the set of work items associated with the loop. This attribute can be an array of values when using nested feedback loops, since the iteration number at each level is preserved. The loop iteration value for the outer most loop is stored in loopiter[0], the next level is stored in loopiter[1], and so on.
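For instance, an outer loop of two iterations wrapping an inner loop of three produces loopiter arrays like the following (a plain-Python sketch of the indexing, not PDG API):

```python
# Sketch: with nested feedback loops, the outer iteration number is
# stored in loopiter[0] and the inner iteration number in loopiter[1].
loopiter_values = []
for outer in range(2):        # outermost loop
    for inner in range(3):    # nested loop
        loopiter_values.append([outer, inner])

print(loopiter_values)
# -> [[0, 0], [0, 1], [0, 2], [1, 0], [1, 1], [1, 2]]
```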



loopnum

This attribute is inherited from the Feedback Begin node.

Tracks which loop the work item is associated with. This attribute is relevant when generating multiple independent loops in the same feedback begin node, for example by driving the feedback begin node with a Wedge node.



loopsize

This attribute is inherited from the Feedback Begin node.

The total number of iterations in the loop.



Work Item Generation

Whether this node generates static or dynamic work items. You should generally leave this set to "Automatic" unless you know the node’s work items can be computed statically, or that they need to be generated dynamically.


Dynamic

This node always creates dynamic work items: it waits until the upstream work items are known, and generates new work items from the upstream work items.


Static

This node always creates static work items: it creates the number of work items it thinks it needs based on the parameters (and any upstream static items) before the network runs.


Automatic

If the input is static (a static processor, or a partitioner with only static inputs, or a mapper), this node generates static work items, otherwise it generates dynamic work items.

Session Count from Upstream Items

If this is on, the node creates a single server instance, and serially loops through the block once for each incoming work item. If this is off, the node creates parallel server instances for each incoming work item (and you can specify the number of session loops per instance using the Number of sessions parameter).

Number of Sessions

When Session count from upstream items is off, this is the number of times to loop through the session block for each server instance. These session loops execute serially, one after the other.
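The scheduling arithmetic implied by these two parameters can be sketched in plain Python (this helper is illustrative only, not PDG API), assuming the description above of serial versus parallel sessions:

```python
def session_plan(upstream_items, sessions_per_server, count_from_upstream):
    """Sketch of the session arithmetic described above (not PDG API).

    Returns (number of server instances, total session loops).
    """
    if count_from_upstream:
        # One server; one serial session loop per incoming work item.
        return 1, upstream_items
    # One parallel server per incoming item, each looping serially
    # through the block sessions_per_server times.
    return upstream_items, upstream_items * sessions_per_server

print(session_plan(4, 2, True))   # -> (1, 4)
print(session_plan(4, 2, False))  # -> (4, 8)
```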

Copy Inputs For

Determines how input files are copied onto loop items. By default, upstream files are copied onto all loop iterations, but you can instead copy them only onto the first iteration, or onto none of the iterations.

No Iterations

Upstream input files are not copied to the outputs of any loop iteration items.

First Iteration

Upstream input files are copied to the output file list only for the first loop iteration.

All Iterations

Upstream input files are copied to the output file list of all iterations.

Server Name

A name for the server instance. The Command Send node can use this name to choose which server to interact with when you nest multiple command server blocks. The name should be unique across the TOP network. Since this node can start up multiple servers at once, it usually appends the incoming work item index to the name, controlled by the Append index to server name checkbox.

Append Index to Server Name

Appends the current work item index to the Server name for each parallel instance of the server (for example, sharedserver0, sharedserver1, and so on). This prevents name conflicts between parallel instances.

Server Port

The TCP port number the server should bind to (when Connect to existing server is off), or the port used to connect to an existing server (when Connect to existing server is on). The default value of 0 tells the system to dynamically choose an unused port, which is usually what you want. If you need the ports to fall in a certain range (and can guarantee those port numbers are available), you can use an expression here such as 9000 + @pdg_index. See the Server command parameter help for how to pass this value to the server startup script.

Note that on UNIX systems ports 0-1023 can only be bound by a superuser, so you will probably need to choose a port above 1023.
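The dynamic-port behavior relies on standard OS socket semantics: binding to port 0 asks the kernel for any free port, which the startup script can then report back. A minimal Python illustration:

```python
import socket

# Binding to port 0 lets the OS pick an unused port; a server
# startup script can then report the actual port it was given.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
actual_port = sock.getsockname()[1]
print(actual_port)
sock.close()
```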

Connect to Existing Server

When this is on, the work item connects to an existing server at a host address rather than starting up a new server process. This can be useful for connecting to a central server such as an asset management system; however, you must be sure to serialize access to the server.

Server Address

When Connect to existing server is on, the host address of the server to connect to. Set the Server port parameter to the port used by the existing server.

Load Timeout

The timeout used when performing an initial verification that the shared server instance can be reached. If this timeout passes without a successful communication, the work item for that server is marked as failed.

Server Command

The command to run to start the server. This command line should work when run on a remote server, so you probably want to put the start script in a known location on the shared network filesystem.

Loop Attribute Names

These parameters can be used to customize the names of the work item attributes created by this node.


Iteration Number

The name of the attribute that stores the work item’s iteration number.

Number of Iterations

The name of the attribute that stores the total iteration count.

Loop Number

The name of the attribute that stores the loop number.


TOP Scheduler Override

This parameter overrides the TOP scheduler for this node.

Work Item Priority

This parameter determines how the current scheduler prioritizes the work items in this node.

Inherit From Upstream Item

The work items inherit their priority from their parent items. If a work item has no parent, its priority is set to 0.

Custom Expression

The work item priority is set to the value of Priority Expression.

Node Defines Priority

The work item priority is set based on the node’s own internal priority calculations.

This option is only available on the Python Processor TOP, ROP Fetch TOP, and ROP Output TOP nodes. These nodes define their own prioritization schemes that are implemented in their node logic.

Priority Expression

This parameter is only available when Work Item Priority is set to Custom Expression.

This parameter specifies an expression for work item priority. The expression is evaluated for each work item in the node.

See also

TOP nodes