Houdini 18.0 Nodes TOP nodes

Maya Server Begin TOP node

Starts a persistent Maya command server

This node is a specialized version of the generic Python Server node that can be used to create Maya Server work items. Session work items in this node are associated with a long-running Maya process, and can be used to run MEL or Python code using the Command Send node.

See command servers for additional details on the use of command chains.

TOP Attributes



sharedserver

The sharedserver attribute specifies the name of the shared Maya instance associated with the work item. In the case of the begin item, it's the name of the server that the work item will eventually create.



loopiter

This attribute is inherited from the Feedback Begin node.

The loop iteration number, within the set of work items associated with the loop. This attribute can be an array of values when using nested feedback loops, since the iteration number at each level is preserved. The loop iteration value for the outer most loop is stored in loopiter[0], the next level is stored in loopiter[1], and so on.
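As a sketch (not the PDG implementation), nested feedback loops can be thought of as producing one loopiter value per nesting level, outermost first:

```python
# Illustrative sketch: how nested feedback loops map to a "loopiter"
# array, with loopiter[0] for the outermost loop and loopiter[1] for
# the next level. Function names here are hypothetical.
def loopiter_values(outer_count, inner_count):
    """Yield the loopiter array for each work item of two nested loops."""
    for outer in range(outer_count):
        for inner in range(inner_count):
            yield [outer, inner]

for iters in loopiter_values(2, 3):
    print(iters)
```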



loopnum

This attribute is inherited from the Feedback Begin node.

Tracks which loop the work item is associated with. This attribute is relevant when generating multiple independent loops from the same Feedback Begin node, for example by driving the Feedback Begin node with a Wedge node.



loopsize

This attribute is inherited from the Feedback Begin node.

The total number of iterations in the loop.



Work Item Generation

Whether this node generates static or dynamic work items. You should generally leave this set to "Automatic" unless you know the node’s work items can be computed statically, or that they need to be generated dynamically.


Dynamic

This node always creates dynamic work items: it waits until the upstream work items are known, and generates new work items from the upstream work items.


Static

This node always creates static work items: it creates the number of work items it thinks it needs based on the parameters (and any upstream static items) before the network runs.


Automatic

If the input is static (a static processor, or a partitioner with only static inputs, or a mapper), this node generates static work items; otherwise it generates dynamic work items.

Session Count from Upstream Items

When this toggle is enabled, the node creates a single server work item, with one session on that server for each upstream work item. Otherwise, a separate server work item is created for each upstream work item.

Number of Sessions

The number of sessions to create with the server. Each session work item cooks serially with the other sessions that use the same server. The chain of work items starting from a session item down to the Command Server End node will cook to completion before the next session starts.
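The serial behavior described above can be sketched as a simple loop, under the assumption that each session owns a chain of work items that must finish before the next session begins (names here are illustrative, not the PDG scheduler):

```python
# Illustrative sketch of serial session cooking: each session's chain
# of work items runs to completion before the next session starts.
def cook_sessions(session_chains):
    """session_chains: a list of sessions, each a list of callables."""
    order = []
    for chain in session_chains:      # one session at a time
        for work_item in chain:       # its whole chain, to completion
            order.append(work_item())
    return order

result = cook_sessions([
    [lambda: "session0-item0", lambda: "session0-item1"],
    [lambda: "session1-item0"],
])
```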

Copy Inputs For

Determines how input files are copied onto loop items. By default, upstream files are copied onto all loop iterations, but it's also possible to copy input files only onto the first iteration, or onto none of the iterations.

No Iterations

Upstream input files are not copied to the outputs of any loop iteration items.

First Iteration

Upstream input files are copied to the output file list only for the first loop iteration.

All Iterations

Upstream input files are copied to the output file list of all iterations.
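The three modes above can be sketched as follows, assuming each iteration item starts with an empty output file list (the function and mode names are illustrative, not PDG API):

```python
# Illustrative sketch of the "Copy Inputs For" behaviors.
def copy_inputs(upstream_files, iteration_count, mode):
    """Return the per-iteration output file lists for the given mode.

    mode: "none", "first", or "all".
    """
    outputs = []
    for i in range(iteration_count):
        if mode == "all" or (mode == "first" and i == 0):
            outputs.append(list(upstream_files))
        else:
            outputs.append([])
    return outputs
```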

Server Name

The name of the shared server, which Command Send nodes use to access the server. This name must be unique, which can be ensured by enabling the Append Index to Server Name parameter.

Append Index to Server Name

Appends the work item index to the end of the Shared Server Name when registering the server. This is useful when generating multiple shared server work items in this node.
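Conceptually, this toggle makes each registered name unique by suffixing the work item index, which can be sketched as (function name is illustrative):

```python
# Illustrative sketch of how appending the work item index yields a
# unique server name per work item.
def server_name(base, index, append_index=True):
    return "{}{}".format(base, index) if append_index else base

# Four work items produce four distinct names.
names = {server_name("mayaserver", i) for i in range(4)}
```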

Server Port

The TCP port number the server should bind to (when Connect to Existing Server is off), or the port to use to connect to an existing server (when Connect to Existing Server is on). The default value of 0 tells the system to dynamically choose an unused port, which is usually what you want. If you need to keep the ports in a certain range (and can guarantee the port numbers will be available), you can use an expression here such as 9000 + @pdg_index.
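Binding to port 0 is a standard OS facility, not something specific to PDG; a minimal sketch of what "dynamically choose an unused port" means:

```python
import socket

# Sketch: binding to port 0 asks the operating system for any unused
# port, which is what the default Server Port value of 0 relies on.
def pick_free_port():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))       # port 0 -> OS picks a free port
    port = s.getsockname()[1]      # read back the port that was chosen
    s.close()
    return port
```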

Connect to Existing Server

When this toggle is enabled, the work item will connect to an existing server rather than spawning a new one.

Server Address

The existing server address, when Connect to Existing Server is enabled.

Load Timeout

The timeout used when performing an initial verification that the shared server instance can be reached. If this timeout passes without a successful communication, the work item for that server is marked as failed.
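A sketch of this kind of reachability check, retrying a TCP connection until a deadline (the function and its retry intervals are illustrative, not the actual PDG implementation):

```python
import socket
import time

# Illustrative sketch: keep trying to connect until the timeout
# expires; a False result corresponds to the work item being failed.
def server_reachable(host, port, timeout):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=0.25):
                return True
        except OSError:
            time.sleep(0.05)       # brief pause before retrying
    return False
```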

Maya Python Executable

The path to the mayapy executable that should be started. The default value is $PDG_MAYAPY, which expects that environment variable to be defined when the work item executes. However, this can be set to an absolute path if it's known.
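The resolution order implied by the default value can be sketched as: an explicit absolute path wins, otherwise the environment variable is read at execution time (the function name is hypothetical):

```python
import os

# Illustrative sketch of resolving the mayapy path: an explicit
# override takes precedence, otherwise fall back to $PDG_MAYAPY.
def resolve_mayapy(override=None):
    if override:
        return override
    return os.environ.get("PDG_MAYAPY", "")
```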

Loop Attribute Names

These parameters can be used to customize the names of the work item attributes created by this node.


Iteration Number

The name of the attribute that stores the work item's iteration number.

Number of Iterations

The name of the attribute that stores the total iteration count.

Loop Number

The name of the attribute that stores the loop number.


TOP Scheduler Override

This parameter overrides the TOP scheduler for this node.

Work Item Priority

This parameter determines how the current scheduler prioritizes the work items in this node.

Inherit From Upstream Item

The work items inherit their priority from their parent items. If a work item has no parent, its priority is set to 0.

Custom Expression

The work item priority is set to the value of Priority Expression.

Node Defines Priority

The work item priority is set based on the node’s own internal priority calculations.

This option is only available on the Python Processor TOP, ROP Fetch TOP, and ROP Output TOP nodes. These nodes define their own prioritization schemes that are implemented in their node logic.

Priority Expression

This parameter is only available when Work Item Priority is set to Custom Expression.

This parameter specifies an expression for work item priority. The expression is evaluated for each work item in the node.
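Per-item evaluation can be sketched as mapping an expression over the node's work items; for example, an expression like @pdg_index gives each item its own index as its priority (the helper name here is illustrative):

```python
# Illustrative sketch: a custom priority expression evaluated once per
# work item in the node.
def evaluate_priorities(item_indices, expression):
    return [expression(i) for i in item_indices]

# Mimics an expression such as "@pdg_index".
priorities = evaluate_priorities(range(4), lambda pdg_index: pdg_index)
```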

See also

TOP nodes