Since 18.0
Command chains ¶
Starts a Python command server.
This node creates a work item that starts up a Python server. It is the start of a block which ends with a Command Server End node.
Note that this is a simple Python process, not a hython process, so the hou module is not available and no license is required. If you want to use a hython server, see the Houdini Server Begin node.
- For the duration of the block, the server stays running and ready to accept commands.
- You can use Command Send nodes within the block to communicate with the server. The work items within the block execute serially (instead of in parallel), one after another.
- After the last work item in a session finishes, the Command Server End node shuts down the server.
- (Optional) You can schedule multiple parallel servers or multiple serial session loops through the same server. See the Session Count from Upstream Items and Number of Sessions parameters.
For more information about using command chains, see command servers.
Implementing a server ¶
The Python server is designed to allow you to implement your own simple servers. The Python server nodes support an XML-RPC based protocol that provides a remote function call interface by passing snippets of XML over HTTP. You can implement an XML-RPC server using a library included in the Python standard library (SimpleXMLRPCServer in Python 2.7 and xmlrpc.server in Python 3). If you plan to do this, you can use the pdgjob.genericrpc module as a model.
The Server Command parameter specifies the command line used to start the server. The command line runs a Python interpreter, which may be your system Python or the Python bundled with Houdini; use the __PDG_PYTHON__ token in the command to refer to the configured Python. By default, the command line runs __PDG_SCRIPTDIR__/genericrpc.py. To get started, you can base your own specialized server on that file.
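For example, a minimal custom server might look like the following sketch. It uses only the Python 3 standard library; the function names, the shutdown approach, and printing the chosen port are illustrative assumptions, not the actual behavior of pdgjob.genericrpc.

```python
# Minimal XML-RPC server sketch (Python 3 standard library only).
# Illustrative only -- not the actual pdgjob.genericrpc implementation.
import threading
from xmlrpc.server import SimpleXMLRPCServer

# Binding to port 0 lets the OS pick a free port, which matches the node's
# default Server Port behavior. Print the chosen port so it appears in the
# job output.
server = SimpleXMLRPCServer(("0.0.0.0", 0), allow_none=True)
print("serving on port", server.server_address[1], flush=True)

def echo(value):
    """Example remote function a Command Send work item could call by name."""
    return value

def stop():
    """Shut the server down from a remote call. shutdown() runs in a thread
    so serve_forever() can finish handling this request without deadlocking."""
    threading.Thread(target=server.shutdown).start()
    return True

server.register_function(echo, "echo")
server.register_function(stop, "stop")
server.serve_forever()
```

A client can then call the registered functions over HTTP, for example with xmlrpc.client.ServerProxy("http://host:port").echo("hello").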
TOP Attributes ¶
| Type | Description |
| --- | --- |
| str | The name of the shared server instance associated with the work item. In the case of the begin item, it is the name of the server that the work item eventually creates. |
| [int] | The loop iteration number within the set of work items associated with the loop. Since the iteration number at each level is preserved, this attribute can be an array of values when using nested feedback loops. The loop iteration value for the outermost loop is stored in the first entry of the array. |
| int | Tracks which loop the work item is associated with. This attribute is relevant when generating multiple independent loops in the same Block Begin Feedback node, for example when the Block Begin Feedback node is driven by an upstream node that generates multiple work items. |
| int | The total number of iterations in the loop. |
Parameters ¶
Node ¶
Generate When
Determines when this node will generate work items. You should generally leave this set to “Automatic” unless you know the node requires a specific generation mode, or that the work items need to be generated dynamically.
All Upstream Items are Generated
This node will generate work items once all of the input nodes have generated their work items.
All Upstream Items are Cooked
This node will generate work items once all of the input nodes have cooked their work items.
Each Upstream Item is Cooked
This node will generate work items each time a work item in an input node is cooked.
Automatic
The generation mode is selected based on the generation mode of the input nodes. If any of the input nodes are generating work items when their inputs cook, this node will be set to Each Upstream Item is Cooked. Otherwise, it will be set to All Upstream Items are Generated.
Session Count from Upstream Items
When on, the node creates a single server instance and serially loops through the block once for each incoming work item. When off, the node creates a separate server instance for each incoming work item, and those instances run in parallel. You can specify the number of session loops per instance with the Number of Sessions parameter below.
Number of Sessions
Specifies the number of times to loop through the session block for each server instance. These session loops execute serially one after the other.
This parameter is only available when Session Count from Upstream Items is off.
Copy Inputs For
Determines how input files are copied onto loop items. By default, upstream files are copied onto all loop iterations. However, it is also possible to copy input files onto only the first iteration, or onto none of the loop iterations.
No Iterations
Upstream input files are not copied to the outputs of any loop iteration items.
First Iteration
Upstream input files are copied to the output file list for only the first loop iteration.
All Iterations
Upstream input files are copied to the output file list of all iterations.
Connect To Existing Server
When on, the work item connects to an existing server on a host address, rather than starting up a new server process. This can be useful when you want to connect to a central server such as an asset management system. However, you must make sure to serialize access to the server.
Server Address
When Connect To Existing Server is on, the host address of the server to connect to. Set the Server Port parameter to the port used by the existing server.
Server Port
When Connect To Existing Server is on, specifies the port to use to connect to an existing server. When Connect To Existing Server is off, specifies the TCP port number the server should bind to.
The default value 0 tells the system to dynamically select an unused port. This is the behavior you usually want. If you want to keep the ports in a certain range (and can guarantee the port numbers will be available), you can also use an expression like 9000 + @pdg_index. For information on how to pass this value to the server startup script, see the Server Command parameter.
Note
On UNIX systems, ports 0-1023 can only be bound by a superuser. As such, you most likely need to choose a port above 1023.
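Whether the node starts its own server or connects to an existing one, the address and port identify an ordinary XML-RPC endpoint. As a sketch (hypothetical host name and port, and an echo function matching the illustrative server above), a client can verify that a server is reachable like this:

```python
# Minimal client-side check that an XML-RPC server is reachable.
# Replace the host, port, and function name with whatever your server uses.
import xmlrpc.client

server_address = "farmhost01"   # hypothetical host running the server
server_port = 9000              # must match the port the server is bound to

proxy = xmlrpc.client.ServerProxy(
    "http://{}:{}".format(server_address, server_port), allow_none=True)
print(proxy.echo("hello"))      # round-trips a value through the server
```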
Load Timeout
Specifies the timeout used when performing an initial verification that the shared server instance can be reached. When this timeout passes without a successful communication, the work item for that server is marked as failed.
Server Command
Specifies the command to run to start the server. This command line should work when running on a remote server. As such, you should put the start script in a known location on your shared network file system.
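For example, a Server Command that runs a custom script from a shared network location might look like the following sketch; the script path is hypothetical, and __PDG_PYTHON__ is the token described above for selecting the Python interpreter:

```
__PDG_PYTHON__ /mnt/pipeline/pdg/my_rpc_server.py
```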
Feedback Attributes
When on, the specified attributes are copied from the end of each iteration onto the corresponding work item at the beginning of the next iteration. This occurs immediately before the starting work item for the next iteration cooks.
Tip
The attribute(s) to feed back can be specified as a space-separated list or by using the attribute pattern syntax. For more information on how to write attribute patterns, see Attribute Pattern Syntax.
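For example, a value like the following (the attribute names are hypothetical) feeds back one attribute by name plus every attribute matching a wildcard pattern:

```
iteration_count sim_*
```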
Feedback Output Files
When on, the output files from each iteration are copied onto the corresponding work item at the beginning of the next loop iteration. The files are added as outputs of that work item, which makes them available as inputs to work items inside the loop.
These parameters customize the names of the work item attributes created by this node.
Files ¶
File Dependencies
A list of files that should be copied to the PDG working directory before the first work item in this node is executed. This can be used to ensure that supporting files like digital assets and custom scripts are available for the work item job.
The specified paths can be absolute or relative to HOUDINI_PATH.
Schedulers ¶
TOP Scheduler Override
This parameter overrides the TOP scheduler for this node.
Schedule When
When enabled, this parameter can be used to specify an expression that determines which work items from the node should be scheduled. If the expression returns zero for a given work item, that work item will immediately be marked as cooked instead of being queued with a scheduler. If the expression returns a non-zero value, the work item is scheduled normally.
Work Item Label
Determines how the node should label its work items. This parameter allows you to assign non-unique label strings to your work items which are then used to identify the work items in the attribute panel, task bar, and scheduler job names.
Use Default Label
The work items in this node will use the default label from the TOP network, or have no label if the default is unset.
Inherit From Upstream Item
The work items inherit their labels from their parent work items.
Custom Expression
The work item label is set to the Label Expression custom expression which is evaluated for each item.
Node Defines Label
The work item label is defined in the node’s internal logic.
Label Expression
When on, this parameter specifies a custom label for work items created by this node. The parameter can be an expression that includes references to work item attributes or built-in properties. For example, $OS: @pdg_frame will set the label of each work item based on its frame value.
Work Item Priority
This parameter determines how the current scheduler prioritizes the work items in this node.
Inherit From Upstream Item
The work items inherit their priority from their parent items. If a work item has no parent, its priority is set to 0.
Custom Expression
The work item priority is set to the value of Priority Expression.
Node Defines Priority
The work item priority is set based on the node’s own internal priority calculations.
This option is only available on the Python Processor TOP, ROP Fetch TOP, and ROP Output TOP nodes. These nodes define their own prioritization schemes that are implemented in their node logic.
Priority Expression
This parameter specifies an expression for work item priority. The expression is evaluated for each work item in the node.
This parameter is only available when Work Item Priority is set to Custom Expression.