Partitioner Node Callbacks

Partitioner nodes group multiple upstream work items into single partitions.

Node Callbacks

Partitioner nodes have a single callback method that receives the list of upstream work items as an input. The callback function is expected to return a pdg.result value that indicates the status of the partitioning operation.

onPartition(self, partition_holder, work_items) → pdg.result

This callback is evaluated once for each partitioner during the cook of a PDG graph, or once for each unique attribute value (if Split by Attribute is turned on). If the partitioner is static, the callback is run during the static pre-pass. Otherwise, it is evaluated during the cook after all input work items have been generated. The list of upstream work items eligible for partitioning is passed to the function through the work_items argument. The partition_holder argument is an instance of the pdg.PartitionHolder class and is used to create partitions.
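A minimal skeleton for the callback might look like the following sketch. It assumes pdg.result.Success is the appropriate value to return for a successful partitioning pass; check the pdg.result enumeration for the full set of status values.

def onPartition(self, partition_holder, work_items):
    # Assign the upstream work items to partitions here using the
    # partition_holder argument, as shown in the examples below.

    # Report the status of the partitioning operation back to PDG.
    # pdg.result.Success is assumed here as the success value.
    return pdg.result.Success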

Each partition is defined using a unique numeric value supplied by the onPartition function. Work items are added by calling the addItemToPartition method with the work item itself and the partition number:

# Add each work item to its own unique partition
partition_holder.addItemToPartition(work_items[0], 0)
partition_holder.addItemToPartition(work_items[1], 1)

# Add both work items to a third, common partition
partition_holder.addItemToPartition(work_items[0], 2)
partition_holder.addItemToPartition(work_items[1], 2)

You can add a work item to multiple partitions, or to none at all. Sometimes a node needs to add a work item to all partitions before it knows how many partitions will be created. The addItemToAllPartitions method marks a work item as belonging to all partitions, including partitions that are created after that call is made.
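A possible use of the method looks like this: the first work item is marked as belonging to every partition, so the partition created afterwards still includes it.

# Mark the first work item as a member of every partition, including
# partitions created later in this callback
partition_holder.addItemToAllPartitions(work_items[0])

# This new partition also contains work_items[0]
partition_holder.addItemToPartition(work_items[1], 0)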

You can also mark a work item as required for its partition. If that work item is deleted, the entire partition is also deleted, even if other work items in the partition still exist. For example, the Partition by Combination node uses this behavior when creating partitions from pairs of upstream work items. If one of the work items in a pairing is deleted, the partition is no longer valid because it no longer represents a pair.
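A work item is flagged as required when it is added to its partition. The sketch below assumes the flag is passed as the optional third argument to addItemToPartition, matching the pairing example that follows:

# Add the work item to partition 0 and mark it as required; if this
# work item is later deleted, partition 0 is deleted along with it
partition_holder.addItemToPartition(work_items[0], 0, True)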

The following code is a possible implementation of an onPartition function that forms a partition for each unique pair of input work items:

partition_index = 0

# Outer loop over the work items
for index1, item1 in enumerate(work_items):

    # Inner loop over the work items
    for index2, item2 in enumerate(work_items):

        # We want to have only one partition for each pair, no matter what
        # the order. If we don't have this check we'll get a partition for
        # both (a,b) and for (b,a).
        if index2 <= index1:
            continue

        # Add both items to the next available partition, and flag the items
        # as required
        partition_holder.addItemToPartition(item1, partition_index, True)
        partition_holder.addItemToPartition(item2, partition_index, True)

        partition_index += 1
