Houdini 18.0 | How to Run Tasks

Partitioner Node Callbacks

Partitioner nodes group multiple upstream work items into single partitions.


Overview

Partitioner nodes are the mechanism that a PDG graph uses to group together multiple upstream work items. These groups of work items produced by the node are called partitions, which are themselves a special type of work item that directly depends on the items in that partition. Partitions also inherit their attributes and output file list from those work items. The Advanced tab of each partitioner node has parameters for controlling how upstream attributes are copied onto the partition.

PDG includes a number of built-in partitioner nodes that can be used to group work items by properties such as their attribute values, index, frame, or node topology. The Partition by Expression or Python Partitioner nodes can be used to write custom partitioning logic for cases that aren't handled by the shipped nodes. It is also possible to write your own custom partitioner node as a standalone Python script.

Much like a processor node, a partitioner can be either Static or Dynamic. Static partitioners perform their grouping logic during the static cook pre-pass. The input to a static partitioner is the list of all static work items across all input connections. If a static partitioner has an input node that is dynamic, it skips that node and traverses upward until it finds a node with static work items. Dynamic partitioners evaluate their grouping logic once all input nodes have generated their work items. This means that a dynamic partitioner has to wait for all nodes two levels upstream to be cooked before partitioning its input work items.

Partition Attributes

Partitioner nodes are currently not able to add custom attributes onto partitions. Partitions inherit their attributes and output files from the work items in the partition, based on the parameters on the Advanced tab of the node. If the Merge Input Attributes parameter is off, the partition does not inherit any attributes, but still has all of the output files from the items in the partition copied to its own output list. If the merge parameter is enabled, attributes from the work items are merged into the partition's attributes. The documentation for each partitioner node includes more details on the purpose of each parameter, for example the Python Partitioner.

Merging works by first sorting the work items based on the sort parameters on the partitioner node. PDG then iterates over the sorted items and copies attribute values from them onto the partition. If an attribute already exists on the partition, the incoming value is ignored. For example, if all of the work items have the same set of attribute names, only the attribute values from the first work item in the sorted list are copied onto the partition. If the second work item in the sorted list has an attribute that the first item does not have, that attribute is also copied, and so on. The sort order used in the merge process also determines the order of the output files on the partition.
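The first-wins merge rule described above can be sketched in plain Python. This is only an illustration of the documented behavior, not the pdg implementation; the dict-based work items are hypothetical stand-ins:

```python
# Plain-Python sketch of the "first item wins" attribute merge.
# Work items are modeled as dicts purely for illustration.

def merge_partition_attributes(sorted_work_items):
    """Merge attributes from work items already sorted by the
    partitioner's sort parameters. The first item to define an
    attribute wins; later values for the same name are ignored."""
    merged = {}
    output_files = []
    for item in sorted_work_items:
        for name, value in item["attributes"].items():
            # An attribute that already exists on the partition is skipped
            merged.setdefault(name, value)
        # Output files are collected in sort order
        output_files.extend(item["outputs"])
    return merged, output_files

items = [
    {"attributes": {"frame": 1, "res": 512}, "outputs": ["a.exr"]},
    {"attributes": {"frame": 2, "extra": True}, "outputs": ["b.exr"]},
]
attrs, outputs = merge_partition_attributes(items)
# The second item's "frame" value is ignored; its "extra" attribute
# and output file are still merged in.
```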

Node Callbacks

Partitioner nodes have a single callback method that receives the list of upstream work items as an input. The callback function is expected to return a pdg.result value that indicates the status of the partitioning operation.

onPartition(self, partition_holder, work_items) → pdg.result

This callback is evaluated once for each partitioner during the cook of a PDG graph. If the partitioner is static, the callback is run during the static pre-pass; otherwise it is evaluated during the cook, once all input work items have been generated. The list of upstream work items eligible for partitioning is passed to the function through the work_items argument. The partition_holder argument is an instance of the pdg.PartitionHolder class and is used to create partitions.

Each partition is defined using a unique numeric value supplied by the onPartition function. Work items are added by calling the addItemToPartition function with the work item itself and the partition number:

# Add each work item to its own unique partition
partition_holder.addItemToPartition(work_items[0], 0)
partition_holder.addItemToPartition(work_items[1], 1)

# Add both work items to a third, common partition
partition_holder.addItemToPartition(work_items[0], 2)
partition_holder.addItemToPartition(work_items[1], 2)
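A common pattern is to map each distinct value of some attribute to its own partition number. The sketch below is hypothetical: the `cluster` attribute name and the stand-in classes for pdg.WorkItem and pdg.PartitionHolder are invented so the example runs outside Houdini; in a real node only the onPartition body would be written:

```python
# Hypothetical sketch: partition work items by a "cluster" value.
# MockItem and MockHolder stand in for pdg.WorkItem and
# pdg.PartitionHolder so the example is self-contained.

class MockItem:
    def __init__(self, name, cluster):
        self.name = name
        self.cluster = cluster

class MockHolder:
    def __init__(self):
        self.partitions = {}
    def addItemToPartition(self, work_item, index, is_required=False):
        self.partitions.setdefault(index, []).append(work_item)

def onPartition(self, partition_holder, work_items):
    # Assign each distinct cluster value the next free partition number
    value_to_partition = {}
    for item in work_items:
        value = item.cluster
        if value not in value_to_partition:
            value_to_partition[value] = len(value_to_partition)
        partition_holder.addItemToPartition(item, value_to_partition[value])
    return True  # stand-in; a real callback returns a pdg.result value

holder = MockHolder()
work_items = [MockItem("a", 0), MockItem("b", 1), MockItem("c", 0)]
onPartition(None, holder, work_items)
# Items "a" and "c" share partition 0; item "b" gets partition 1.
```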

It is possible to add a work item to multiple partitions, or to none of them. Sometimes a node may wish to add a work item to all partitions before it knows how many partitions will be created. The addItemToAllPartitions method marks a work item as belonging to all partitions, including ones that are created after the call is made.
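That ordering guarantee can be illustrated with a small stand-in holder (a hypothetical model of pdg.PartitionHolder, not the real class) that resolves the flagged items into every partition at the end:

```python
# Stand-in illustrating addItemToAllPartitions semantics: a flagged
# item joins every partition, even ones created after the call.

class MockHolder:
    def __init__(self):
        self.partitions = {}
        self.all_items = []
    def addItemToPartition(self, item, index, is_required=False):
        self.partitions.setdefault(index, []).append(item)
    def addItemToAllPartitions(self, item):
        self.all_items.append(item)
    def resolve(self):
        # Flagged items are appended to every partition when the
        # partitioning result is finalized
        return {index: items + self.all_items
                for index, items in self.partitions.items()}

holder = MockHolder()
holder.addItemToPartition("a", 0)
holder.addItemToAllPartitions("shared")
holder.addItemToPartition("b", 1)   # partition created after the call
result = holder.resolve()
# "shared" appears in both partition 0 and partition 1.
```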

A work item can also be marked as a "requirement" for its partition. If that work item is deleted, the entire partition is also deleted, even if other work items in the partition still exist. For example, the Partition by Combination node uses this behavior when creating partitions from pairs of upstream work items. If one of the work items in a pair is deleted, the partition is no longer valid because it no longer represents a pair. The following code is a possible implementation of an onPartition function that forms a partition for each unique pair of input work items:

partition_index = 0

# Outer loop over the work items
for index1, item1 in enumerate(work_items):

    # Inner loop over the work items
    for index2, item2 in enumerate(work_items):

        # We want to have only one partition for each pair, no matter what
        # the order. If we don't have this check we'll get a partition for
        # both (a,b) and for (b,a).
        if index2 <= index1:
            continue

        # Add both items to the next available partition, and flag the items
        # as required
        partition_holder.addItemToPartition(item1, partition_index, True)
        partition_holder.addItemToPartition(item2, partition_index, True)

        partition_index += 1
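The nested loops above enumerate each unordered pair exactly once, which is the same iteration that itertools.combinations produces. A more compact sketch of the same logic (with a hypothetical recording holder standing in for pdg.PartitionHolder so it runs outside Houdini):

```python
import itertools

class RecordingHolder:
    """Minimal stand-in for pdg.PartitionHolder, for this sketch only."""
    def __init__(self):
        self.partitions = {}
    def addItemToPartition(self, item, index, is_required=False):
        self.partitions.setdefault(index, []).append(item)

def onPartition(self, partition_holder, work_items):
    # Each unique unordered pair forms one partition; both members
    # are flagged as required (third argument True).
    for partition_index, (item1, item2) in enumerate(
            itertools.combinations(work_items, 2)):
        partition_holder.addItemToPartition(item1, partition_index, True)
        partition_holder.addItemToPartition(item2, partition_index, True)
    return True  # stand-in; a real callback returns a pdg.result value

holder = RecordingHolder()
onPartition(None, holder, ["a", "b", "c"])
# Three partitions: (a,b), (a,c), (b,c)
```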


Next steps

  • How to run external programs

    How to wrap external functionality in a TOP node.

  • File tags

    Work items track the "results" produced by their work. Each result is tagged with a type.

  • Feedback loops

    You can use for-each blocks to loop over a series of operations on work items.

  • Command servers

    Command blocks let you start a remote process (for example, a Houdini or Maya instance), send the server commands, and shut the server down.

  • PDG Service Manager

    The PDG Service Manager maintains a pool of persistent Houdini sessions used to reduce work item cook times.

  • Integrating PDG with render farm schedulers

    How to schedule and run work using the different schedulers.

  • Visualizing work item performance

    How to visualize the relative cook times (or file output sizes) of the work items in a network.

  • Event Handling

    You can register Python functions to handle events from a PDG node or graph.

  • Tips and tricks

    Useful general information and best practices for working with TOPs.

Reference

  • All TOPs nodes

    TOP nodes define a workflow where data is fed into the network, turned into "work items", and manipulated by different nodes. Many nodes represent external processes that can be run on the local machine or a server farm.

  • Processor Node Callbacks

    Processor nodes generate work items that can be executed by a scheduler.

  • Partitioner Node Callbacks

    Partitioner nodes group multiple upstream work items into single partitions.

  • Scheduler Node Callbacks

    Scheduler nodes execute work items.

  • Python API

    The classes and functions in the Python pdg package for working with dependency graphs.