Maximizing LOP Performance

Evaluating Parameters

As mentioned in LOP Cook Methods, LOP nodes should evaluate parameters early in the cook method. In particular, parameters should never be evaluated while there are any lock objects held on any LOP node stages. The reason for this is that users can set parameters to evaluate arbitrary expressions. If an expression looks at the stage generated by a LOP node, evaluating that expression requires locking that LOP node's stage. If the node locked in the cook method and the node that must be locked to evaluate the expression share an underlying USD stage, this is a lock conflict, which incurs an expensive recalculation.

It is also possible to lock and unlock the editable stage several times during the cook method, and it is safe to evaluate parameters between these locked sections. But there is a relatively high cost to locking and unlocking, especially if the parameters have expressions that reference data from other LOP nodes.
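
For example, a cook method following this guidance might be structured like the minimal sketch below. The node class and parameter names are hypothetical, and the lock usage follows the pattern described in LOP Cook Methods:

    OP_ERROR
    LOP_MyNode::cookMyLop(OP_Context &context)
    {
        if (cookModifyInput(context) >= UT_ERROR_FATAL)
            return error();

        // Evaluate all parameters first. Expressions on these
        // parameters may look at other LOP nodes' stages, which
        // requires locking those stages.
        UT_String primpattern;
        evalString(primpattern, "primpattern", 0, context.getTime());
        fpreal value = evalFloat("value", 0, context.getTime());

        // Only now take the write lock on the editable stage. No
        // parameter evaluation should happen while this lock is held.
        HUSD_AutoWriteLock writelock(editableDataHandle());

        // ... author opinions using the values evaluated above ...

        return error();
    }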

What To Avoid

One of the most expensive operations in USD is adding or removing a sublayer. Doing so forces a recomposition of the entire stage, and is likely to invalidate and re-sync every Hydra primitive. This applies whether the sublayer is being added directly to the root layer or to an existing sublayer of the root layer. However, adding a sublayer to a layer that is referenced onto the stage will only invalidate the prims under the referencing location (and so is likely to be much less expensive).

LOPs works very hard to avoid adding and removing sublayers for this reason. It is worth noting that once you are adding or removing sublayers, adding or removing multiple sublayers is no worse than adding or removing one. So LOPs always keeps a supply of "placeholder" layers on the root layer of the stage. These placeholder layers are always the N strongest sublayers on the stage, ready to be used as the Active Layer when a LOP node requests the addition of a new LOP Layer to the stage. When the last placeholder layer gets consumed, a number of new placeholder layers are added (rather than just the one new active layer requested by the LOP). This mechanism should be completely invisible to the user, and will be invisible to a LOP node developer as well, as long as they don't iterate through every sublayer on the root layer. In that case, the HUSDisLayerPlaceholder() function can be used to identify an unused placeholder layer.
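
A node that does need to walk the root layer's sublayer stack can skip placeholders as in this sketch. It assumes HUSDisLayerPlaceholder() accepts a layer identifier string; check HUSD/HUSD_Utils.h for the exact overloads available in your HDK version:

    #include <HUSD/HUSD_Utils.h>
    #include <pxr/usd/sdf/layer.h>

    PXR_NAMESPACE_USING_DIRECTIVE

    static void
    visitRealSublayers(const SdfLayerHandle &rootlayer)
    {
        auto sublayerpaths = rootlayer->GetSubLayerPaths();
        for (size_t i = 0; i < sublayerpaths.size(); ++i)
        {
            std::string id = sublayerpaths[i];
            // Skip the unused placeholder slots maintained by LOPs.
            if (HUSDisLayerPlaceholder(id))
                continue;
            // ... process the real sublayer identified by "id" ...
        }
    }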

Active Layer Size

When cooking a LOP node, releasing a write lock causes the content of the active layer to be stored with the LOP node. This caching of the cooked results is the per-node storage cost of each node in the chain. Since a LOP node that isn't forced to add a new LOP Layer will simply edit a copy of the input LOP's active layer, the cost of stashing the active layer grows with each node in a chain. This is not normally a concern, but it does suggest that it may be worthwhile for a LOP node that authors an active layer with many thousands of opinions to start a new active layer, forcing the next node in the chain to edit this new, empty active layer instead of writing into (and caching) the previous, highly populated layer.
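
As a rough illustration, a node could estimate the size of its active layer before deciding to request a fresh one. The spec counting below uses the standard Sdf API, but requestNewActiveLayer() is a hypothetical stand-in for whatever mechanism the node uses to start a new LOP Layer:

    #include <pxr/usd/sdf/layer.h>
    #include <pxr/usd/sdf/path.h>

    PXR_NAMESPACE_USING_DIRECTIVE

    static bool
    activeLayerIsLarge(const SdfLayerHandle &active, size_t threshold)
    {
        // Traverse visits every prim, property, and other spec path
        // authored in the layer, which gives a reasonable proxy for
        // the cost of copying and caching it.
        size_t nspecs = 0;
        active->Traverse(SdfPath::AbsoluteRootPath(),
                         [&nspecs](const SdfPath &) { ++nspecs; });
        return nspecs > threshold;
    }

    // In the cook method, after authoring this node's opinions:
    //     if (activeLayerIsLarge(activelayer, 10000))
    //         requestNewActiveLayer();  // hypothetical helper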

Starting a new active layer obviously has implications for the final USD output generated by a ROP. However, the USD ROP's default "Flatten Implicit Layers" behavior causes unnamed LOP Layers to be merged before writing to disk, resulting in the same layer content as if a single LOP Layer had been authored by the chain of nodes.

Checkpoints

LOP node checkpointing is a feature that can be used inside a LOP cook method. It is useful in the cook methods of LOP nodes that perform a series of sequential or separable operations, and where it is expected that the node will be cooked repeatedly as the user alters its parameters or referenced data sources. Checkpointing simply saves the current state of the active layer under a name, and allows that named checkpoint to be restored. This mechanism can of course be implemented internally by any specific LOP node, but the advantage of the built-in checkpointing system is that it automatically clears the checkpoints when the incoming LOP stage is dirtied (which in most cases means the validity of the checkpoint can't be assured).

To create a checkpoint, call LOP_Node::createCheckpoint, supplying a lock object for the data handle to be stashed, and a name that can be used to retrieve the checkpoint later. Using a checkpoint generally involves testing whether a particular named checkpoint exists using LOP_Node::getAvailableCheckpoints or LOP_Node::restoreCheckpoint. If the checkpoint exists, the cook method then skips the operations known to already be accurately reflected in the checkpointed active layer.
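
Putting these pieces together, a cook method using a single checkpoint might look like the following sketch. The checkpoint name is arbitrary, and the exact createCheckpoint()/restoreCheckpoint() signatures should be confirmed against LOP_Node.h; this assumes both take the write lock and a name:

    OP_ERROR
    LOP_MyNode::cookMyLop(OP_Context &context)
    {
        if (cookModifyInput(context) >= UT_ERROR_FATAL)
            return error();

        // (Parameters are evaluated here, before locking, as
        // described above.)
        HUSD_AutoWriteLock writelock(editableDataHandle());
        static const UT_StringHolder theCheckpoint("expensive_phase");

        if (!restoreCheckpoint(writelock, theCheckpoint))
        {
            // No valid checkpoint: perform the expensive, rarely
            // changing operations, then stash the result.
            // ...
            createCheckpoint(writelock, theCheckpoint);
        }

        // The cheap operations that change on most cooks run every
        // time, on top of the checkpointed active layer.
        // ...

        return error();
    }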

Houdini ships with a few LOP nodes that use checkpointing. The Material Library LOP creates a checkpoint at the end of its cook operation. The next time it cooks, if the checkpoint exists, it can retranslate only those VOP nodes which have changed since the LOP last cooked, rather than retranslating the entire VOP network. This can be an enormous time savings for VOP networks with hundreds or thousands of nodes.

The Stage Manager LOP uses checkpoints somewhat differently. It flattens the input node's layers, then performs a sequence of many, many operations on the resulting layer. So it takes a checkpoint of the flattened stage, and then makes a second checkpoint after applying most of its sequence of operations. When the user is interacting with the Stage Manager UI, 99% of the time they are adding a new operation or altering one of the last few operations. So on each cook, the Stage Manager can restore the second checkpoint and just apply the last ten or twenty operations. It also monitors itself so that if any parameters included in the second checkpoint are changed, it deletes that second checkpoint, but can still use the first, flattened-stage checkpoint. Only if the input node changes does the Stage Manager need to cook from scratch.
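
The resulting cascade looks roughly like this sketch, using the same hypothetical signatures and checkpoint names as the earlier example:

    if (restoreCheckpoint(writelock, "after_most_ops"))
    {
        // Common case: re-apply only the last few operations.
    }
    else if (restoreCheckpoint(writelock, "flattened_input"))
    {
        // Some checkpointed parameters changed: re-apply the full
        // operation sequence to the flattened stage, then re-create
        // the "after_most_ops" checkpoint.
    }
    else
    {
        // Input changed: flatten the input, create the
        // "flattened_input" checkpoint, apply all operations, then
        // create the "after_most_ops" checkpoint.
    }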