HDK
COP Concepts

Data flow in COPs is somewhat more complicated than in other nodes. It still follows the same recursive input-cooking scheme as other nodes, but the implementation is quite a bit different. COPs are optimized for a multithreaded environment, so most of the cooking foundation reflects this.

COPs process image data, which is divided up into frames, planes and components. A frame represents all the image data at a specific time. Planes divide the various types of image data into vectors and scalars, such as color, alpha, normals, depth and masks. Vector planes are further divided into components. Each plane may have its own data type, such as 8b or 32b floating point, which all its components share.
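To make the frame/plane/component relationship concrete, here is a small sketch of how a plane's memory footprint follows from its component count and shared data format. These are plain C++ structs for illustration, not HDK classes:

```cpp
#include <cstddef>

// Hypothetical plane descriptor (not an HDK class): a plane has a
// component count (vector size) and a per-component bit depth, which
// all of its components share.
struct PlaneDesc
{
    int components;     // e.g. 3 for color (RGB), 1 for alpha or depth
    int bitsPerComp;    // e.g. 8 for 8b integer, 32 for 32b float
};

// Bytes needed to store one plane of a frame at the given resolution.
size_t planeBytes(const PlaneDesc &p, int xres, int yres)
{
    return size_t(xres) * yres * p.components * (p.bitsPerComp / 8);
}
```

For example, a single-component 32b float depth plane needs four times the memory per component of an 8b plane at the same resolution.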

Each COP processes a sequence of images. A sequence has a fixed start and end frame, as well as a constant frame rate. All the images in the sequence share the same resolution and plane composition. These attributes cannot be animated.

A sequence can have extend conditions, which describe how the sequence should behave when cooked outside its frame range. These conditions are black, hold, hold for N frames, repeat and mirror. There is a special type of sequence called a "Still Image", which can be used for static background plates. It is a single-image sequence that is available at any time.
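The extend conditions above can be sketched as a mapping from a requested frame index to a source frame. The enum names and helper below are illustrative arithmetic only, not HDK symbols; -1 stands in for a black frame:

```cpp
#include <algorithm>

// Hypothetical extend-condition enum (not the HDK's).
enum ExtendCond { EXT_BLACK, EXT_HOLD, EXT_REPEAT, EXT_MIRROR };

// Map a requested frame to a source frame in [0, length-1],
// or return -1 to indicate a black frame.
int mapFrame(int frame, int length, ExtendCond cond)
{
    if (frame >= 0 && frame < length)
        return frame;                   // inside the sequence range
    if (length == 1)
        return (cond == EXT_BLACK) ? -1 : 0;

    switch (cond)
    {
    case EXT_BLACK:  return -1;
    case EXT_HOLD:   return std::min(std::max(frame, 0), length - 1);
    case EXT_REPEAT: return ((frame % length) + length) % length;
    case EXT_MIRROR:
    {
        int period = 2 * (length - 1);  // ping-pong period
        int m = ((frame % period) + period) % period;
        return (m < length) ? m : period - m;
    }
    }
    return -1;
}
```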

HDK_DataFlow_COP2_data.png
Data processed by COPs is divided into frames, planes and components. Frame size is constant, but the canvas can vary per plane and frame.

The resolution of the sequence determines the size of the frame bounds, which are always constant and cannot be animated. However, each plane's image has a canvas, which is where the image data actually exists. The canvas can be contained by the frame bounds, surround them, or be disjoint from them. For example, a small garbage matte in the corner of an image may have only a small canvas containing it, while blurring an image could grow its canvas beyond the frame bounds to accommodate the blur's falloff. The canvas can be a different size for different frames and for different planes. The bottom-left corner of the frame bounds is always (0,0), and the canvas bounds are defined relative to that origin.

When writing images, the canvas outside the frame bounds is normally cropped away, though it can be optionally written to an image format that supports a data window, if the user desires.
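The default cropping behavior can be sketched as a rectangle intersection between the canvas and the frame bounds. These are plain structs for illustration, not HDK types:

```cpp
#include <algorithm>

// Minimal axis-aligned bounds with inclusive corners, where (0,0) is
// the bottom-left of the frame bounds, as in COPs.
struct Bounds { int x1, y1, x2, y2; };

// Crop a canvas to the frame bounds, as happens by default when writing
// to a format without a data window. Returns false if they are disjoint.
bool cropToFrame(const Bounds &canvas, int xres, int yres, Bounds &out)
{
    out.x1 = std::max(canvas.x1, 0);
    out.y1 = std::max(canvas.y1, 0);
    out.x2 = std::min(canvas.x2, xres - 1);
    out.y2 = std::min(canvas.y2, yres - 1);
    return out.x1 <= out.x2 && out.y1 <= out.y2;
}
```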

HDK_DataFlow_COP2_large_canvas1.png
Relationship between the frame, canvas, and tiles.

The canvas is further divided into tiles for processing. By default, these tiles have a size of 200x200. Tiles at the top and right edges of the canvas may be partial tiles if the canvas does not tile evenly (which is normally the case).
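The tiling arithmetic can be sketched as follows. The TILE_SIZE constant matches the default mentioned above; the helpers themselves are illustrative, not HDK functions:

```cpp
// Default tile size along each axis.
const int TILE_SIZE = 200;

// Number of tiles needed to cover a canvas dimension, with a partial
// tile at the top or right edge when the size does not divide evenly.
int tilesAcross(int canvasSize)
{
    return (canvasSize + TILE_SIZE - 1) / TILE_SIZE;   // ceiling division
}

// Size of the last (possibly partial) tile along a dimension.
int lastTileSize(int canvasSize)
{
    int rem = canvasSize % TILE_SIZE;
    return rem ? rem : TILE_SIZE;
}
```

A 1920-pixel-wide canvas, for instance, needs ten columns of tiles, the last of which is only 120 pixels wide.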

COP nodes do not own image data - all tiles belong to a global COP cook cache. This allows for better memory optimization of large image sequences, and improves interactivity.

COP processing

Unlike most other nodes, which cook all the data they contain in one cook, COPs cook only what is required to satisfy the requested output image. So, if only color is required from a sequence that contains color, alpha and depth, only color will be cooked. This behavior also extends to areas of the image. The canvas can grow quite large after many transforms and filters have been applied, but if an operation only needs a subregion of that canvas (such as for a crop or scale), only that subregion will be cooked.
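As a sketch of this, the set of tiles touched by a required area follows directly from the tile size. The helper below is illustrative, not an HDK call:

```cpp
#include <vector>
#include <utility>

// Given a required area in canvas space (inclusive pixel bounds), list
// the (column, row) indices of the 200x200 tiles that must be cooked.
std::vector<std::pair<int, int> >
tilesForArea(int x1, int y1, int x2, int y2)
{
    const int TS = 200;
    std::vector<std::pair<int, int> > tiles;
    for (int ty = y1 / TS; ty <= y2 / TS; ty++)
        for (int tx = x1 / TS; tx <= x2 / TS; tx++)
            tiles.push_back(std::make_pair(tx, ty));
    return tiles;
}
```

A 100x100 crop that straddles a tile boundary still needs four tiles cooked, while one aligned inside a single tile needs only that tile.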

HDK_DataFlow_COP2_large_canvas_2.png
The tiles that need to be cooked for a small required area.

Compositing occurs in three distinct stages:

  • Sequencing - All the characteristics of the image sequence are determined, such as frame range, resolution and plane composition.
  • Scheduling - Based on the output requested, a dependency tree is built to determine which tiles need to be cooked across the network and the order in which to cook them. The canvas sizes of all nodes are determined, as well as the required areas for the output being requested.
  • Compositing - The needed image data is computed to produce the output image.

The first stage works like most other nodes: the inputs are opened and the sequence information is computed recursively until all data is up to date. The sequence information is stored in a TIL_Sequence, which resides inside the COP2_Node (COP2_Node::mySequence). This stage works the same regardless of which output planes and area are requested. It occurs in the virtual method COP2_Node::cookSequenceInfo().

The second stage looks at the output image that is requested for the cook. It ensures that all canvas sizes are computed and builds all the required areas for each node. It then determines which required areas are currently cached and which areas need to be cooked. Each node being cooked is presented with the area required for the cook and asked to determine which areas it needs from its inputs. This occurs in the virtual method COP2_Node::getInputDependenciesForOutputArea(). The input image data may come from a different plane or frame.
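As a sketch of that negotiation, consider a blur: asked for an output area, it must report an input area enlarged by its filter radius. The struct and helper below are hypothetical stand-ins for what a node would report from COP2_Node::getInputDependenciesForOutputArea(), not the actual signature:

```cpp
// Illustrative area with inclusive pixel bounds (not an HDK type).
struct Area { int x1, y1, x2, y2; };

// A blur with the given radius needs 'radius' extra pixels of input
// on every side of the requested output area.
Area inputAreaForBlur(const Area &output, int radius)
{
    Area in;
    in.x1 = output.x1 - radius;
    in.y1 = output.y1 - radius;
    in.x2 = output.x2 + radius;
    in.y2 = output.y2 + radius;
    return in;
}
```

A transform or time-shift node would answer the same question differently, possibly requesting an area from another plane or frame, as noted above.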

Once all inputs have been processed in the network, the dependency tree is then ordered and optimized. The dependency tree looks similar to the COP network structure, though with multiple entries per node (one per plane to be cooked).

HDK_DataFlow_COP2_net.png
HDK_DataFlow_COP2_tree.png
How a network is broken down into a dependency tree for cooking C (color) for over1. While rotoshape and land_pic may have more planes, they are not used in this example and do not appear in the tree. More threads will attempt to cook rotoshape1 initially, since the scheduler attempts to serialize the file IO of the file COPs.

The last phase is the multithreaded image computation phase. Several worker threads traverse the tree, pulling off batches of tiles which need to be cooked for various nodes. Threads can work on different nodes concurrently or work together on one node (or in some combination). This provides good throughput for networks with lots of file IO, while allowing for cooperative processing of expensive nodes.

Each thread processes a single batch of tiles (TIL_Tile) at once, called a TIL_TileList. The tile list contains one to four tiles from one plane (depending on the vector size of the TIL_Plane). The virtual method COP2_Node::cookMyTile() can request image data from its inputs by using COP2_Node::inputTile() (for a single tile) or COP2_Node::inputRegion() (for a larger area). The image data must be contained within the area that was reported earlier in the second stage.
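As a sketch of the per-tile work (plain float buffers standing in for TIL_Tile data; this is not the actual cookMyTile() signature), a simple pixel operation applied to a batch of tiles might look like:

```cpp
#include <cstddef>

// Apply a gain to every pixel in a batch of tile buffers, one buffer
// per component, as a tile list holds one tile per plane component.
void brightenTiles(float *tiles[], int ntiles, size_t pixelsPerTile,
                   float gain)
{
    for (int t = 0; t < ntiles; t++)
        for (size_t i = 0; i < pixelsPerTile; i++)
            tiles[t][i] *= gain;
}
```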

Generally, you don't need to know how the scheduler is scheduling the cooking of nodes. For a single node, you only need to know what input image data you need to produce the output that is being requested. There are also hints that can be given to the scheduler, such as to avoid cooking the same node with more than one thread.

A GPU (video card) can also be used to process image data, though the principle is the same - one tile list is processed at a time on the GPU.

Note
Because of the order of the cooking stages, data computed from a later stage cannot be used in an earlier stage. For example, image data cannot be used to determine the canvas size, as image data is computed in the last stage, and the canvas is computed in the stage before that.

Retrieving a Cooked Image

To get a raster from a COP, the COP needs to be opened successfully (like a file), and the sequence information should be queried to see what data is available. Color and Alpha are always available, but other deep raster planes may also be present. Once a plane has been selected, COP2_Node::cookToRaster() can be called with an allocated TIL_Raster to cook the image. close() then needs to be called to clean up.

An example of cooking an image:

TIL_Raster *
getImageFromCop2(COP2_Node *node, float time, const char *pname = "C")
{
    short       key;
    TIL_Raster *image = NULL;

    if (node->open(key))
    {
        const TIL_Sequence *seq = node->getSequenceInfo();
        if (seq)
        {
            const TIL_Plane *plane = seq->getPlane(pname);
            int xres, yres;
            seq->getRes(xres, yres);
            if (plane)
            {
                image = new TIL_Raster(PACK_RGB, plane->getFormat(),
                                       xres, yres);
                if (seq->getImageIndex(time) == -1)
                {
                    // out of frame range - return a black frame
                    float black[4] = { 0, 0, 0, 0 };
                    image->clearNormal(black);
                }
                else
                {
                    OP_Context context(time);
                    context.myXres = xres;
                    context.myYres = yres;
                    if (!node->cookToRaster(image, context, plane))
                    {
                        // cook failed
                        delete image;
                        image = NULL;
                    }
                }
            }
        }
    }

    // close() must be called even if open() failed.
    node->close(key);
    return image;
}

In addition to grabbing the image as-is, you can specify a data format for the TIL_Raster that differs from the COP's native format. The resolution can also be reduced or enlarged, and the image will be cooked at that size.