Houdini Engine 6.2
Assets

Asset Library Files

Loading the Library File

The most common way to load assets is by first loading an asset library file (one of: .otl, .otllc, .hda, or .hdalc). This library file can contain multiple Houdini assets (HDAs).

The first thing to do is call HAPI_LoadAssetLibraryFromFile(). This will give you back a HAPI_AssetLibraryId which is a handle to the library that was just loaded. Keep it safe. You can also use the equivalent memory based function, HAPI_LoadAssetLibraryFromMemory(). Keep in mind that this memory variant will still produce a file on disk somewhere so the performance benefits are minor at this time. It is purely here as a convenience.

Note
There is a functional difference between HAPI_LoadAssetLibraryFromFile() and HAPI_LoadAssetLibraryFromMemory() regarding the saving of HIP scene files. See Saving a HIP File for details, but in short: if you use HAPI_LoadAssetLibraryFromFile() the OTL will be referenced by the HIP file via its absolute path, while if you use HAPI_LoadAssetLibraryFromMemory() the OTL will be contained within the HIP file. This is only relevant for debugging purposes.
You can use the allow_overwrite parameter on either HAPI_LoadAssetLibraryFromFile() or HAPI_LoadAssetLibraryFromMemory() to control whether you want to allow overwriting asset definitions that have already been loaded from a different asset library file. If this flag is false and a clash is detected, the function will return with a HAPI_RESULT_ASSET_DEF_ALREADY_LOADED result code.
Both the file-based and memory-based load library functions will try to checkout a license. See Licensing.
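
For example, a minimal sketch of loading a library file might look like the following (the file path is a placeholder, and nullptr is used for the session argument to match the snippets in the Cooking section below):

HAPI_AssetLibraryId library_id = -1;
HAPI_Result result = HAPI_LoadAssetLibraryFromFile(
    nullptr,                    // session
    "path/to/my_asset.hda",     // placeholder path to your library file
    false,                      // allow_overwrite: fail on definition clashes
    &library_id );
if ( result == HAPI_RESULT_ASSET_DEF_ALREADY_LOADED )
{
    // An asset definition in this file clashes with one already loaded
    // from a different asset library file.
}
else if ( result != HAPI_RESULT_SUCCESS )
{
    // Handle other load failures (bad path, licensing, etc.).
}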

Query Assets in Library File

Next, query how many assets there are in the just-loaded library using HAPI_GetAvailableAssetCount(). Use the count to allocate an array of HAPI_StringHandles and feed it into HAPI_GetAvailableAssets() to get the actual asset names.
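
A sketch of that query, resolving each returned handle to an actual string (this reuses the ENSURE_SUCCESS macro defined in the Cooking section below):

int asset_count = 0;
ENSURE_SUCCESS( HAPI_GetAvailableAssetCount( nullptr, library_id, &asset_count ) );

HAPI_StringHandle * asset_names = new HAPI_StringHandle[ asset_count ];
ENSURE_SUCCESS( HAPI_GetAvailableAssets(
    nullptr, library_id, asset_names, asset_count ) );

for ( int i = 0; i < asset_count; ++i )
{
    int name_length = 0;
    ENSURE_SUCCESS( HAPI_GetStringBufLength( nullptr, asset_names[ i ], &name_length ) );

    char * name_buf = new char[ name_length ];
    ENSURE_SUCCESS( HAPI_GetString( nullptr, asset_names[ i ], name_buf, name_length ) );
    // name_buf now holds a full asset name, for example "Object/my_asset".
    delete[] name_buf;
}
delete[] asset_names;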

Node Creation

Finally, feed any asset's name into HAPI_CreateNode(), exactly as given by HAPI_GetAvailableAssets(), to actually create your asset node in the underlying Houdini scene. Pass -1 for the parent_node_id argument. It will figure out the parent node from the name.

Note
HAPI_CreateNode() will try to checkout a license. See Licensing.

Once HAPI_CreateNode() returns successfully, you will have a HAPI_NodeId (a typedefed int) which is a handle to the underlying asset node. Hang on to this handle as it's the only way to talk to HAPI about the node you just created.
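
As a sketch, assuming the library contains an asset named "Object/my_asset" (a placeholder name), node creation might look like:

HAPI_NodeId node_id = -1;
ENSURE_SUCCESS( HAPI_CreateNode(
    nullptr,            // session
    -1,                 // parent_node_id: let HAPI deduce the parent network
    "Object/my_asset",  // asset name exactly as returned by HAPI_GetAvailableAssets()
    nullptr,            // node_label: use the default label
    true,               // cook_on_creation
    &node_id ) );
// node_id is now your handle to this asset node for all further HAPI calls.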

If the cook_on_creation argument is true, HAPI will create the node and also cook it so that it is ready to use; this is really just calling HAPI_CookNode() for you. In threaded mode (that is, when use_cooking_thread was set to true in HAPI_Initialize()), HAPI_CreateNode() will still return immediately. See Cooking on how to check progress.

Note
If in threaded mode, HAPI_CreateNode() will always return HAPI_RESULT_SUCCESS immediately. You'll need to see Cooking and check the progress to find out if the asset instantiated and cooked successfully.

If you choose to set cook_on_creation to false you'll need to call HAPI_CookNode() first, before attempting to use the asset - see Cooking. However, you can still get some information about the instantiated asset via HAPI_GetAssetInfo() and you can use the asset node's HAPI_NodeId to change parameters on the asset before the first cook. To see if an asset node has ever been cooked there's a flag, HAPI_AssetInfo::hasEverCooked.
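
A sketch of that deferred-cook workflow (the parameter name "scale" is purely illustrative and must actually exist on your asset):

HAPI_NodeId node_id = -1;
ENSURE_SUCCESS( HAPI_CreateNode(
    nullptr, -1, "Object/my_asset", nullptr,
    false,              // cook_on_creation: defer the first cook
    &node_id ) );

// Adjust parameters before the first cook.
ENSURE_SUCCESS( HAPI_SetParmFloatValue( nullptr, node_id, "scale", 0, 2.0f ) );

// Cook explicitly when ready - see Cooking below.
ENSURE_SUCCESS( HAPI_CookNode( nullptr, node_id, nullptr ) );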

This HAPI_GetAssetInfo() function takes the HAPI_NodeId from the call to HAPI_CreateNode(), and fills in a HAPI_AssetInfo struct on successful return. HAPI_AssetInfo gives you basic, high level information about your asset such as the name of the asset, the asset label as defined in Houdini, the number of handles, and so on.
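
For instance, reading the basic asset information (the string handles are resolved the same way as in the asset-query sketch above):

HAPI_AssetInfo asset_info;
ENSURE_SUCCESS( HAPI_GetAssetInfo( nullptr, node_id, &asset_info ) );

// asset_info.nameSH and asset_info.labelSH are HAPI_StringHandles;
// resolve them with HAPI_GetStringBufLength() / HAPI_GetString().
// asset_info.handleCount gives the number of exposed handles and
// asset_info.hasEverCooked tells you whether a first cook has happened.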

To get the node information, as a HAPI_NodeInfo, of the asset node, call HAPI_GetNodeInfo(). For asset nodes, you will probably want to look at the HAPI_NodeInfo::uniqueHoudiniNodeId member. The idea here is that when saving a session in the host application, one might simply serialize all the data structures pertaining to HAPI, such as the HAPI_AssetInfo and HAPI_NodeInfo structs. On reloading the saved file, there might not be a session of Houdini Engine currently running, or, even if there is, it might have different things loaded in it. We then need to establish whether the asset that has just been reloaded has a corresponding valid entry on the Houdini Engine side. The HAPI_NodeInfo::id is not sufficient in this case, as it may clash with other nodes across different sessions. Here is where HAPI_NodeInfo::uniqueHoudiniNodeId comes in handy. Whenever any node is instantiated, Houdini Engine will also pass back Houdini's globally unique node id, which can be used later to determine whether the saved asset node has a valid counterpart on the Houdini Engine side. If not, the asset node needs to be re-instantiated.
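
A sketch of that validity check, assuming HAPI_IsNodeValid() (which compares a node id against a saved unique id) is available in your HAPI version; how you serialize the value is up to the host:

HAPI_NodeInfo node_info;
ENSURE_SUCCESS( HAPI_GetNodeInfo( nullptr, node_id, &node_info ) );
int saved_unique_id = node_info.uniqueHoudiniNodeId; // serialize this with your scene

// ... later, after the host scene has been reloaded ...
HAPI_Bool still_valid = false;
ENSURE_SUCCESS( HAPI_IsNodeValid( nullptr, node_id, saved_unique_id, &still_valid ) );
if ( !still_valid )
{
    // The saved node has no counterpart in this Houdini Engine session;
    // re-instantiate the asset with HAPI_CreateNode().
}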

Cooking

Once an asset node is created, it needs to be cooked before its results are ready to be used. This is done by calling HAPI_CookNode().

In threaded mode, this cook happens asynchronously. Your next call should be to HAPI_GetStatus(), with HAPI_STATUS_COOK_STATE as the status_type, which will return to you the current state of that cook. Keep calling HAPI_GetStatus() until it returns one of the READY states - a state less than or equal to HAPI_STATE_MAX_READY_STATE. Here's what the different READY states mean:

  • HAPI_STATE_READY: Everything cooked successfully without errors.
  • HAPI_STATE_READY_WITH_FATAL_ERRORS: Something really bad happened and the entire cook was halted as a result. For example, the asset library file (OTL/HDA) is corrupt and the asset node cannot even be instantiated. You should halt immediately in this case and not continue with the usual post-cook info queries.
  • HAPI_STATE_READY_WITH_COOK_ERRORS: One or more of the child nodes of the node being cooked reported errors. In such cases, the errors from each problem child node will be concatenated together, but the cook will continue, trying to cook all child nodes. You should display these errors as warnings because they are likely not fatal. For example, the asset node may simply be waiting for an external input to be connected.

You can get even better information on cook progress using HAPI_GetStatusStringBufLength() followed by HAPI_GetStatusString() for current cook step descriptions and HAPI_GetCookingCurrentCount() / HAPI_GetCookingTotalCount() for an idea of percentage completion.

Here is a code snippet to show the status update loop in threaded mode:

ENSURE_SUCCESS( HAPI_CookNode( nullptr, asset_id, nullptr ) );
int status;
do
{
    int statusBufSize = 0;
    ENSURE_SUCCESS( HAPI_GetStatusStringBufLength(
        nullptr,
        HAPI_STATUS_COOK_STATE, HAPI_STATUSVERBOSITY_ERRORS,
        &statusBufSize ) );

    char * statusBuf = NULL;
    if ( statusBufSize > 0 )
    {
        statusBuf = new char[ statusBufSize ];
        ENSURE_SUCCESS( HAPI_GetStatusString(
            nullptr, HAPI_STATUS_COOK_STATE,
            statusBuf, statusBufSize ) );
    }

    if ( statusBuf )
    {
        // Display the cook status to the user in your host application.
        delete[] statusBuf;
    }

    // Add a sleep here to avoid polling the status too often.

    HAPI_GetStatus( nullptr, HAPI_STATUS_COOK_STATE, &status );
} while ( status > HAPI_STATE_MAX_READY_STATE );
ENSURE_COOK_SUCCESS( status );

Here, ENSURE_SUCCESS and ENSURE_COOK_SUCCESS are macros:

#define ENSURE_SUCCESS( result ) \
    if ( (result) != HAPI_RESULT_SUCCESS ) \
    { \
        std::cout << "failure at " << __FILE__ << ":" << __LINE__ << std::endl; \
        std::cout << get_last_error() << std::endl; \
        exit( 1 ); \
    }

#define ENSURE_COOK_SUCCESS( result ) \
    if ( (result) != HAPI_STATE_READY ) \
    { \
        std::cout << "failure at " << __FILE__ << ":" << __LINE__ << std::endl; \
        std::cout << get_last_cook_error() << std::endl; \
        exit( 1 ); \
    }
static std::string
get_last_error()
{
    int buffer_length;
    HAPI_GetStatusStringBufLength(
        nullptr,
        HAPI_STATUS_CALL_RESULT, HAPI_STATUSVERBOSITY_ERRORS,
        &buffer_length );

    char * buf = new char[ buffer_length ];
    HAPI_GetStatusString(
        nullptr, HAPI_STATUS_CALL_RESULT, buf, buffer_length );

    std::string result( buf );
    delete [] buf;
    return result;
}

static std::string
get_last_cook_error()
{
    int buffer_length;
    HAPI_GetStatusStringBufLength(
        nullptr,
        HAPI_STATUS_COOK_RESULT, HAPI_STATUSVERBOSITY_ERRORS,
        &buffer_length );

    char * buf = new char[ buffer_length ];
    HAPI_GetStatusString(
        nullptr, HAPI_STATUS_COOK_RESULT, buf, buffer_length );

    std::string result( buf );
    delete [] buf;
    return result;
}
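
If you also want a rough percentage for a progress bar, the per-cook counts mentioned above can be polled inside the same status loop; a minimal sketch:

int current = 0, total = 0;
HAPI_GetCookingCurrentCount( nullptr, &current );
HAPI_GetCookingTotalCount( nullptr, &total );
if ( total > 0 )
{
    float percent = 100.0f * (float) current / (float) total;
    // Feed 'percent' into your host application's progress display.
}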

In single threaded mode, the cook happens inside of HAPI_CookNode(), and HAPI_GetStatus() will immediately return HAPI_STATE_READY.

In both cases, even if you completely omit calling HAPI_GetStatus(), the next API method that you call that relies on the cooked results will block until the cook is finished.

It is generally important to check the return codes and error messages of all API calls, but with HAPI_CookNode() it is especially important. If anything went wrong during the cook, most if not all subsequent API calls on this node will fail or cause other problems. See Return Codes and Error Strings for details.

Transforms

For all OBJ asset nodes there are two sets of transforms. First, there is the transform in the host environment, describing where the asset sits in the host. Underneath the covers, however, there is another transform, which indicates where this asset OBJ node sits inside Houdini Engine (the underlying Houdini scene). Usually, these won't match up. For example, a user could drag the asset to any location in the host without it moving in the Houdini scene.

It is important to make sure that the two sets of transforms - the one in the host and the corresponding transform in the Houdini scene - match up. To this end, there are two functions: one that queries where an asset OBJ node sits in the Houdini scene, and one that sets the location of the asset in the Houdini scene.

When using these functions, remember to account for the differences in axis orientation and handedness between your host application and Houdini. See Utility Functions.
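
The function names below (HAPI_GetObjectTransform() and HAPI_SetObjectTransform()) are an assumption based on the current HAPI API rather than a quote from this page; treat this as a rough sketch and see Object Transforms for the authoritative calls:

// Query where the asset's OBJ node currently sits in the Houdini scene.
HAPI_Transform houdini_xform;
ENSURE_SUCCESS( HAPI_GetObjectTransform(
    nullptr, node_id, -1 /* not relative to another object */,
    HAPI_SRT, &houdini_xform ) );

// Push a host-side transform back to Houdini (the values are placeholders;
// convert axis orientation and handedness before doing this for real).
HAPI_TransformEuler new_xform = {};
new_xform.rstOrder = HAPI_SRT;
new_xform.rotationOrder = HAPI_XYZ;
new_xform.scale[ 0 ] = new_xform.scale[ 1 ] = new_xform.scale[ 2 ] = 1.0f;
new_xform.position[ 0 ] = 1.0f;
ENSURE_SUCCESS( HAPI_SetObjectTransform( nullptr, node_id, &new_xform ) );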

All this is explained in more detail under: Object Transforms