Houdini Engine 3.2
Sessions

Introduction

The concept of sessions was introduced in Houdini Engine 2.x. In prior versions of Houdini Engine, the Houdini Engine dynamic library (libHAPI) linked directly to core components of Houdini, and Houdini Engine hosts, in turn, were typically linked to libHAPI. This meant that the Houdini Engine implementation ran in the host process.

This setup had two major disadvantages:

  • since libHAPI loaded core Houdini libraries in the host process and those libraries have multiple third-party dependencies, there was a high probability of them conflicting with the host application's own dependencies;
  • there was only one Houdini scene state supported per process.

Houdini Engine 3.x still supports this in-process scenario, and for simple hosts it's probably still a good choice. However, a new inter-process communication (IPC) mechanism is also available, making it possible for the host application to run one or more Houdini Engine sessions in its own process or in separate processes, in parallel, and even on another machine if desired.

Every such session is represented by a HAPI_Session parameter passed by pointer into most API calls.

Out-of-process

If on the other hand you want to take advantage of an IPC implementation, or want to be able to easily switch between the IPC and the in-process implementation with low overhead, you should link your application to libHAPIL instead. HAPIL stands for Houdini Engine API Loader and is a "shim" library which can load libHAPI, libHARC (our IPC client implementation) or even a custom user-supplied dynamic library.

libHAPIL exports functions declared in HAPI.h. Most of these functions are redirected to a dynamically loaded implementation library such as libHAPI or libHARC but others are special functions for loading libraries and creating sessions, and are implemented in libHAPIL itself.

If your application is linked to libHAPIL, you MUST explicitly create a session before calling any HAPI function that uses a session as an input parameter, including HAPI_Initialize().

For example, HAPI_CreateInProcessSession() loads libHAPI dynamically and returns a singleton in-process session in its output parameter. Please note that no Houdini libraries and dependencies are loaded in the host process until this function is called. This could be helpful for solving dependency problems.

To simplify migration, it is acceptable to pass a NULL pointer to represent the singleton in-process session after creating it with HAPI_CreateInProcessSession().
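The creation call can be sketched as follows. This is only an illustrative fragment, not the library's official sample: `createInProcessSession` is a hypothetical helper name, and the snippet assumes the application links against libHAPIL (it cannot run without a Houdini installation).

```cpp
#include <HAPI/HAPI.h>

// Hypothetical helper: obtain the singleton in-process session.
// Until this call, no Houdini libraries are loaded into the process.
static bool createInProcessSession( HAPI_Session & session )
{
    if ( HAPI_CreateInProcessSession( &session ) != HAPI_RESULT_SUCCESS )
        return false; // libHAPI (or one of its dependencies) failed to load

    // From here on, &session - or NULL, for this singleton only -
    // can be passed to HAPI_Initialize() and later HAPI calls.
    return true;
}
```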

If you'd like to use our Thrift-based IPC implementation, you can create a session with HAPI_CreateThriftSocketSession() (which uses TCP sockets as the transport layer) or HAPI_CreateThriftNamedPipeSession() (which uses named pipes on Windows or domain sockets on Unix-like OSes). The first time a Thrift session is created, libHAPIL loads libHARC - the client-side dynamic library implementing Thrift bindings for HAPI calls.

You can even implement your own custom IPC client library, bind it to one of custom slots in the HAPI_SessionType enumeration with HAPI_BindCustomImplementation() and create sessions specific to this implementation with HAPI_CreateCustomSession(), passing the required session creation parameters as a raw pointer.
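A sketch of that flow, under stated assumptions: the library name `libMyHapiClient.so` and the `MyClientParams` parameter block are hypothetical, since their layout is defined entirely by your custom implementation, and the fragment requires a Houdini installation to build.

```cpp
#include <HAPI/HAPI.h>

// Hypothetical parameter block; the bound library alone decides how
// to interpret the raw pointer passed to HAPI_CreateCustomSession().
struct MyClientParams { const char * host; int port; };

static bool createCustomSession( HAPI_Session & session )
{
    // Bind our hypothetical client library to one of the custom slots.
    if ( HAPI_BindCustomImplementation(
            HAPI_SESSION_CUSTOM1, "libMyHapiClient.so" ) != HAPI_RESULT_SUCCESS )
        return false;

    MyClientParams params = { "localhost", 9090 };
    return HAPI_CreateCustomSession(
        HAPI_SESSION_CUSTOM1, &params, &session ) == HAPI_RESULT_SUCCESS;
}
```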

Finally, when you're done with a Houdini Engine session, you can close it with HAPI_CloseSession(). In the libHARC implementation, this will close the client-server connection. If the session is not closed explicitly with HAPI_CloseSession() during the client's lifetime, it will be closed automatically when the client process terminates.

When a session is created with the auto-close option, the associated server process is terminated when the last session is closed. Note that on non-Windows systems it is necessary to handle the SIGCHLD event in the host application for the server process to fully terminate and return the license.

Note that multiple sessions can be open and multiple implementation libraries can be loaded simultaneously.

Here's a simple example showing how to start and connect to an out-of-process session, and initialize the Houdini Engine API.

// HARS server options
HAPI_ThriftServerOptions serverOptions{ 0 };
serverOptions.autoClose = true;
serverOptions.timeoutMs = 3000.0f;

// Start our HARS server using the "hapi" named pipe.
// This call can be skipped if you have already launched HARS manually.
if (HAPI_RESULT_SUCCESS != HAPI_StartThriftNamedPipeServer(&serverOptions, "hapi", nullptr))
    return false;

// Create a new HAPI session to use with that server
HAPI_Session session;
if (HAPI_RESULT_SUCCESS != HAPI_CreateThriftNamedPipeSession(&session, "hapi"))
    return false;

// Initialize HAPI
HAPI_CookOptions cookOptions = HAPI_CookOptions_Create();
if (HAPI_RESULT_SUCCESS != HAPI_Initialize(
    &session,      // session
    &cookOptions,  // cook options
    true,          // use_cooking_thread
    -1,            // cooking_thread_stack_size
    "",            // houdini_environment_files
    nullptr,       // otl_search_path
    nullptr,       // dso_search_path
    nullptr,       // image_dso_search_path
    nullptr ))     // audio_dso_search_path
{
    return false;
}

In-Process

If you are not interested in using IPC and would like to run Houdini Engine in-process, you can simply link your host application directly to libHAPI; you won't need to call any additional functions before you can successfully call HAPI_Initialize().

The libHAPI dynamic library contains the actual implementation of Houdini Engine functions and interfaces directly with the core Houdini components, but doesn't contain the implementation of IPC mechanisms.

If porting code from an earlier HAPI 1.x code base, you will only need to add a HAPI_Session pointer parameter to all HAPI calls. When running in-process, that extra session pointer parameter is ignored, so you may want to set it to NULL for consistency.

HARS (Houdini-Engine API Remote Server)

When using libHARC (the Thrift-based IPC library) on the client side, you need to start the server executable HARS, that is included in your Houdini installation. HARS is a console application with simple command-line arguments:

$ HARS -h
Allowed options:
  -h [ --help ]             Produce this help message
  -s [ --socket-port ] arg  The server port, if using a TCP socket session
  -n [ --named-pipe ] arg   The name of the pipe, if using a named pipe session
  -a [ --auto-close ]       Close the server automatically when all client
                            connections are closed
  -r [ --ready-handle ] arg Event to signal when the server is ready to serve
                            (for automated server startup)

HARS links directly to libHAPI and to core Houdini libraries and their dependencies. Since Thrift IPC is cross-platform, the host process (using libHAPIL and libHARC) and the server process (HARS) can be built for and run on different platforms.

You can start a HARS process from the command line or from a pipeline script with the desired options and then establish a client connection to the server in the C++ code of your application using the respective session creation function. Alternatively, you can use one of the convenience functions HAPI_StartThriftSocketServer() and HAPI_StartThriftNamedPipeServer() to start the server from C++ code before creating a session. Both of these functions block until the server signals that it's ready to serve, so once one of them returns success, it is safe to create a session with matching parameters.
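The socket-based variant of that flow can be sketched as below. This is an illustrative fragment only, not an official sample: port 9090 is an arbitrary choice, `startAndConnectSocketSession` is a hypothetical helper name, and building it requires a Houdini installation.

```cpp
#include <HAPI/HAPI.h>

// TCP socket counterpart of the named-pipe flow shown earlier.
static bool startAndConnectSocketSession( HAPI_Session & session )
{
    HAPI_ThriftServerOptions server_options{ 0 };
    server_options.autoClose = true;
    server_options.timeoutMs = 3000.0f;

    // Blocks until HARS signals readiness (or the timeout expires)...
    if ( HAPI_StartThriftSocketServer(
            &server_options, 9090, nullptr ) != HAPI_RESULT_SUCCESS )
        return false;

    // ...so it is now safe to connect with matching parameters.
    return HAPI_CreateThriftSocketSession(
        &session, "localhost", 9090 ) == HAPI_RESULT_SUCCESS;
}
```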

Please note that the HARS server currently only supports a single client connection. If a client is connected to the server and another client tries to connect, then the second client will block until the first connection is closed.

Houdini Engine Debugger

An extremely convenient way to inspect and debug Houdini Engine code is to use the Houdini Engine Debugger and connect to a live Houdini session.

The Houdini Engine Debugger creates a HARS process in an existing instance of Houdini that you can connect to as you normally would with a regular HARS session. It has the added benefit of letting you see the actions of your Houdini Engine code in real time, and interact with the session scene as you normally would when using Houdini.

Please note that it is possible to modify and use the Houdini scene normally during a debugger session, but the host application will not necessarily be aware of the changes you have made.

To create a Houdini Engine Debugger session:

  • In Houdini, open "Houdini Engine Debugger" from the Windows menu.
  • Set the session type and value (TCP socket / named pipe) to match the settings that your host application uses.
  • Press "Start" to start the Houdini Engine session. The Houdini Engine Debugger window can then be closed.
  • Use HAPI_CreateThriftNamedPipeSession() / HAPI_CreateThriftSocketSession() to connect to that session, just as you would with "standard" HARS sessions.
  • As you use your host application, you will be able to see your HDA nodes, geometry, input data, etc. appear in the viewport and network editor.
  • The session can be stopped or restarted by reopening the Houdini Engine Debugger and pressing "Stop".
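Connecting from the host side can be sketched as follows; this is a hypothetical fragment assuming the debugger window was configured with a named pipe called "hapi" (use whatever value you actually entered), and it requires a Houdini installation to build.

```cpp
#include <HAPI/HAPI.h>

// Connect to a debugger session started inside Houdini.
static bool connectToDebuggerSession( HAPI_Session & session )
{
    // No HAPI_StartThrift...Server() call here: the running Houdini
    // instance itself is already acting as the server.
    return HAPI_CreateThriftNamedPipeSession(
        &session, "hapi" ) == HAPI_RESULT_SUCCESS;
}
```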

Multithreading

Using the libHARC Thrift RPC implementation, one host application process can interact with multiple HAPI sessions, each representing its own HARS process with its separate Houdini state. This opens up new multithreading possibilities for applications using Houdini Engine. libHARC is designed to be thread-safe when using multiple sessions from multiple threads.

Access to each session is protected with a mutex, so multiple threads can access one session safely, but it's impossible to truly parallelize operations using just one session. The client code may also need to synchronize operations on a single session in cases when the order of operations is important. For example, when marshalling geometry, it doesn't matter in what order geometry attributes are set, but it's important that HAPI_CommitGeo() is called after all of them have been set.

With multiple sessions on the other hand, parallelization becomes possible because their Houdini contexts are completely independent. The following source sample illustrates how reading and writing volume data per tile can be parallelized between asynchronous tasks by using separate sessions for reading and writing.

class testharcSession
{
public:
testharcSession()
{
mySession.type = HAPI_SESSION_MAX;
mySession.id = 0;
}
virtual ~testharcSession()
{
if ( mySession.type != HAPI_SESSION_MAX )
{
HARC_TEST_SUCCESS( HAPI_IsSessionValid( &mySession ) );
HARC_TEST_SUCCESS( HAPI_CloseSession( &mySession ) );
HARC_TEST_ASSERT(
HAPI_RESULT_SUCCESS != HAPI_IsSessionValid( &mySession ) );
}
}
virtual void open() = 0;
const HAPI_Session* get()
{
return &mySession;
}
protected:
HAPI_Session mySession;
};
class testharcSimpleAutoSession : public testharcSession
{
public:
testharcSimpleAutoSession(
const char* session_name_suffix,
size_t task_id,
const HAPI_ThriftServerOptions& server_options
)
{
std::ostringstream pipe_name_os;
pipe_name_os << "/tmp/testharc_threading_"
<< session_name_suffix << task_id << '_'
#ifdef WIN32
<< GetCurrentProcessId();
#else
<< getpid();
#endif
myPipeName = pipe_name_os.str();
myServerOptions = server_options;
myServerOptions.autoClose = true;
}
virtual ~testharcSimpleAutoSession()
{
if ( mySession.type != HAPI_SESSION_MAX )
{
HARC_TEST_SUCCESS( HAPI_IsInitialized( &mySession ) );
HARC_TEST_SUCCESS( HAPI_Cleanup( &mySession ) );
#ifndef WIN32
::unlink(myPipeName.c_str());
#endif
}
}
virtual void open()
{
HARC_TEST_SUCCESS( HAPI_StartThriftNamedPipeServer(
&myServerOptions, myPipeName.c_str(), nullptr
));
HARC_TEST_SUCCESS( HAPI_CreateThriftNamedPipeSession(
&mySession,
myPipeName.c_str()
));
HAPI_CookOptions cook_options = HAPI_CookOptions_Create();
cook_options.splitGeosByGroup = false;
HARC_TEST_SUCCESS( HAPI_Initialize(
&mySession, &cook_options, true, -1,
nullptr, nullptr, nullptr, nullptr, nullptr
));
}
private:
HAPI_ThriftServerOptions myServerOptions;
std::string myPipeName;
};
class testharcCopyTileValues
{
public:
testharcCopyTileValues(
const HAPI_Session* dst_session,
const HAPI_Session* src_session,
HAPI_NodeId node_id,
HAPI_PartId part_id,
HAPI_NodeId input_node_id,
const HAPI_VolumeTileInfo& tile_info,
int tile_value_count
)
: myDstSession(dst_session)
, mySrcSession(src_session)
, myNodeId(node_id)
, myPartId(part_id)
, myInputNodeId(input_node_id)
, myTileInfo(tile_info)
, myTileValueCount(tile_value_count)
{
}
void operator()()
{
// Allocate tile data buffer.
std::vector<float> tile_values(
static_cast<size_t>(myTileValueCount), -5.8f
);
// Get the color data.
HARC_TEST_SUCCESS( HAPI_GetVolumeTileFloatData(
mySrcSession,
myNodeId, myPartId,
-8.8f,
&myTileInfo, &tile_values.front(), myTileValueCount
));
// Set the color data on the input volume.
HARC_TEST_SUCCESS( HAPI_SetVolumeTileFloatData(
myDstSession,
myInputNodeId,
0,
&myTileInfo, &tile_values.front(), myTileValueCount
));
}
private:
const HAPI_Session * myDstSession;
const HAPI_Session * mySrcSession;
HAPI_NodeId myNodeId;
HAPI_PartId myPartId;
HAPI_NodeId myInputNodeId;
HAPI_VolumeTileInfo myTileInfo;
int myTileValueCount;
};
void
testharcCopyVolume(
std::shared_ptr<testharcSession> src_session,
std::shared_ptr<testharcSession> dst_session
)
{
src_session->open();
// Load the library from file.
HAPI_AssetLibraryId library_id = -1;
HARC_TEST_SUCCESS( HAPI_LoadAssetLibraryFromFile(
src_session->get(), "HAPI_Test_Volumes_HoudiniFogColor.otl",
false, &library_id
));
HARC_TEST_ASSERT( library_id >= 0 );
// Instantiate the asset.
HAPI_NodeId node_id = -1;
HARC_TEST_SUCCESS( HAPI_CreateNode(
src_session->get(), -1, "Object/HAPI_Test_Volumes_HoudiniFogColor",
nullptr, true, &node_id
));
HAPI_GeoInfo geo_info;
HARC_TEST_SUCCESS( HAPI_GetDisplayGeoInfo(
src_session->get(), node_id, &geo_info ) );
// Get the part info for the second part which should be the red volume.
const HAPI_PartId part_id = 1;
// Get the part info.
HAPI_PartInfo part_info;
HARC_TEST_SUCCESS( HAPI_GetPartInfo(
src_session->get(), geo_info.nodeId, part_id, &part_info ) );
// Get the volume info.
HAPI_VolumeInfo volume_info;
HARC_TEST_SUCCESS( HAPI_GetVolumeInfo(
src_session->get(), geo_info.nodeId, part_id, &volume_info
));
HARC_TEST_ASSERT( volume_info.tupleSize == 1 );
HARC_TEST_ASSERT( !volume_info.hasTaper );
dst_session->open();
// Create input asset to receive volume.
HAPI_NodeId input_node_id = -1;
HARC_TEST_SUCCESS( HAPI_CreateInputNode(
dst_session->get(), &input_node_id, "Input_Volume"
));
// Set the part info.
HARC_TEST_SUCCESS( HAPI_SetPartInfo(
dst_session->get(), input_node_id, 0, &part_info ));
// Set the volume info.
HARC_TEST_SUCCESS( HAPI_SetVolumeInfo(
dst_session->get(), input_node_id, 0, &volume_info
));
// Get the first volume tile.
HAPI_VolumeTileInfo tile_info;
HARC_TEST_SUCCESS( HAPI_GetFirstVolumeTile(
src_session->get(), geo_info.nodeId, part_id, &tile_info
));
HARC_TEST_ASSERT( tile_info.isValid );
std::vector< std::future< void > > futures;
const int tile_value_count =
volume_info.tileSize *
volume_info.tileSize *
volume_info.tileSize *
volume_info.tupleSize;
while ( tile_info.isValid )
{
futures.push_back(
std::async(
std::launch::async,
testharcCopyTileValues(
dst_session->get(), src_session->get(),
geo_info.nodeId, part_id, input_node_id,
tile_info, tile_value_count
)
)
);
// Get the next color tile.
HARC_TEST_SUCCESS( HAPI_GetNextVolumeTile(
src_session->get(),
geo_info.nodeId, part_id,
&tile_info
));
}
for ( auto & future : futures )
future.get();
// Commit the volume inputs.
HARC_TEST_SUCCESS( HAPI_CommitGeo(
dst_session->get(), input_node_id
));
// Cook the asset.
HARC_TEST_SUCCESS( HAPI_CookNode(
dst_session->get(), input_node_id, nullptr
));
HARC_TEST_ASSERT( testharcVerifyInputVolume_HoudiniVolume(
dst_session->get(), input_node_id, 0
));
}

Finally, session management functions (the session creation functions, HAPI_IsSessionValid() and HAPI_CloseSession()) share a mutex. They don't interfere with normal HAPI functions called concurrently, with one exception: HAPI_CloseSession() invalidates the session, and passing it to a HAPI function afterwards results in undefined behaviour. You can always safely call HAPI_IsSessionValid() to determine whether a session is valid.
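That check can be sketched as a small guard; `isUsable` is a hypothetical helper name, and the fragment assumes a Houdini installation to build against.

```cpp
#include <HAPI/HAPI.h>

// Guard a worker thread against a session closed by another thread.
static bool isUsable( const HAPI_Session & session )
{
    return HAPI_IsSessionValid( &session ) == HAPI_RESULT_SUCCESS;
}
```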