
Mocap Stream geometry node

This SOP outputs live motion capture data from various devices.

Since 19.0

This SOP connects to motion capture software running on the local network and receives motion and/or face capture data in real time.

The process differs between devices, but in general the first step is to ensure that the motion capture software is running and set up to stream data. Then select the appropriate device in Houdini, adjust the networking settings if necessary, and hit Connect. If all goes well, you should see the animated skeleton in Houdini (or, for face capture, the output will contain detail attributes describing the blend shape values).
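For pipeline automation, the same setup can be scripted through Houdini's Python API. A minimal sketch, assuming the SOP's internal type name is mocapstream; the parameter names used below (device, port, connect) are hypothetical, so check the node's actual parameter names in your build:

    import hou

    # Create a geometry container and a Mocap Stream SOP inside it.
    # "mocapstream" is assumed to be the internal node type name.
    geo = hou.node("/obj").createNode("geo", "mocap")
    stream = geo.createNode("mocapstream")

    # Hypothetical parameter names; list the real ones with stream.parms().
    stream.parm("device").set(0)          # e.g. the first device in the menu
    stream.parm("port").set(9763)         # port your mocap software streams on
    stream.parm("connect").pressButton()  # same as clicking Connect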

To handle multiple actors, you can place down multiple Mocap Stream SOPs and connect them to the same server, then select the desired actor to output from each one.

The node warnings can help diagnose networking issues.

The node also has the option to record the incoming data; you can then either play back the recorded animation or output it as a MotionClip.

Tip

If you are experiencing latency issues on Wi-Fi, try using a direct Ethernet connection instead.

Tip

If you have problems connecting, try clicking the View Log button for more detailed information.

Parameters

Device

Device

The motion capture system to receive data from.

Faceware Studio

We will receive data from Faceware Studio. In Faceware Studio, open the 'Streaming Panel', where you can choose a port and your control schema. The control schema specifies the set of blend shapes that will be streamed into Houdini, so make sure to choose the set matching your model. After selecting the options, enable streaming.

In Houdini, enter the port you selected and the address of the machine running Faceware Studio, then connect.

OptiTrack (Motive)

We will connect to OptiTrack's Motive software. In Motive, use the data streaming pane to enable streaming and adjust the options. The defaults should work, though if Houdini is running on a different machine you'll need to switch the local interface away from loopback.

In Houdini, you should only have to set the Host IP to the address of the machine running Motive, then connect. You shouldn't need to change the other options unless you've changed the advanced settings in Motive, in which case you'll need to update them to match.

If the connection is not working, setting the Local IP parameter may help. This parameter explicitly sets the IP address of the machine running Houdini for the purpose of joining a multicast group. The dropdown menu for this parameter lists the different IP addresses that identify your machine; select the one at the appropriate network level for your connection.

Perception Neuron (Axis Studio)

We will connect to the Axis Studio (or older Axis Neuron) software. In Axis Studio, open the BVH Broadcasting settings. Choose UDP as your protocol, and enter the IP of the machine running Houdini as your destination.

Also ensure that you have the following settings selected:

  • 'YXZ' rotations with displacement

  • Binary frame format

  • 'NUE' coordinate system

Once you have enabled streaming, simply select the same port in Houdini and hit connect.

Qualisys

We will connect to the Qualisys Track Manager (QTM) software.

In QTM, load your scene, then set up the 'Real-Time Output' options. To do so, open Project Options and navigate to the 'Processing → Real-Time Output' page. This page can be used to set the ports for streaming. We recommend using the default port numbers.

In Houdini, enter the port you selected for the Base Port in QTM and the IP address of the machine running QTM. Then hit Connect.

Back in QTM, start the real-time output of your scene.

Rokoko Studio

We'll connect to the Rokoko Studio software. In Rokoko Studio, enable live streaming and select Houdini. Enter the IP of the machine running Houdini and the port you wish to connect on. Then, in Houdini, select the same port and hit connect.

Vicon (Shogun)

We'll connect to Vicon's Shogun software. There are currently no options to choose within Shogun. In Houdini, leave the default port, enter the IP of the machine running Shogun, and click connect.

Xsens MVN

We will receive data from the Xsens MVN software. In MVN, open the data streaming pane and add a new destination, entering the IP of the machine running Houdini, the port you wish to stream to, and the UDP protocol. The default options (position and quaternion, character metadata, scaling data) should work.

Then in Houdini select the port you chose and hit connect.

Raw UDP Data

This can be useful for prototyping streaming from new devices. All it does is start a UDP server on the given port and output the most recently received packet's contents in the data detail attribute. UDP is a common protocol for mocap data; you could wire this node into a Python SOP to parse the data and output a skeleton, as in the sketch below.
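For example, a Python SOP downstream of this node could unpack the packet bytes into points. A minimal sketch, assuming a hypothetical packet layout of consecutive float32 (x, y, z) joint positions and that the raw bytes arrive as a string in the data detail attribute:

    import struct

    node = hou.pwd()
    geo = node.geometry()

    # Read the raw packet contents from the "data" detail attribute.
    # The attribute type may differ; this assumes a string of raw bytes.
    raw = geo.attribValue("data")
    data = raw.encode("latin-1") if isinstance(raw, str) else bytes(raw)

    # Hypothetical layout: a flat sequence of float32 (x, y, z) positions.
    count = len(data) // 12
    values = struct.unpack("<%df" % (count * 3), data[:count * 12])

    # Create one named point per joint for downstream KineFX nodes.
    name_attrib = geo.addAttrib(hou.attribType.Point, "name", "")
    for i in range(count):
        pt = geo.createPoint()
        pt.setPosition(hou.Vector3(values[3*i], values[3*i+1], values[3*i+2]))
        pt.setAttribValue(name_attrib, "joint%d" % i)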

Note that the node will likely recook more slowly than packets arrive, so this approach does not work if you need to read every packet or keep track of state across multiple packets. For that, you can create a C++ plugin.

Raw TCP Data

Works like the Raw UDP Data mode. This will connect a TCP client to the given port and IP, and then output each packet that is received.

Connect

Attempts to connect to the device. Note that multiple SOPs can be connected to the same device.

Disconnect

Disconnects from the device.

View Log

Opens a window where you can see the logs to debug your connection.

Facial Attributes

A pattern used to rename the blend shape detail attributes that define facial motion. When naming these attributes, every * character in the pattern is replaced with the streamed name of the attribute. The pattern must contain a * character to be considered valid; if it does not, the streamed name of the attribute is used instead. This option is available for all streaming devices that can stream facial motion capture.
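For example, with the pattern face_*, a streamed blend shape named jawOpen is stored as face_jawOpen. A minimal Python sketch of the renaming rule described above:

    def rename_blend_shape(pattern, streamed_name):
        # A valid pattern contains "*"; every "*" is replaced with the
        # streamed name. Invalid patterns fall back to the streamed name.
        if "*" in pattern:
            return pattern.replace("*", streamed_name)
        return streamed_name

    assert rename_blend_shape("face_*", "jawOpen") == "face_jawOpen"
    assert rename_blend_shape("face", "jawOpen") == "jawOpen"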

Configuration Attribute

The name of the configuration attribute created to store streamed joint information, which can be defined by the Configure Joints SOP. Examples of this kind of information include the rotation and translation limits of joints.

Convert Units

If enabled, the units of the streamed skeleton will be converted to the units defined by the Houdini scene.

Convert Up Axis

If enabled, the up axis of the streamed skeleton will be converted to the up axis defined by the Houdini scene.

Time Dependent

By default this is disabled, and the geometry automatically refreshes at most 60 times per second. For faster performance, you can enable this toggle, which stops the automatic refresh and marks the node as time dependent. You can then start playback with the real time toggle disabled to update as fast as possible.

Actors

A list of actor names to output. If you output multiple actors with duplicate joint names, you will run into issues when recording, and you may hit other issues downstream since KineFX expects unique names.

Recording

These parameters are used to record the streamed skeleton(s) and decide how that recording is output.

Start Recording

Once you've connected to your device, you can hit this to begin recording the live animation. Data is recorded at the input framerate even if the viewport is refreshing more slowly.

This will overwrite any previously recorded data.

Stop Recording

Ends the recording. You will now be able to view the recorded animation.

Clear Recording

Clears the recording.

Duplicate Recording

Duplicates the node with the current recording stashed, so that you can record another take while keeping access to the previous one.

Output Recording

Once you have recorded some data you can flip this toggle to display it.

Output Type

Choose 'Single Frame' for normal frame-by-frame playback, or 'MotionClip' to output the entire recording as a MotionClip.

Layering

These parameters define how the input skeleton and the output skeleton should be layered, and they allow you to combine motion capture performances from multiple sources at the same time.

These parameters are only available if an input is given and if the output is not a recorded MotionClip.

Base Layer

The skeleton which is treated as the base layer; the other skeleton is the second layer. The base layer defines the topology of the output skeleton. The detail attributes and point attributes of the second layer are merged into the base layer, using the name and actorname attributes to match points. Finally, the second layer is blended into the base layer using a similar process to the Skeleton Blend SOP.

Tip

If you are layering facial motion capture on top of a skeleton, set the Base Layer to be your skeleton.

Layer Components

The components of the local transforms which are blended into the base layer from the second layer.

Tip

If you are layering facial motion capture on top of a skeleton, select only the Rotation and Scale components. This ensures that the bones of your skeleton do not extend and contract.

Match Actor Name

If this is enabled, the actorname attribute will be used in addition to the name attribute when determining if a point in the base layer matches a point in the second layer. If either layer does not have an actorname attribute, points will be matched using only the name attribute.
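A minimal Python sketch of this matching rule, using plain values in place of real Houdini point attributes:

    def match_key(name, actorname, both_layers_have_actor, match_actor_name):
        # Points match when their keys compare equal. The actorname value
        # is only considered when Match Actor Name is on and both layers
        # actually have an actorname attribute.
        if match_actor_name and both_layers_have_actor:
            return (actorname, name)
        return (name,)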

Inputs

Skeleton

A skeleton which can be layered with the output animated skeleton (streamed or recorded). Options determining how these skeletons are layered can be found in the Layering folder.

Outputs

Skeleton or Motion Clip

Depending upon the values of the Output Recording and Output Type parameters, this output will be the skeleton(s) last received from the motion capture streaming device, the recording as an animated skeleton, or the recording as a MotionClip.

See also

Geometry nodes