Mocap Retarget To FBX RIG?

Enivob (Member, joined June 2008)
I put together a short Python script that generates CHOP expressions to link a .bclip mocap file to an imported FBX rig. It is basically functioning, but the motion tracking is not that great.
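
Roughly, the script sets parameter expressions like this (a minimal sketch; the node paths and channel names are placeholders for my scene, not the actual file):

import hou

MOCAP_CHOP = "/obj/mocap/chopnet/clip_out"   # placeholder: CHOP reading the .bclip
RIG = "/obj/fbx_rig"                         # placeholder: imported FBX subnet

def link_rotations(channel_prefix, null_name):
    # Reference each mocap rotation channel from the FBX null's rotate
    # parameters via a chop() expression.
    null = hou.node("%s/%s" % (RIG, null_name))
    for axis in "xyz":
        null.parm("r" + axis).setExpression(
            'chop("%s/%s:r%s")' % (MOCAP_CHOP, channel_prefix, axis))

link_rotations("upperarm_left", "LeftArm")   # placeholder bone/null names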

Does anyone know how to get a better match from CHOPs to the rig?

Attachments:
bclip_driver.gif (843.1 KB)
ap_mocap_retarget_share_050618.zip (2.7 MB)

Enivob (Member)
I am basically at a dead end on this.

I have posted on OdForum and both Discords, and no one seems to know how to implement a retargeting solution for mocap.

So how do the big studios do it?
Do they hand-animate every single background character?
I thought that, by now, mocap was mature enough to include in a simulation-based pipeline.
I can't seem to find the tool that connects mocap to a rig.
Edited by Enivob - May 18, 2018 09:20:55
Michael Goldfarb (SideFX Staff, joined July 2005)
Retargeting isn't a straightforward task; most people use MotionBuilder.
You can use constraints in Houdini, but it's not ideal.
Enivob (Member)
Let's leave MotionBuilder off the table for now. I don't have thousands of dollars to buy the program, nor the time to learn yet another app. It would also take me away from CHOPs, which I want to leverage for the hundreds of mocap clips and multiple characters I need to process for a stadium shot.


If you take a look at my posted example file, I am just routing the CHOP motion to the target rig's bone controllers, which happen to be FBX-imported nulls.

So there are pin and spring constraints; how would those help me drive one bone set from another?
Wouldn't a pin be the same as a Copy/Paste Relative expression if I am only driving rotation?
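
By Copy/Paste Relative I mean a plain relative channel reference, something like this (the paths are placeholders for my scene):

import hou

# Placeholder paths: the target null's rx just reads the source bone's rx
# through a relative channel reference.
hou.node("/obj/fbx_rig/LeftArm").parm("rx").setExpression(
    'ch("../../mocap/upperarm_left/rx")')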
Edited by Enivob - May 18, 2018 11:41:48
Michael Goldfarb (SideFX Staff)
Unfortunately, retargeting is often more complicated than that…
Here is a quick file. All I did was use the Blend Constraint shelf tool, set to Simple, with just rotates and scales, and Keep Position set to After.

It's not perfect, but it gets the job done.

Attachments:
ap_mocap_retarget_share_050618_SideFX.hiplc.zip (2.3 MB)

Enivob (Member)
OK, I went through the process of picking the source and target bones with the Blend Constraint shelf tool. The match looks a lot better, but there are still some offset issues.

Is there a way to apply an offset to a specific constraint axis?
Edited by Enivob - May 18, 2018 17:40:06

Attachments:
blend_constraint.gif (2.5 MB)

malbrecht (Member, joined Oct. 2016)
Moin,

in my experience, the biggest problem with retargeting is taking care of (limb) scaling. If your mocap joints do not match your target joints, you get all kinds of offset issues. I started writing a tool that tried to compensate for that, but it's not trivial for arbitrary mocap files. So I ended up writing a rig-dependent script that would "simply" convert the mocap data's scaling to match the respective rig.
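
A minimal sketch of the kind of compensation I mean (a hypothetical helper, not my actual tool): scale the mocap translations by the ratio of target to source bone lengths so the proportions match.

import hou

def limb_scale_ratio(source_bone_path, target_bone_path):
    # Houdini bone objects expose their length on the "length" parameter.
    src = hou.node(source_bone_path)
    tgt = hou.node(target_bone_path)
    return tgt.parm("length").eval() / src.parm("length").eval()

# Placeholder paths: scale the clip's hip translation by the leg-length ratio
# so the feet plant at the target rig's proportions.
# ratio = limb_scale_ratio("/obj/mocap/thigh_left", "/obj/rig/thigh_left")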

Things like this could be (semi-)automated, but they will probably always suffer from vastly varying rigs and data. There is a reason for MotionBuilder's existence.

You asked:
So how do the big studios do it?
In my world, the best answer to your specific question has been given by Michael Goldfarb: use the right tool for the job.

I would still like to write a "poor man's MotionBuilder", even inside/for Houdini, but my fear is that it would still be a huge task, and (paying) users are hard to come by. These days, everyone who isn't a "big studio" tends to demand everything for free :-/

Marc
Edited by malbrecht - May 19, 2018 06:55:37
Enivob (Member)
I have written some Python code to construct a constraint network and populate it with the same nodes that the Blend Constraint shelf tool generates. However, my result causes the arm to disappear instead of functioning correctly.

Does anyone know what additional step the Blend Constraint shelf tool performs that my code does not?
import hou

def installBlendConstraint(source_name, target_name):
    # bone_source and bone_target are the paths to the mocap rig and the FBX
    # rig subnets; they are defined earlier in my script.
    s = hou.node("%s/%s" % (bone_source, source_name))
    if s is not None:
        t = hou.node("%s/%s" % (bone_target, target_name))
        if t is not None:
            chopnet_name = "constraints"
            # Enable constraints on the target NULL and point it at the CHOP net.
            t.parm("constraints_on").set(1)
            t.parm("constraints_path").set(chopnet_name)
            
            # Create a chop network to manage our constraint nodes, inside this NULL.
            chopnet_node = t.createNode("chopnet", chopnet_name)
            
            # The first node created becomes the chopnet's output.
            constraint_offset_node = chopnet_node.createNode("constraintoffset", "offset")
            #constraint_offset_node.parm("vex_rate").set(30)     # Should be $FPS.
            
            constraint_GWS_node = chopnet_node.createNode("constraintgetworldspace", "getworldspace")
            constraint_GWS_node.parm("obj_path").set("../..")
            
            constraint_geo_node = chopnet_node.createNode("constraintobject", s.name())
            constraint_geo_node.parm("obj_path").set(s.path())
            
            constraint_sequence_node = chopnet_node.createNode("constraintsequence", "sequence")
            constraint_sequence_node.parm("blend").set(1)
            constraint_sequence_node.parm("writemask").set(504)    # Only the rotate and scale bits.
            
            # Wire the new nodes together.
            constraint_offset_node.setInput(0, constraint_sequence_node)
            constraint_offset_node.setInput(1, constraint_GWS_node)
            constraint_sequence_node.setInput(0, constraint_GWS_node)
            constraint_sequence_node.setInput(1, constraint_geo_node)
            
            constraint_GWS_node.moveToGoodPosition()
            constraint_geo_node.moveToGoodPosition()
            constraint_sequence_node.moveToGoodPosition()
            constraint_offset_node.moveToGoodPosition()
Is there some additional programming task I need to do to make this code function like the shelf tool?
Edited by Enivob - May 19, 2018 09:23:32

Attachments:
Untitled-1.jpg (291.0 KB)

Enivob (Member)
OK, after poking around inside the constraint network, it looks like my code needs to press the Update button on the newly created Offset node. That button appears to capture the offset required to preserve the object's current world transform.

# Press the Update button on the offset node.
constraint_offset_node.parm("update").pressButton()

With this line added, my arm works just like the shelf tool result.
Enivob (Member)
So I am getting a pretty good match overall, but as you can see, the arms of the T-pose do not align correctly. This causes the arms to collide with the body later on.

I tried adjusting the Rotational Pivot Offset for the controller, and while I do get a response on the X and Z axes, Y seems to be locked and accepts no value. But Y is exactly the axis I need to adjust.

I also tried inserting a Channel Wrangle between the CHOP fetch and the sequence blend, but modifying the channels there causes the rig's mesh to explode.

Is there some other way to add an offset to a specific constraint NULL?
Edited by Enivob - May 19, 2018 12:14:15

Attachments:
blend_constraint.gif (420.0 KB)
Untitled-1.jpg (231.3 KB)

Enivob (Member)
I manually installed a few Channel Transform nodes inside the constraint network. This lets me offset specific nodes by a fixed amount. It does work: I get a closer match on my T-pose, and the resulting animation has less arm-to-body collision.

The offsets work fine when walking forward, but when the mocap turns around, the offsets are wrong. I assume the offsets need to be reversed (multiplied by -1).

I am not sure how to detect which way the rig is walking in order to reverse the offset. Is there some way to detect rig direction?
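
Something like this is what I am imagining (a rough sketch; it assumes the hip null's local Z axis is the rig's forward axis, which I have not verified for my rigs):

import hou

def facing_sign(hip_path, frame):
    # +1 if the hip's local Z axis points toward world +Z at this frame, -1 otherwise.
    hip = hou.node(hip_path)
    xform = hip.worldTransformAtTime(hou.frameToTime(frame))
    fwd = hou.Vector3(0, 0, 1) * xform.extractRotationMatrix3()
    return 1 if fwd.dot(hou.Vector3(0, 0, 1)) >= 0 else -1

# offset *= facing_sign("/obj/mocap/hips", hou.frame())   # placeholder path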
Edited by Enivob - May 19, 2018 14:03:44

Attachments:
blend_constraint.gif (2.5 MB)
Untitled-1.jpg (136.1 KB)

Enivob (Member)
OK, I am abandoning offsetting the values inside the constraint network.

Instead, it seems simpler to just offset the mocap CHOP value on the source bone itself. The main drawback is that the mocap rig no longer visually aligns with the target character rig, but that is not too important at this final stage.

Here you can see the effect of offsetting the CHOP values on the arms only.
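
In practice this just means adding a constant to the source bone's channel expression, something like this (placeholder paths and channel names):

import hou

# Bake a +45 degree offset into the source bone's expression instead of
# offsetting inside the constraint network.
hou.node("/obj/mocap/upperarm_left").parm("ry").setExpression(
    'chop("../chopnet/clip_out/upperarm_left:ry") + 45')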
Edited by Enivob - May 19, 2018 22:12:45

Attachments:
blend_constraint.gif (1.2 MB)

Enivob (Member)
Success is so fleeting. While offsetting the mocap rig did work in the one instance shown above, the alignment is lost when I load a new mocap session.

I thought I'd try the True Bones 500 free set of BVH files. I get a similarly bad result: neither the arms nor the legs track well, and the wrist flops like a fish.
Edited by Enivob - May 20, 2018 14:00:51

Attachments:
blend_constraint.gif (2.5 MB)

Member (98 posts, joined Aug. 2014)
Enivob
Success is so fleeting. While offsetting the mocap rig did work in the one instance shown above, the alignment is lost when I load a new mocap session.

I thought I'd try the True Bones 500 free set of BVH files. I get a similarly bad result: neither the arms nor the legs track well, and the wrist flops like a fish.

Thanks for the link to True Bones 500. I just played with some of the files; it looks like good mocap, properly filtered for final use.
Regarding your offsets: as a general rule, you want a T-pose as the retargeting pose. That is, for something like the MakeHuman model you probably used, the bone capture pose (bind pose in Maya) for the arms should be some 45 degrees off, not zero. In other words, the bone capture pose and the initial motion retargeting pose are not the same thing.
Also, for any motion retargeting, the bones' local orientation axes and rotation order should be considered too, even if you're using some quaternion interpolation to constrain the bones (as these blend constraints probably do). A Houdini bone has a fixed Z local orientation axis and ZYX rotation order. That actually matches the usual older BVHs, but not the BVH files from this distribution; there is no general rule or standard. Only when the Houdini-style local orientation axis and rotation order match the mocap as well is a plain copy-paste of rotations supposed to work.
Otherwise you'll want a pro tool.
Besides MotionBuilder, there are affordable apps that can do decent retargeting, like Maya LT (though it can't import BVH; FBX or Maya mb/ma is required).
For info about a BVH file, including its local orientation axes and rotation order (in other words, whether copy-paste has a chance of working in Houdini), you could try the BVHacker [www.bvhacker.com] utility.
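
You can also read the rotation order straight off the CHANNELS lines in the BVH header; a small sketch:

def bvh_rotation_order(path):
    # The first CHANNELS line in a BVH header lists e.g.
    # "Zrotation Xrotation Yrotation", which means ZXY rotation order.
    with open(path) as f:
        for line in f:
            if "CHANNELS" in line:
                return "".join(t[0] for t in line.split() if t.endswith("rotation"))
    return None

print(bvh_rotation_order("walk01.bvh"))   # e.g. "ZXY" (placeholder filename)
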
Enivob (Member)
Thanks for the info. I have tried flipping the axes every which way.

Michael's blend constraint solution does work for the spine, neck, head, and legs, but the arms need some other solution. Is there some way to measure the angle of one bone against another and return a deviation angle, perhaps?

I am looking for any alternative approach to making the arms track accurately.
Edited by Enivob - May 20, 2018 21:02:12

Attachments:
blend_constraint.gif (1.8 MB)

Member (98 posts, joined Aug. 2014)
Enivob
Is there some way to measure the angle of one bone against another and return a deviation angle, perhaps?

I am looking for any alternative approach to making the arms track accurately.

Of course there is a way to get the angle, by dot product and trig functions, quaternion distance, or the like; however, applying the offset is a different story. At the end of the day it's an Euler rotation, the result of three rotations performed one after another, and the outcome can be anything but accurate. There is also a good chance of flipping caused by gimbal lock.
You might get lucky sometimes, but not all the time.

Generally, you want your offset in parent space, similar to the effect of an animation layer, where the entire rotation is rotated by the parent. In any case, though, the first thing you want is an initial retargeting pose where the rotations match the T-pose as closely as possible: arms aligned to the world X axis, legs to Y.
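
For what it's worth, measuring the deviation itself is simple enough; a sketch (it compares the bones' world-space Z directions, and assumes both rigs follow the same pointing convention):

import math
import hou

def bone_deviation_degrees(a_path, b_path):
    # Angle between the two bones' world-space Z directions; the result is the
    # same whether the convention points bones along +Z or -Z.
    za = hou.Vector3(0, 0, 1) * hou.node(a_path).worldTransform().extractRotationMatrix3()
    zb = hou.Vector3(0, 0, 1) * hou.node(b_path).worldTransform().extractRotationMatrix3()
    cosang = max(-1.0, min(1.0, za.normalized().dot(zb.normalized())))
    return math.degrees(math.acos(cosang))
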
Enivob (Member)
in any case, the first thing you want is an initial retargeting pose where the rotations match the T-pose as closely as possible: arms aligned to the world X axis, legs to Y.

I basically have that, so what's next?

Assuming I have a pretty good match on the first T-pose/A-pose frame, how do I calculate the arm rotations from the mocap source and apply them to the rig target arms?

Simple expression routing from bone to bone does not seem to work, and the blend constraint does not work either. So I assume it is more a matter of analyzing the source and target to produce a final "golden" rotation that I then apply to the rotation fields on the rig.
Edited by Enivob - May 21, 2018 08:59:36
Enivob (Member)
what you want is an initial retargeting pose where the rotations match the T-pose as closely as possible: arms aligned to the world X axis, legs to Y.

Following your advice does introduce a manual setup step into the process. However, when I manually align the rig to the mocap T-pose before retargeting, I find that I can use Michael's blend constraint technique on the arms with pretty good alignment. I was trying to avoid manual steps as much as possible, but in this case the manual step does lead to success.

I can now swap mocap clips behind the scenes on my retarget rig and still get a good sync/match.
Edited by Enivob - May 21, 2018 10:21:00

Attachments:
blend_constraint.gif (623.2 KB)
blend_constraint2.gif (919.1 KB)

Member (98 posts, joined Aug. 2014)
Enivob
Assuming I have a pretty good match on the first T-pose/A-pose frame, how do I calculate the arm rotations from the mocap source and apply them to the rig target arms?

Just to answer: it's perfectly doable, but it's still not enough in many cases. In apps like Maya or Softimage, it's possible to create a bunch of orient constraints between the counterparts of the two rigs, to get the global orientation of anything to behave as a local orientation. If I'm correct, the closest thing in Houdini is the Object CHOP. In practice, an orient constraint is not enough if the bones' local orientation axes don't match, so the simple solution is to create "adapters": nulls parented to the target's bones, each rotated by 90 or 180 degrees around one or two axes as needed, and to constrain to the adapters instead. If you look into some of the Houdini mocap bipeds, you'll see "bnd" objects with similar, somewhat inverse, functionality.
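
A sketch of such an adapter in Houdini (the pre-rotation values are per-rig guesses you would tune by hand):

import hou

def make_adapter(bone_path, pre_rot=(0.0, 90.0, 0.0)):
    # Create a null parented under the bone, pre-rotated by a fixed amount to
    # reconcile mismatched local orientation axes; constrain to this instead.
    bone = hou.node(bone_path)
    adapter = bone.parent().createNode("null", bone.name() + "_adapter")
    adapter.setFirstInput(bone)           # object-level parenting
    adapter.parmTuple("r").set(pre_rot)   # the 90/180 degree orientation fix
    adapter.moveToGoodPosition()
    return adapter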

However, for interaction with the ground, like walking or running, a plain transfer of rotations will only work correctly if the proportions of everything below the hips are the same on both rigs (the length ratio between the thigh and shin bones, and so on). Otherwise you'll get sliding and "diving" feet. So, as far as I know, the pro solutions are considerably more complicated, conceptually closer to motion trackers than to some plain recipe that can be shared in a few forum posts.
Enivob (Member)
I have managed to auto-construct constraint networks linking each mocap source bone to a target null controller in the FBX rig. So far the results are promising; I can target four different rig types at this point.

Shown here are the Render People, MakeHuman - Ergonomy, MakeHuman - Unity, and Manuel Bastioni rig types being targeted.
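
The per-rig part is just a bone-name mapping table feeding the installBlendConstraint() builder from my earlier post (the names below are illustrative, not the exact FBX node names):

# Map mocap bone names to each rig type's null controller names.
RIG_MAPS = {
    "makehuman_unity": {"upperarm_left": "LeftArm", "lowerarm_left": "LeftForeArm"},
    "manuel_bastioni": {"upperarm_left": "upperarm_L", "lowerarm_left": "forearm_L"},
}

def retarget(rig_type):
    for source_name, target_name in RIG_MAPS[rig_type].items():
        installBlendConstraint(source_name, target_name)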
Edited by Enivob - May 29, 2018 22:10:13

Attachments:
unity_rig_type.gif (2.0 MB)
