amm

Recent Forum Posts

Understanding how Houdini cooks SOP nodes. June 21, 2018, 9:37 p.m.

In my practical experience with Attribute VOPs, the node creates a copy of the modified (exported) attributes, so it won't accumulate everything created by upstream nodes. It's possible to avoid the copy by enabling ‘Compute Results In Place’, but only if an attribute read from the first input, like P, is not modified.
When it comes to caching, I think it's definitely not a plain yes or no; generally Houdini tries to adapt to what you're doing. More selections all around mean faster display, most likely thanks to more caching.
It definitely behaves like a tracking/checking system under the hood, before it evaluates anything.
You can force caching by locking the node. MMB over a SOP or Attribute VOP should display the memory usage and the changes created by the node. Generally Houdini is really good at lazy evaluation. For deformations, say, it's enough to have the animation-independent nodes evaluated before the dependent ones; Houdini takes care of the rest. Groups are the option of choice if you want to limit the evaluation to only a part of the mesh.
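The tracking/checking behaviour can be pictured as a dirty-flag system with version counters. Here's a toy Python sketch of that concept (my own illustration, not Houdini's actual implementation — node names and the version scheme are made up):

```python
class Node:
    """Toy lazy-evaluation node: re-cooks only when its own parms changed
    (dirty) or when an upstream node produced a new output version."""

    def __init__(self, name, func, inputs=()):
        self.name = name
        self.func = func            # computes output from input values
        self.inputs = list(inputs)
        self.cache = None           # cached cook result
        self.version = 0            # bumped whenever the output changes
        self.seen = None            # input versions used for the cache
        self.dirty = True           # a parameter changed since last cook
        self.cook_count = 0

    def touch(self):
        """Simulate a parameter change on this node."""
        self.dirty = True

    def cook(self):
        vals = [n.cook() for n in self.inputs]          # pull upstream lazily
        versions = tuple(n.version for n in self.inputs)
        if self.dirty or versions != self.seen:         # anything stale?
            self.cache = self.func(*vals)
            self.seen = versions
            self.version += 1
            self.cook_count += 1
            self.dirty = False
        return self.cache                               # otherwise: cached

grid = Node("grid", lambda: [1, 2, 3])
deform = Node("deform", lambda pts: [p * 2 for p in pts], [grid])
deform.cook()
deform.cook()   # second cook is free: both nodes reuse their caches
```

Locking a node, in this picture, would amount to never marking it dirty again.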

Look At VOP node. May 21, 2018, 5:27 p.m.

Dav_Rod
Hello,

the documentation about this node doesn't clarify much (for me, at least). Could somebody explain a little what applications it is useful for? Are there some test files to practice with? When do you use it?


Thanks in advance.

In practice, it does the same as the Aim/Direction constraint in other DCC apps; that is perhaps one of the most important constraints for rigging, and also a sort of complement to an IK chain. Both are able to create a rotation from input positions.
One slightly unusual thing is ‘from’. Leaving ‘from’ at zero and feeding two non-parallel vectors into ‘to’ and the up vector does exactly the same as an Aim/Direction constraint.
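For intuition, here is what an aim/look-at setup computes, sketched in plain Python: an orthonormal frame built from the ‘to’ direction and the up vector via two cross products. The axis convention here (Z aims at the target, Y approximates up) is my assumption; the actual VOP lets you choose axes:

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return [c / l for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def look_at(frm, to, up):
    """Rows are the X, Y, Z axes of the resulting rotation frame.
    Convention assumed here: Z aims at the target, Y approximates 'up'."""
    z = normalize([t - f for t, f in zip(to, frm)])  # aim axis
    x = normalize(cross(up, z))   # sideways axis, perpendicular to both
    y = cross(z, x)               # rebuilt up, exactly perpendicular
    return [x, y, z]
```

This also shows why the two vectors must be non-parallel: the first cross product would collapse to zero length otherwise.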
If you just want a test file, I could offer this thing [vimeo.com] - probably a way over-complicated example, but I'd say it's full of ‘practical solutions’ typical for rigging and building deformations.

Mocap Retarget To FBX RIG? May 21, 2018, 1:53 p.m.

Enivob
Assuming I have a pretty good match on the first T-pose/A-pose frame, how do I calculate the arm rotations from the mocap source and apply them to the rig target arms?

Just to answer: it's perfectly doable, but still not enough in many cases. In apps like Maya or Softimage, it's possible to create a bunch of Orient constraints between the counterparts of the two rigs, to get the global orientation of anything to behave as a local orientation. If I'm correct, the closest thing in H is the Object CHOP. In practice, an Orient constraint is not enough if the bones' local orientation axes don't match, so a simple solution is to create ‘adapters’: nulls parented to the target's bones, each rotated by 90 or 180 degrees around one or two axes as needed, and to constrain to the adapters instead. If you look into some of the Houdini mocap bipeds, you'll see ‘bnd’ objects with a similar, somewhat inverse functionality.
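The adapter idea is just a constant rotation offset multiplied onto the incoming global rotation. A rough Python sketch with plain 3x3 matrices (the 30 and 90 degree values are hypothetical, just to show the composition):

```python
import math

def rot_x(deg):
    """3x3 rotation matrix around the X axis."""
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(deg):
    """3x3 rotation matrix around the Z axis."""
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Source bone's global rotation from the mocap stream (hypothetical value):
source = rot_z(30.0)
# Fixed offset baked into the adapter null, e.g. the target bone's axes are
# rotated 90 degrees around X relative to the source skeleton:
adapter_offset = rot_x(90.0)
# What the target bone receives when constrained to the adapter:
target = matmul(source, adapter_offset)
```

The offset never changes during playback, which is why a simple parented null is enough to hold it.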

However, in cases of interaction with the ground, like walking or running, a plain transfer of rotations will only work correctly if the proportions of everything below the hips are the same on both rigs (the length ratio between thigh and shin bone, and so on). Otherwise, you'll get sliding and ‘diving’ feet. So, as far as I know, pro solutions are way more complicated, conceptually closer to motion trackers than to some plain recipe that could be shared in a few forum posts.
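The proportion problem is easy to see with a planar two-bone leg: same hip and knee rotations, same total leg length, but a different thigh/shin ratio puts the foot somewhere else. A quick Python check (all the numbers are made up):

```python
import math

def foot_position(thigh_len, shin_len, hip_deg, knee_deg):
    """FK-evaluate a planar two-bone leg hanging from the hip at the
    origin; returns the ankle position in the hip's space."""
    a1 = math.radians(hip_deg)                 # thigh angle from vertical
    a2 = a1 + math.radians(knee_deg)           # shin inherits hip rotation
    knee = (thigh_len * math.sin(a1), -thigh_len * math.cos(a1))
    return (knee[0] + shin_len * math.sin(a2),
            knee[1] - shin_len * math.cos(a2))

# Same rotations, same total leg length, different thigh/shin ratio:
a = foot_position(0.5, 0.5, 30.0, -60.0)   # source proportions (hypothetical)
b = foot_position(0.6, 0.4, 30.0, -60.0)   # target proportions
# a != b: the foot lands in a different spot, hence the sliding feet
```

Copying rotations reproduces the pose, not the foot contact, and only identical leg proportions make the two coincide.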