Unfortunately you're running up against the current state of KineFX. All the tools are there at a low level, but a lot of the artist-facing tools you'd find in most other DCCs aren't (yet) there. It is still very much a TD tool. Artist-friendly controls have to be written. I know Side Effects is working on adding more tools. They started with tools aimed at a mocap/retargeting workflow, but basic character animation tools are still missing.
So there's no tool (to my knowledge) to just zero things out. But there are easier ways than using inverse matrices. You can easily set a child's local transform to identity, or copy the parent's position and transform to the child. It would be easy to wrap something like this up in an HDA and you'd have an easy tool to use. But again, it's up to you to write it (at the moment)...
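As a minimal sketch of the "copy the parent onto the child" idea in a Point Wrangle — note the parent lookup via a channel is purely an assumption for illustration; a real tool would walk the skeleton hierarchy instead:

```vex
// Sketch only: parent_pt is a hypothetical parameter holding the
// parent joint's point number -- a real HDA would derive it from
// the skeleton's connectivity rather than a channel.
int parent_pt = chi("parent_pt");

matrix3 parent_xform = point(0, "transform", parent_pt);
vector  parent_pos   = point(0, "P", parent_pt);

3@transform = parent_xform;  // match the parent's orientation
@P = parent_pos;             // snap the child onto the parent
```

Wrapped in an HDA with the parent resolved automatically, that's essentially the "zero out" button you're after.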
Technical Discussion » KineFX - Zero out childs local transforms to match to parent
- made-by-geoff
Technical Discussion » Set Driven Keys in KineFX
It's been years since I worked in Maya, but at its most basic aren't set driven keys in Maya just relative references in Houdini with the addition of some ramp parameters to affect how the driven reacts to the driver?
The Blend Pose tool wraps some of that up into a single (somewhat outdated) interface, but what it sets up can easily be recreated in SOPs with KineFX.
I would normally set up some wrangles that would give me various controls for how the driver affects the driven joint, but here's a very simple example of a controller driving a skeleton blendshape, like what you might do for a master finger curl controller.
Again, my Maya memory is foggy, so sorry if I misinterpreted the question.
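To make the relative-reference idea concrete, a driven-key-style setup in a wrangle can be as simple as a channel plus a ramp — the parameter names here are made up for the example:

```vex
// Sketch: a master "curl" channel drives a joint rotation through a ramp.
float driver = chf("curl");            // controller value, 0..1
float w = chramp("response", driver);  // ramp shapes how the driven reacts

// rotate the joint around its local X axis by up to 90 degrees
rotate(3@transform, radians(90.0 * w), {1, 0, 0});
```

The ramp is what gives you the "shape" of a set driven key curve: ease-ins, overshoots, and so on, without any keyframes.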
Technical Discussion » Houdini kinefx ikfk switching
Ugggh. I went down this same rabbit hole. It's a bear. I didn't find any way to approach this like you are used to in other apps without extensive Python scripting that went beyond my capabilities.
That said, if you have the flexibility, it's worth re-thinking what it is you are trying to accomplish and whether there isn't an easier way given the strengths of KineFX. Part of the reason we need FK/IK switching is because we're used to the idea of working with a single rig at a time. But KineFX makes that way of thinking less necessary.
Why not have an IK rig, do what you need to do with it, and then append a set of FK controls afterwards for fine-tuning? That may not be exactly what you are trying to do, but my point is that KineFX has made me think about rigging differently: as a series of rigging layers, instead of a single rig that has to match FK and IK, if that makes sense. And as SWest says, there's also lots of ways to use blend nodes to blend between IK and FK rigs.
Post some more specifics of what you're trying to do and we can try to find a solution.
Technical Discussion » Issues with Karma renderer using OpenColorIO
I'm not 100% sure, but I don't think "aces" is an acceptable output space. Wouldn't it need to be ACEScg or ACES2065-1 or something like that? Use the menu on the right of the Output Space field to see the available color spaces.
ACEScg has been working with Karma on my end.
Solaris and Karma » Randomizing instanced variants
Thank you Mark! Very helpful. From there I was able to randomize either with a wrangle or the set variants LOP.
However, I noticed that in the wrangle I had to randomize the variant index by @elemnum:

string variants_mtl[] = usd_variants(0, "/instancer1/Instance0", "mtl");
int random = @elemnum % 3;
usd_setvariantselection(0, @primpath, "mtl", variants_mtl[random]);

Whereas in the LOP, I had to use @prim in the variant name index:

@prim%3

I'm guessing that has to do with VEX vs. expressions, but are those attributes listed somewhere? I found the USD VEX page, but it took a bunch of trial and error to figure out @prim.
Solaris and Karma » Randomizing instanced variants
New to solaris and usd, but I've gone through the basic overviews and tutorials. Trying to understand better how to manipulate different levels of data in LOPs.
So let's say I've got an asset referenced in with 4 geometry variants and 4 material variants (so a total of 16 variants).
When I go to instance, instead of creating 16 different prims for each unique variant and instancing them as a collection, what's the most efficient way to randomize the instances?
Is there a way I can instance a single prim and randomize the variants after (or with) the instancer? Or do I have to establish all the possible variants ahead of the instancer?
Technical Discussion » How to fix KineFX incorrect weights in shallow space mesh?
KineFX, while promising, is still a bit of a work in progress. Most of the tools are there, but it's still a bit counter-intuitive, and because it's so new (add to that the fact that Houdini just isn't used a lot for character rigging and animation) there's still not a ton of documentation.
When you reduced the tetembed down to 0.2 and looked at the resulting mesh, were you still getting overlapping geo in the fingers? If you were, it is going to create problems. You can try a few other things:
-- reduce it down further. I've had meshes work at 0.0.
-- try a tetconform instead.
-- try a proximity capture instead of biharmonic (at least for the hands)
-- there's probably also a way to manipulate the bone capture lines to do something similar to adjusting the capture envelope, but I'm not sure off the top of my head. I'd have to poke around.
-- You can always go in and adjust the t-pose mesh so there's a little more space between the middle and ring finger before you do the capture.
-- Or, as I mentioned, remove the fingers from the bone capture lines SOP or try using a capture correct SOP to weight them to 0 and hand paint the fingers.
If you want to post a reduced .hip file, I'll try to take a look this weekend if I have some spare time.
Technical Discussion » In Rig Pose node, how to directly rotate bones?
Not sure I 100% understand, but I usually approach it two different ways. The problem, I think, is that you want the pelvis joint to have two different sets of children -- the upper legs and the spine -- but you can't rotate the pelvis without affecting both. So instead you have to split the legs from the spine.
1. Separate the base of the spine from the pelvis by creating a second joint close to the pelvis, but constrained to it. Translating the pelvis will still move the spine around, but you can then rotate the spine without affecting the legs.
2. Of course, you can also use an IK spline set up to control the bend of the spine. I usually find that easier to animate and more natural. But I'm guessing you have your reasons for wanting a strictly FK set up.
Technical Discussion » adding an offset to the frame number of an image sequence
This is Redshift specific, but it's what I've got handy. Should be able to adapt it:
-- First edit the parameter interface on the RS material builder VOP and drag the filename field from the texture node inside the VOP into the interface, effectively promoting that file path field so it is exposed on the VOP.
-- Then add a new stylesheet and style to your object
-- Add a target: point instances
-- Add a condition: Point name or path attribute: value=*
-- Add an override: Set material type=Material override path=/mat/path_to_your_shader_VOP
-- Add an override script: type=Material Parameter Override name=the name of the parameter you promoted above (tex0 by default) Override type=Attribute binding Value=myFilePath
Then add an attribCreate node to your SOP chain for the packed objects. Make a string attribute myFilePath and set it to whatever file path you want, complete with any expressions to offset the frame numbers per point instance. For instance, this offsets each instance by 10 frames:

/path/to/my/image/sequence/myImgSeq.`padzero(4, $F + ($PT*10))`.exr
I'll say that I wish this was easier. It's such a common thing and I still keep this stashed away because every time I need to do it, I forget.
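If you'd rather skip the attribCreate, the same string attribute can be built in a Point Wrangle instead — the path and the 10-frame offset are just the example values from above:

```vex
// Sketch: build the per-point file path with a 10-frame offset per instance.
// %04d gives the same zero-padding as padzero(4, ...).
s@myFilePath = sprintf("/path/to/my/image/sequence/myImgSeq.%04d.exr",
                       int(@Frame) + @ptnum * 10);
```

Either way, the stylesheet's attribute binding picks up myFilePath per packed point.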
Technical Discussion » How to fix KineFX incorrect weights in shallow space mesh?
Before we really dive into ways of fixing this after the capture, have you looked at the output of the tetembed? By default, it expands the incoming mesh and the resulting tet mesh gets overlaps, which is what usually causes the problem in the fingers.
You can dial down the mesh enlargement (all the way to 0 if necessary). You can also try a tet conform instead of the tet embed. Each of these result in slightly different weighting and can impact other parts of the capture, but I can usually avoid the annoying overlaps on the fingers you're getting there which results in less post processing of weights (or at least painting on parts of the mesh that are easier to work on).
Let me know if that works and we can move on to the eyes and other stuff.
In general, I usually remove eyes and any facial bones (other than the jaw) before they go into the bone capture lines, so that they aren't weighted. And then I do a second pass of weighting (usually manually, but it can be procedural) for the facial/eye bones.
Lastly, and I know this is not popular in Houdini-land, but I still think weights should be hand painted from scratch (if possible) for hero characters. There's lots of places where I use the biharmonic setup, but for hero characters, I still start with 0 weights on everything, make a big pot of tea, and start painting. Takes the better part of a day for a complex character, but at the end I KNOW I have clean weights.
Technical Discussion » How Do I Change The Friction Of The Collider Objects?
The attribute you want is friction:
https://www.sidefx.com/docs/houdini/vellum/vellumattributes.html [www.sidefx.com]
You can add it to an object or set of points and it acts as a multiplier on the friction settings in the solver. So if the static friction is set to .25 and the attribute on one plane is .1 and on another plane is 10, you'll get .025 and 2.5 as your friction amounts respectively.
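For example, setting the multiplier on one collider's points in a wrangle (the 0.1 value is just the example from above):

```vex
// Sketch: per-point friction multiplier read by the Vellum solver.
// With static friction at 0.25 in the solver, this plane's effective
// friction becomes 0.25 * 0.1 = 0.025.
f@friction = 0.1;
```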
Technical Discussion » HDA editing / match type properties and edit parms
I made a bunch of changes to an existing HDA that ships with Houdini thinking it was a one-time thing. But now I realize it would be nice to have around in the future. I made the original changes by editing parameters, but since I didn't use "type properties" even though I've now saved it out as a custom HDA, it loads up with the original parameter interface.
Is there any way to copy across the changes I made to the type properties window? I don't see an option in the drop down, but maybe I'm missing it. If I have to make the changes again manually it's not the end of the world, but I'd prefer a simpler solution. Cheers.
Edited by made-by-geoff - Aug. 21, 2022 09:24:16
Technical Discussion » Radial Basis Function vs biharmonic capture
Rok Andic has been covering his implementation of RBF for pose space deformations in KineFX. The link below is a quick demonstration. If you subscribe to his Patreon he has some more in-depth explanations.
I'll also note that in my experience, most corrective deformations can be done with simpler controls. The most common deformations around knees and shoulders, for instance, often only need a dot product of two bones to drive the deformation. But for areas with multiple blendshape possibilities (especially around the face of characters) RBF allows you to set multiple different "targets" that drive different blendshapes or combinations of blendshapes.
https://www.rokandic.com/blog/tag/kinefx [www.rokandic.com]
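As a rough sketch of the dot-product idea — the joint names and channels here are assumptions, adapt them to your skeleton:

```vex
// Sketch: drive a corrective weight from the bend between two bone vectors.
// shoulder/elbow/wrist channels hold hypothetical joint point numbers.
vector upper = normalize(point(0, "P", chi("elbow")) -
                         point(0, "P", chi("shoulder")));
vector lower = normalize(point(0, "P", chi("wrist")) -
                         point(0, "P", chi("elbow")));

// dot = 1 when the arm is straight, -1 when fully folded;
// remap to a 0..1 weight for a corrective blendshape.
f@corrective_weight = fit(dot(upper, lower), 1.0, -1.0, 0.0, 1.0);
```

Once you need several poses driving several shapes at once, that's where RBF earns its keep.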
Technical Discussion » KineFX - Remove all skin weighting of particular joint(root)
Yeah. You can use the Capture Attribute Unpack node to convert the capture weights into a format that is more easily accessible with VEX. You end up with two long detail arrays that give you the index of each joint; use find() to get the root joint's index. Then you'll see each point has two point attributes: an index array that shows which joints are influencing it, and their relative weights. Search each point's index array for the root joint's index and set the corresponding weights to 0.
https://www.sidefx.com/docs/houdini/nodes/sop/captureattribunpack.html [www.sidefx.com]
It's a little counter-intuitive, and to be honest I still find weighting easier to do by hand with weight painting, but it can be done. Post a hip file if the documentation isn't clear.
Edited by made-by-geoff - Aug. 17, 2022 21:29:28
Technical Discussion » Getting point position with parenting
Tomas, thanks for that. Super helpful. And the facing ratio trick: never would have thought of that.
Technical Discussion » Transforming an object with KineFX Skeleton Joint / Point
Normally you would do this with a parent constraint. It lets you blend it on and off (if your character has to, for instance, pick up the glasses (parented to the hand) and then put them on (parented to the head)).
https://www.sidefx.com/docs/houdini/character/kinefx/constraints.html [www.sidefx.com]
Technical Discussion » Getting point position with parenting
I'm trying to calculate facing ratio at SOP level as an attribute to feed into a shading network for a character (for reasons that don't matter here, I need to do this in SOPs and not in the shader itself). I wrote a basic bit of VEX to calculate the facing ratio, but if the object is transformed at the object level or parented, it doesn't work because it's grabbing the point position prior to parenting and object-level transformations, which are likely to occur in animation.
I thought I could use ow_space() but that doesn't seem to work (possibly because parent transforms are done after SOP level calculations?). Any way to grab point positions that will take object-level transforms into account?
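For context, the basic wrangle in question looks something like this — the camera path is illustrative, and this is the pre-transform version that breaks once object-level transforms are involved:

```vex
// Sketch: facing ratio toward a camera, computed at SOP level.
// optransform() pulls the camera object's world transform by path;
// cracktransform() extracts its translate component.
vector cam_pos = cracktransform(0, 0, 0, {0,0,0}, optransform("/obj/cam1"));
vector to_cam  = normalize(cam_pos - @P);

// 1 = facing the camera, 0 = edge-on or facing away
f@facing = clamp(dot(normalize(@N), to_cam), 0.0, 1.0);
```

The trouble is that @P here is the SOP-space position, before any parenting.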
Houdini Lounge » 19.5 installation check sum does not match
Anyone else getting checksum errors on the 19.5 production build when using the launcher? I've gotten them on both Mac and Windows and the install fails. I think I also got them on at least one of the default daily builds, but was able to get the 19.5.303 py37 version to install on Windows.