return _hou.HDAModule___getattr__(self, name)
AttributeError: 'module' object has no attribute 'matchIK'
First off, this error seems to indicate that your button can't find the specified function. In the callback script of the button, you can use
hou.phm()
to access that HDA's Python module, and then call the function like this:
hou.phm().matchIK()
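As a sketch, a button callback on the HDA could look like the function below. This is hypothetical (the function name is illustrative, and matchIK is assumed to be defined in the HDA's Python module); hou.Node.hdaModule() is the HOM call that hou.phm() resolves to in a callback context.

```python
# Hypothetical sketch of an HDA button callback (runs inside Houdini).
# 'matchIK' is assumed to be defined in the HDA's Python module.
def on_match_ik_pressed(kwargs):
    node = kwargs["node"]        # the HDA instance that owns the button
    node.hdaModule().matchIK()   # same as hou.phm().matchIK() in a callback
```

In the button's callback script field itself, the one-liner hou.phm().matchIK() is usually all you need.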
Second, you don't need to build the path with f"{hou.pwd().path()}". The HDA itself is treated as the parent node of all the nodes inside it. That means you can use a simple
hou.node()
command to browse to any node within your HDA. In your case, that could be either:
compute = hou.node("computerigpose1")
or
compute = hou.node("./computerigpose1")
If you want to browse into a subnet, you can do that the regular way of going through levels like this:
node = hou.node("subnet1/node1")
This should work the same across Houdini versions. I hope this solves your problem.
You can't export the blended mesh because the Character Blendshapes node removes the relevant data. Instead, you need to feed the animated skeleton with the blendshape channels right into the FBX Character Output. The character's mesh needs to have the blendshapes as a hidden group of primitives and that goes into the first input (mesh) of the FBX Character Output. Houdini 19 offers a more streamlined workflow for this.
Question 1: Unreal supports both joints and blendshapes, so it is up to you to decide what combination you want to use. Joints typically require a bit more setup to create facial animation than blendshapes, so you should ask your riggers/animators what they prefer.
Question 2: You can buy one of the latest iPhones to use Epic's free Unreal Live Face app, which allows you to stream basic facial animation into Unreal. This app uses Apple's predefined facial blendshapes, so the blendshape animations can be applied to any compatible character (such as from Reallusion's Character Creator). Other than that, you could opt for the quite expensive Faceware software called Studio, which basically does the same thing as Unreal's Live Face App, in combination with their hardware setup. This can be streamed into Unreal as well.
The lips are tracked by the software. The teeth shouldn't move because they are attached to bones. The tongue is usually only tracked when it is pushed out. If you want accurate tongue movement, an animator will probably have to manually add it, unless you decide to use lipsync software.
Here is how I got the PyCharm integration working, including autocomplete:
Go to your Python Interpreter settings (in the bottom right, or File>Settings>Project>Python Interpreter).
Click the gear icon in the upper right corner to show all your interpreters.
In the top left corner of the new window, click the "+" button to add a new interpreter.
Pick the System Interpreter and then browse to the path of your Houdini installation that has the Python executable that you want to use (2.7 vs 3.7 for example). The path probably looks something like this: C:\Program Files\Side Effects Software\Houdini 18.x.xxx\python27\python2.7.exe.
After you've pressed OK in the window, you will go back to the window with the list of interpreters. Select your new interpreter and click the 5th button from the top left to "Show paths for the selected interpreter".
By default it should have some paths to libraries already. However, the specific hython libs are probably not in the list, so you need to add that by pressing the "+" in the top left corner. The path should look something like this: C:\Program Files\Side Effects Software\Houdini 18.x.xxx\houdini\python2.7libs (or 3.7libs if you want to use 3.7). These libs will give you the autocomplete functionality for the HOM classes.
Last but not least, for each Houdini Python project you need to set the interpreter to this new interpreter. And in order to get the autocomplete working properly, you need to import the hou module at the start of each file where you want to use it. It is as simple as import hou, and PyCharm should recognize the module from the interpreter.
From there on, you can use the hou module the same way you would inside Houdini, for example:
import hou
hou.node("../..")
or
hou.pwd().geometry()
I hope this helps! I couldn't attach pictures because that would be quite a few pictures, so I hope that my instructions are clear enough to follow. If you need any further help, feel free to reach out to me.
Have you tried first removing the capture attributes from the captured mesh? The error says that the weights are already present, so it could be useful to remove the old before applying any new.
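As a minimal sketch of that idea, the function below deletes existing capture attributes before a fresh capture is applied. It assumes a Python SOP inside Houdini; "boneCapture" is Houdini's usual capture-weight point attribute, but check the attribute names on your own geometry. findPointAttrib() and Attrib.destroy() are the real HOM calls.

```python
# Hypothetical sketch: delete existing capture attributes so a fresh
# capture doesn't complain that weights are already present.
# Assumes a Python SOP inside Houdini; "boneCapture" is the usual
# capture-weight attribute, but verify the names on your geometry.
def remove_capture_attribs(geo, names=("boneCapture",)):
    removed = []
    for name in names:
        attrib = geo.findPointAttrib(name)
        if attrib is not None:
            attrib.destroy()
            removed.append(name)
    return removed
```

Alternatively, an Attribute Delete SOP with the capture attribute listed in its point attributes field does the same thing without any scripting.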
KineFX has support for blendshapes. This [www.sidefx.com] is another forum thread with a sample file to show you how to work with blendshapes in KineFX. You can export your character with the face morphs and import them with the FBX Character Import node. The rest geometry will contain the blendshapes, and the animated skeleton contains the blendshape channels that you can use to animate the blendshapes.
For exporting, make sure to export the data without passing the animation through the Character Blendshapes or Bone Deform nodes, as these nodes remove some of the data needed for exporting.
I was wondering if SESI is planning to/working on nodes that can import and export glTF 2.0 files, including animations? This would be very useful for game dev because it supports material connections and it's more optimized than FBX.
Introduction It's my first time posting on the forum, so I will briefly introduce myself. I am a game dev student at Breda University (formerly known as NHTV Breda in the Netherlands). I primarily specialise in motion-capture animation, and for a while now, I have been looking into rigging. Since we were taught Maya in our first year, that is where I started. However, after spending some time in Houdini to create a procedural environment for a school project, I liked Houdini so much that I decided to look much deeper into it. By now I have learned enough that I am rigging all the characters for my current project in Houdini.
To learn how to do rigging in Houdini, I followed several tutorials, including 3D Buzz's Technical Rigging and Advanced Character Rigging volumes, as well as Michael Goldfarb's rigging series (at least, the videos that have been released so far). I have made this post to give all my feedback on Goldfarb's series because I think it will be much easier/clearer to understand in one post rather than posting one comment under each individual video. Some of my points are about more intermediate rigging topics so they can be taken with a grain of salt if you think it is too advanced for a beginner tutorial series.
TL;DR Here are my major points of feedback:
- Parent space blending with the if(ch("../") == 1, 1, 0) condition is a broken/not fully polished feature, which makes it a nightmare to animate with. Sadly, Goldfarb skips the steps needed to make it work properly (smoothly).
- Scripting is an essential part of rigging. This series only briefly explains what the scripts do, and therefore does not teach the student a strong foundation in scripting, because the student can just download the scripts without learning anything about writing them.
- Some of the scripts can be improved. For example, the SplitBone tool doesn't actually use the given input name for the split bones. Strangely enough, the script works correctly in the video. Another example is the TwistExpression tool. This tool is hardcoded, meaning that it expects a certain number of bones. Perhaps the script is intended for situations with four bones, but it would probably be much more useful if it could work with any number of bones.
- Optimisation: there is hardly any talk about optimising a rig (this may come in later parts, but I want to mention it anyway).
Alright, let's get deeper into this.
Stepping through parent spaces At some point in the series, Goldfarb implements a parent blend constraint for the IK goal controller. Setting up the constraint is fairly simple until we get to the point where the blends in the constraint need to have the right values. Michael uses an if condition: if(ch("../") == 1, 1, 0). The problem with this condition is that it makes the blend values either 0 or 1, with no floats in between. This may seem correct for the blends because we don't want incomplete blends (50% space A and 50% space B), but over time it doesn't work. Here is what happens:
Let’s say we keyframe the space on frame 1 on “World”. Michael uses 4 possible blend spaces, of which “World” has index 0. This means that on frame 1 only blend1 is 1 and the rest is 0. If we now go to frame 10 and key on “Chest”, which has index 2, blend3 will turn to 1, and the rest, including blend1 for “World”, will go to 0. If we now play this animation, we’ll see that the blends only change on 2 frames in between frame 1 and 10. Around frame 3, the parameter will change from “World” to “COG” (index 0 to index 1 in the menu) and around frame 7 from “COG” to “Chest” (index 1 to index 2). And if we pay careful attention, we’ll see that the blends don’t blend smoothly from 0 to 1 or 1 to 0; instead, they act as if they were animated in stepped mode, which causes them to jump from 0 to 1 or 1 to 0 suddenly. If you were to keyframe the IK goal in different positions on both keyframes, you would notice a sudden snap that is the effect of the blends being animated like that. That is where the big problem is; the if-condition will never allow the blends in the constraint to blend smoothly from 0 to 1 or vice versa over multiple frames because it always snaps to either 0 or 1.
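The snapping behaviour can be demonstrated without Houdini. This pure-Python sketch compares the if-condition (which can only ever output 0 or 1) with the linearly keyframed blend an animator would expect:

```python
# The series' expression can only ever return 0 or 1 --
# there is no way for it to output an in-between value.
def stepped_blend(space_value, target_index):
    return 1.0 if space_value == target_index else 0.0

# What smooth space switching needs: a blend value that can pass
# through intermediate floats while two keyframes interpolate.
def linear_blend(frame, key_a=(1, 0.0), key_b=(10, 1.0)):
    (fa, va), (fb, vb) = key_a, key_b
    t = max(0.0, min(1.0, (frame - fa) / float(fb - fa)))
    return va + t * (vb - va)

# Sampling frames 1 to 10: the stepped version jumps straight from
# 0.0 to 1.0, while the linear version ramps through every value.
stepped = [stepped_blend(0 if f < 5 else 2, 2) for f in range(1, 11)]
smooth = [round(linear_blend(f), 2) for f in range(1, 11)]
```

The stepped list never contains anything between 0 and 1, which is exactly the pop you see on the rig; the smooth list is what keyframed blends with proper interpolation would produce.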
So, what could be a solution to this issue? One could be that SESI implements a built-in fix. For example, they could implement a feature that allows the user to keyframe multiple parameters in the HDA by keyframing only a single parameter in the UI. This would enable us to keyframe the blends in the constraint when we keyframe the IK space parameter. And then we can make those keyframes interpolate smoothly. With the if-statement from the series, this is impossible to do.
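Until such a feature exists, a callback on the space parameter could keyframe the blend parameters itself. This is a hypothetical sketch (the parm names blend1..blendN are assumptions about the constraint setup); hou.Keyframe, setFrame, setValue and Parm.setKeyframe are the real HOM calls, and the hou module is only available inside Houdini.

```python
# Hypothetical sketch: a callback on the space parameter that also keys
# every blend parm, so the blends can interpolate smoothly between keys.
# Assumes parms named blend1..blendN on the node; runs inside Houdini.
def key_space_switch(node, space_index, frame, num_spaces=4):
    import hou  # provided by Houdini
    for i in range(num_spaces):
        key = hou.Keyframe()
        key.setFrame(frame)
        key.setValue(1.0 if i == space_index else 0.0)
        node.parm("blend%d" % (i + 1)).setKeyframe(key)
```

With real keyframes on the blends (instead of an expression), the animator can then change the interpolation between keys however they like.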
Conclusion My point of feedback is that Michael didn’t go in-depth enough for this system to work well for an animator. If you follow(ed) the series, implemented it this way, and then gave the rig to an animator, they would come back to you within a few minutes to tell you that the feature is broken and that it would be easier to animate without it than with the function implemented like this. For a future series, I suggest proving that a feature works after implementing it by testing it over a few frames.
Improving scripts It is pretty sweet that Michael provides everybody with scripts that are free to download. However, I think some of the scripts could use improvement. For example, the SplitBone tool, which is used in the arms and fingers to split the bones into a custom number of bones, has an issue with the naming. When pressing this shelf tool, it shows a pop-up window asking for a name for the new bones. Upon pressing <OK>, it is supposed to name the new bones with whatever the user wrote down. This is bugged, though, because the tool never uses the new name. In the script, one can see that the new bones are always named "_split", regardless of user input. This is, of course, a simple bug that can easily be fixed, so my point of feedback would be to solve that issue. Another example is the TwistExpression tool. This tool is hardcoded, meaning that it expects a certain number of bones. If you give it a different number of bones, it will show an error. I think this could be improved because it looks like the tool should be usable on any number of bones. Unfortunately, since the script is pretty much entirely hardcoded, it will require an overhaul to work for any number of bones.
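The naming bug is easy to illustrate in plain Python. These two functions are illustrative stand-ins, not the actual script code:

```python
# Illustration of the bug described above: the user's input is ignored,
# so every split bone ends up named "_split<index>".
def buggy_split_name(user_name, index):
    return "_split%d" % index  # user_name is never used

# The fix: actually use the name entered in the pop-up window.
def fixed_split_name(user_name, index):
    return "%s_split%d" % (user_name, index)
```

Whatever name the user types into the pop-up, the buggy version always produces "_split1", "_split2", and so on, while the fixed version prepends the user's name as intended.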
Learning to write code As many riggers will probably agree, writing scripts is an essential part of the rigging process. But how does one learn to do this? As Dr Strange said: “Study and practice.” And that is what I think should happen in a beginner's tutorial. Simple tools that are used countless times, like the FKControl tool, could be used as an excellent example of a script that can be used to learn the foundations. Michael goes over the script briefly, so we have a rough idea of what the code is doing, but I think that it is not enough to learn how to do it yourself. There are tutorials and a library of documentation on basic commands, but I still think it should be included in this series, because it is so focused on introducing new people to rigging in Houdini, and thus it should introduce them to scripting (most likely in Python) as well.
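To make that concrete, here is a hypothetical sketch of the kind of small helper a beginner could be walked through line by line. This is not Michael's actual FKControl code; the node names and the offset/control wiring are assumptions, while createNode() and setFirstInput() are the real HOM calls at Houdini's object level.

```python
# Hypothetical sketch: create an offset null and a control null for a bone,
# wiring control -> offset -> bone. Runs inside Houdini at the /obj level.
def create_fk_control(obj_network, bone, name):
    offset = obj_network.createNode("null", name + "_offset")
    ctrl = obj_network.createNode("null", name + "_ctrl")
    offset.setFirstInput(bone)   # the offset inherits the bone's space
    ctrl.setFirstInput(offset)   # the null the animator actually grabs
    return ctrl
```

A dozen lines like this, explained one by one, would teach far more about scripting than downloading a finished shelf tool.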
Rig optimisation As mentioned earlier, the FKControl tool is useful to create a controller quickly and is used frequently in the series. The tool creates 3 new nulls, and using it over and over again generates a sea of nulls that one could easily get lost in, despite everything being named correctly. Plenty of these nulls exist only because the FKControl tool created them, but they are never used. It would be nice to have some information on how this impacts the performance of the rig. There is also a lack of in-depth discussion on when it is better to use constraints over relative references or vice versa. Setting up relative references is much quicker than making constraints for every bone in a rig, so a discussion of when each is the better choice would be welcome.
To rig for games or not to rig for games There are many posts online in which people wonder how they can accurately export their animation to a game engine. This starts with building a compatible rig. I think that the series does not make clear that this rig is not designed for game animation, so I suggest the description of the series should mention this. It also leads me to a new tutorial request: it would be great if there could be a series that goes deep into how to build rigs for games and export them correctly.
Miscellaneous questions Some general questions that I would like answered:
- Why are the body parts not stored in the HDA right away?
- Why does every limb have its own Geo Display option? Once the final character is skinned to the bones, is it possible to hide specific primitives or do you hide the entire skinned geometry?
- What could the Python SOP be used for? Since this is cooked on every frame, I suppose it has its uses.