In general, as long as color management is set up correctly, you only need to properly interpret your textures when you bring them in.
In your material, whatever texture node you are using should have an option to set the source color space. The "Automatic" default usually does a decent job, but it's safer to set it explicitly. If you're bringing in sRGB images, set it to "sRGB - Texture", or to whatever your source color space actually is.
There's also an "ocio colorspace transform" node if you want to do the conversion explicitly in a dedicated node. Generally you'll be converting sRGB textures to ACEScg, but that really depends on how you've set up your color pipeline.
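To make the transfer-function side of this concrete, here is a minimal Python sketch of the standard sRGB decode, the piecewise curve a colorspace transform applies before any gamut conversion. Note that a full sRGB-to-ACEScg conversion also needs a gamut matrix on top of this, which isn't shown; this is an illustration, not what any particular Houdini node executes.

```python
def srgb_to_linear(c: float) -> float:
    # Standard sRGB electro-optical transfer function (decode).
    # Values at or below 0.04045 use the linear segment,
    # everything above uses the 2.4 power curve.
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4
```

Middle gray illustrates why the interpretation matters: an sRGB value of 0.5 decodes to roughly 0.214 in linear light, so skipping the decode makes textures render far too bright.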
Technical Discussion » Using texture maps with ACES?
- made-by-geoff
- 63 posts
- Online
Rigging » Quad rigging in APEX
From what I can tell from the Chicken rig, Walter is using a basic 2-bone IK to drive the hip-knee-ankle portion of the leg and then driving the ankle_ctrl with a parenting / lookat setup based on the ball_ctrl. So the ball_ctrl drives the ankle_ctrl, which is the target of a basic 2-bone IK.
Like a lot of things in the APEX beta, it will work, but it's a bit of a workaround. There are other rigging set-ups that require 3-bone spring IK setups that are more elegant and produce more natural results. Which is why I was asking, but thanks for the responses. Looking forward to future releases.
Rigging » Quad rigging in APEX
Been diving into APEX; however, most of my rigging tends to be quadruped rigging, and I've hit a bit of a wall.
Haven't had as much time as I would like to explore, but some cursory experiments seem to show that currently neither the auto-rig components nor the APEX IK solvers can handle 3-bone IK. Has anyone found a way to do quadruped rigging with the current APEX nodes?
Or does anyone have suggestions on how to build a spring solver from scratch? I'd be willing to put in the time to craft one if I understood the underlying math better. Any help would be appreciated.
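On the underlying math: analytic 2-bone IK is the usual starting point before layering a spring solve on top. Here is a minimal planar sketch using the law of cosines, in plain Python rather than APEX code; the function name and return convention are mine, purely illustrative.

```python
import math

def two_bone_ik(l1: float, l2: float, target_dist: float):
    """Planar 2-bone IK via the law of cosines.

    Returns (root_angle, mid_angle) in radians: the offset of bone 1
    from the root->target line, and the interior angle at the middle
    joint (knee/elbow). Assumes the clamped distance is nonzero.
    """
    # Clamp so the target is always reachable (avoids acos domain errors).
    d = max(abs(l1 - l2), min(l1 + l2, target_dist))
    # Law of cosines: angle at the root between bone 1 and root->target.
    root_angle = math.acos((l1 * l1 + d * d - l2 * l2) / (2.0 * l1 * d))
    # Interior angle at the middle joint; pi means fully extended.
    mid_angle = math.acos((l1 * l1 + l2 * l2 - d * d) / (2.0 * l1 * l2))
    return root_angle, mid_angle
```

A 3-bone spring solver has to distribute the bend across two interior joints instead of one, which is why it can't be solved with a single closed-form triangle like this.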
Technical Discussion » orient joints strange behavior
I'm not getting the weird offsets of the handles and joints when I open your file. Might be a bug on your system?
As for the Orient Joints SOP: one thing I noticed is that you are trying to use a single node for the whole skeleton, which doesn't work well on joints with multiple children. When a joint has multiple children, the node takes the average of all the children to orient the parent, which is generally not what you want.
Generally you want to use one node per chain, using the "orient group" and "target group" to limit the joints that are being oriented. So generally I'll set the root/COG/hip manually, then use one node to orient the spine/torso, one for the legs, and one for the arms.
Lastly, I've said it before, but I only use procedural setups for joint orientation (and joint weighting) on BG characters and crowds. Hero characters really are better done manually. There's just too much that can go wrong down the line. Just my 2 cents.
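For intuition about what a joint orient actually computes, here is a rough Python sketch of building an orthonormal frame from an aim direction (toward a single child) and an up hint via cross products. This is the general construction orient tools use, not the Orient Joints SOP's actual code; all names are illustrative.

```python
def normalize(v):
    m = sum(x * x for x in v) ** 0.5
    return tuple(x / m for x in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def joint_frame(joint_pos, child_pos, up_hint=(0.0, 1.0, 0.0)):
    """Orthonormal (aim, up, side) frame for a joint with one child."""
    # Aim axis points down the bone toward the child.
    aim = normalize(tuple(c - j for c, j in zip(child_pos, joint_pos)))
    # Side axis is perpendicular to both the up hint and the aim.
    side = normalize(cross(up_hint, aim))
    # Recomputed up guarantees the frame is orthogonal.
    up = cross(aim, side)
    return aim, up, side
```

With multiple children there is no single aim direction, which is exactly why averaging kicks in and why one node per chain behaves better.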
Technical Discussion » Fluid Simulation Retime
There are a few ways you could try it:
If I were starting from scratch, I'd probably have built the base water from a small ocean instead of a full FLIP sim. The ocean spectra are faster and easier to set up and much easier to loop and tile. Then you could add your wave and mist as a smaller, more discrete simulation on top of that. This is old, but it shows how to set something like that up: https://www.youtube.com/watch?v=fx8Y9NwBkTI
But given that you've already got something the client likes, I'm guessing you don't want to start over. Is building the loop in comp not an option? It's going to be much easier than doing the loop in CG.
Lastly, if you're stuck doing it in CG, use a time shift to offset a second version of your cache and then blend between the two offset sims. The basic technique is something like this:
https://www.youtube.com/watch?v=zYBSB1AUibQ&t=3s
You can add additional noises and random attributes to change the way points are deleted and how the sims blend between one another.
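The crossfade between the original cache and its time-shifted copy boils down to a per-frame blend weight. A rough Python sketch of that weight; the function name, smoothstep easing, and frame numbers are my own choices, not from the video.

```python
def loop_blend_weight(frame: float, loop_len: float, blend_len: float) -> float:
    """Weight for crossfading a time-shifted copy of a sim cache over
    the last `blend_len` frames of a `loop_len`-frame loop.
    0.0 = original cache only, 1.0 = offset copy only."""
    f = frame % loop_len
    start = loop_len - blend_len
    if f < start:
        return 0.0
    t = (f - start) / blend_len
    # Smoothstep easing so the transition ramps in and out gently.
    return t * t * (3.0 - 2.0 * t)
```

Feeding this weight into a point-wise blend (or per-point deletion threshold, randomized with noise as mentioned above) hides the seam where the loop wraps.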
Technical Discussion » Compute rig pose errors
Nevermind. It was simple, but annoying.
SideFX has changed the default value of the "Operator Path" field from ".." to "."
So in the context of wrapping it in an HDA, you'll want to switch it back to promote parameters up one level to the visible layer of your HDA.
Technical Discussion » Compute rig pose errors
Been a while since I've set up Kinefx HDAs, but I remember just laying down a compute rig pose node inside an HDA and setting the default state of the HDA to "kinefx__rigpose" and promoting the parameters on the compute rig pose node. That would give me the ability to select joints and have the transform parms show up on the HDA.
But in 19.5 when I lay down the compute rig pose node and promote parameters, I get the following error:
Traceback (most recent call last):
File "kinefx::Sop/computerigpose/promoteparms", line 2, in <module>
File "/Applications/Houdini/Houdini19.5.640/Frameworks/Houdini.framework/Versions/19.5/Resources/packages/kinefx/python3.9libs/kinefx/poseparms.py", line 1207, in promoteComputeRigPoseParms
appendPoseParms(t_node, save_to_definition, parmname)
File "/Applications/Houdini/Houdini19.5.640/Frameworks/Houdini.framework/Versions/19.5/Resources/packages/kinefx/python3.9libs/kinefx/poseparms.py", line 1200, in appendPoseParms
node.setParmTemplateGroup(ptg)
File "/Applications/Houdini/Houdini19.5.640/Frameworks/Houdini.framework/Versions/19.5/Resources/houdini/python3.9libs/hou.py", line 15802, in setParmTemplateGroup
return _hou.Node_setParmTemplateGroup(self, parm_template_group, rename_conflicting_parms)
hou.OperationFailed: The attempted operation failed.
Parameter name 'enable#' is invalid or already exists
Any ideas? I'm probably forgetting something simple.
Technical Discussion » Vellum with multi solver ?
Yes, unfortunately this is a limitation. The RBD and Vellum solvers can only solve one-way interactions, and I am not aware of any way around that. As the post above notes, there are ways to hack Vellum to do what you want, either packing grains or using the matchshape type in place of RBD, so you can keep everything in Vellum.
I'm sure this is something that SideFX is working on, but there will always be a limit to interactions in their current form since the solvers rely on such different methods of calculating their changes over time.
Houdini Lounge » How to work with freelancers with Indie licenses?
Yeah, that is what I've always done when I was a freelancer:
I kept an indie license for myself, since I met the requirements. I used that for all self-contained projects done by myself. If I was working with a larger studio in their pipeline, I either used their VMs and a license that they owned or I rented an FX license for the duration of the project.
There's no way to use an indie license as part of a larger pipeline.
Technical Discussion » Exposing visualizers in HDA
OK. Just had to spend a little time with the explanation here:
https://www.sidefx.com/forum/topic/54063/
Pretty simple: Just insert the code into the scripts panel, under the "On Created" event handler.
node = kwargs['node']
vis = hou.viewportVisualizers.createVisualizer(hou.viewportVisualizers.type('vis_marker'), hou.viewportVisualizerCategory.Node, node)
vis.setName(node.name())
vis.setIsActive(True, None)
vis.setParm('style', 3)
vis.setParm('attrib', 'v')
The visualizer is created when the HDA is dropped down.
Technical Discussion » Exposing visualizers in HDA
I've got an HDA with a few ramped attribute visualizers that I'd like to expose and be able to turn on and off at the level of the HDA. But I can't seem to figure out how to make them visible outside the HDA.
I saw some suggestions about the OnCreate python script, but couldn't figure out from the description what to do. My python skills are pretty low.
Any suggestions would be helpful.
Technical Discussion » Is it possible to export KineFX controls into maya?
It's a pretty vague question. The short answer is "not really".
There's no out of the box way to export rigs between DCCs other than FBX. You can convert a KineFX rig into an FBX file, but it's tricky to get something usable in another package.
The long answer is "yes, but it's gonna take a lot of work, and depends on what you want to do."
KineFX is nothing more than points and prims with an implied hierarchy, so you can definitely move that data to other programs. I haven't done it with Maya, but I have with Blender, and only for very specific cases. It involves a lot of custom scripting to convert the data from one package to another. Usually it's not worth the effort.
In general, rigs are specific to the program they are created in. That's (mostly) true for all DCCs.
Edited by made-by-geoff - Feb. 24, 2023 22:49:14
Technical Discussion » How to fade in smoke?
I'd try to do this in shading rather than in the sim. Get the pyro sim you're happy with and then animate the density in the shader to fade it in. It'll be faster to iterate through timing and speed variations.
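The fade itself is just a 0-to-1 multiplier keyed over a frame range and multiplied into the shader's density. A trivial sketch in Python; the function name and frame numbers are hypothetical.

```python
def density_fade(frame: float, start_frame: float, end_frame: float) -> float:
    """Linear 0 -> 1 density multiplier over [start_frame, end_frame].

    Before the range the smoke is invisible; after it, fully visible.
    Multiply the result into the shader's density parameter per frame.
    """
    if end_frame <= start_frame:
        # Degenerate range: treat as a hard on/off switch.
        return 1.0 if frame >= end_frame else 0.0
    t = (frame - start_frame) / (end_frame - start_frame)
    return min(1.0, max(0.0, t))
```

Because only shading changes, the cached sim never has to be re-run to try a different fade timing.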
Technical Discussion » Cloth going through the Mesh ?
Mesh resolution, bend stiffness, and compression stiffness are the main parameters that affect how cloth behaves. Stretch stiffness also makes a difference, but less so than the others for typical fabrics like cotton, denim, leather, etc. If you are trying to simulate stretchy fabric, of course, it's a different story.
More recent versions of vellum do a better job working independently of mesh resolution. A medium resolution mesh should behave "similarly" to high resolution meshes, but will lack finer detail and wrinkling. But mesh resolution will definitely create an upper limit on how detailed your wrinkles can get.
This is a good starting point for understanding how different settings affect the look of different types of fabric:
https://www.sidefx.com/tutorials/h17-vellum-cloth-lookdev-tips/
Technical Discussion » Guide Groom node not procedural?
I also try to use the guide curve advect tools as much as possible. I find I can do a lot of intricate shaping that stays more procedural. Generally I'll only do manual grooming as the very, very, very last step for really fine details, after I've gotten approval on the overall groom.
And although it doesn't help directly, I always try to break my groom up into multiple, logical patches. I might have separate patches for the sides, back, top, and hero locks in the front. For a furred creature I did recently I had a dozen different grooms for different areas of the body. That way, at least, you're losing less any time you have to update.
Technical Discussion » Cloth going through the Mesh ?
Posting a hip file would help, but you can tell from the jittering all along the bottom of your mesh that vellum is fighting against itself. That type of jittering is usually caused by settings that are "un-convergeable". For instance, if you set your bend and compression stiffness too high and then try to get a piece of cloth to fall to the floor in a heap, you may get that kind of jittering because the cloth wants to compress into a smaller space, but you've given it a high bend and compression stiffness that is preventing that from happening. The result is jitter.
If I had to guess, I'd say there is something wrong with your invisible collider that is causing both the jitter and the inter-penetration of the top of the cloth. But hard to tell from just a screen capture.
Technical Discussion » KINEFX Hierarchy "Path" Attribute
I'm not in front of my workstation at the moment, but you can load up the KineFX modules in VEX by starting your code with:
#include <kinefx.h>
That gives you a function called getancestors() that returns the point numbers of all ancestors of the input point number:
int[] getancestors(int geo; int pt; int maxdepth)
From there you can build your string by modifying Tomas' script above.
It's overkill in this case, but the benefit is that it will work in other setups where you have branching.
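For illustration, the same ancestor walk is easy to sketch outside of VEX. Here is a hypothetical Python version where a parent-index array stands in for the point hierarchy (-1 marks the root); the function names are mine, not the kinefx library's.

```python
def ancestors(parent, pt):
    """Walk a parent-index array, mimicking a getancestors()-style call:
    returns the ancestors of point `pt`, nearest first."""
    out = []
    p = parent[pt]
    while p != -1:
        out.append(p)
        p = parent[p]
    return out

def joint_path(parent, names, pt):
    """Root-to-joint path string, e.g. "hips/spine/chest"."""
    chain = list(reversed(ancestors(parent, pt))) + [pt]
    return "/".join(names[i] for i in chain)
```

Because each joint only ever looks up its own ancestor chain, this handles branching skeletons for free, which is the advantage mentioned above.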
Houdini Lounge » Chat GPT and Houdini
tamte
that's what I like about ChatGPT, it's like talking to a knowledgeable person, but you should always take the answers with a grain of salt, whether it's a person or AI
I'll say I've been using chatGPT and Midjourney a lot recently and I find them both very useful tools. But I find the anthropomorphic allusions not particularly helpful. I'm not sure I'd even go so far as to call them "semi-intellectual".
It is amazing how far the algorithms have come so quickly, and we can debate over beers how much further they can go in their current construction, but I think it's more helpful to think of them as information synthesizers, not intellects. That's the idea behind "Google on steroids." You ask Google a question and it gives you a thousand possible links. ChatGPT takes the top 50 links and synthesizes their content into a single coherent response (with a gloss of conversational language). Very useful, but also limited.
That's why both platforms are vastly more useful for broad, well-documented queries: "How do I simulate clouds in VEX?" (ChatGPT), or "Determined mice magicians vie against robot cats for control of the Martian landscape" (saw that one the other day on Midjourney... brilliant!). But refining and art-directing the Midjourney results, or nudging the queries into uncharted territory, can be a painfully frustrating experience. Midjourney makes pulling swipe for look books much faster and easier, but I'm not worried about it replacing concept artists any time soon.
To me the ethical question in any near future is about assuring attribution for the people these platforms are drawing from, not the imminent end of human creativity, let alone the rise of the machines.
Technical Discussion » Have a problem with IK chain
From the video it looks as if the IK chain is working properly. The change of orientation when you move the knees is a function of all IK chains. They need a pole vector to tell them in which direction to solve. That is why you would normally offset the knee control / twist control joints (normally forward) in the direction you want the IK joint to solve.
You can do some optional things to help maintain the orientation. A lot of times I'll offset the knee joint forward and then parent it to the ankle. That usually helps it stay forward of the ankle and keep a consistent orientation. But in complicated movements you often have to animate the knee to maintain the proper orientation.
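The "offset the knee control forward" idea can be written down directly: push the pole target away from the hip-to-ankle line, through the knee, which keeps the IK solve plane stable. A rough Python sketch; the function names and the offset convention are mine.

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(v, s): return tuple(x * s for x in v)

def pole_vector_pos(hip, knee, ankle, offset=1.0):
    """Place a pole target `offset` units beyond the knee, along the
    direction from the hip->ankle line out through the knee."""
    axis = sub(ankle, hip)
    # Project the knee onto the hip->ankle line.
    t = dot(sub(knee, hip), axis) / dot(axis, axis)
    closest = add(hip, scale(axis, t))
    # Direction from the line out through the knee ("forward").
    direction = sub(knee, closest)
    m = dot(direction, direction) ** 0.5
    return add(knee, scale(direction, offset / m))
```

This assumes the knee is not exactly on the hip-to-ankle line (a fully straight leg), in which case the direction is undefined and a fallback forward axis would be needed.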
Houdini Lounge » How to modify attributes of weights through Vop
Use a Capture Attribute Unpack SOP first to get attributes you can work on. It unpacks and reorganizes the capture data and creates a group of detail and point attributes that together define the joint name, a joint index, and the corresponding weight for each joint.
You can then work on the weights normally in vex or a VOP. When you're done, use a Capture Attribute Pack to repack the attributes.
https://www.sidefx.com/docs/houdini/nodes/sop/captureattribunpack.html [www.sidefx.com]
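Once unpacked, the most common edit is renormalizing each point's weights so they sum to 1, which skinning deformers expect. A hypothetical Python sketch with each point's weights stored as a {joint_index: weight} dict; the real unpacked data is a set of index/weight array attributes, so this only illustrates the idea, not the actual attribute layout.

```python
def normalize_weights(per_point_weights):
    """Rescale each point's joint weights to sum to 1.0.

    Points with all-zero weights are left unchanged rather than
    dividing by zero.
    """
    out = []
    for point in per_point_weights:
        total = sum(point.values())
        if total > 0:
            out.append({j: w / total for j, w in point.items()})
        else:
            out.append(dict(point))
    return out
```

The same loop structure works for other edits (pruning tiny weights, remapping joints) before repacking with the Capture Attribute Pack.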