A little hard to tell exactly without a hip file, but I'd guess the half sphere has something to do with your sphere setup. The alembic you're bringing in defaults to a packed alembic: instead of loading as individual points and prims, it gets loaded as a single primitive per object. This helps speed up load and playback times, but it means that when Houdini looks at it, it doesn't really see a sphere; it (likely) sees a single point.
Then you're piping that into a sphere node, which uses its input to set bounding regions. It may be that since it's not reading a full sphere out of the alembic node, it's not creating a proper sphere with the sphere node.
Instead, append a convert node after the alembic node and set it to convert to "Polygons". Then pipe that directly into the scatter node.
And lastly, I'm guessing the smoke isn't rising because, in addition to setting a density attribute, you also need to set a temperature attribute before you rasterize the attributes. With no temperature, there's nothing to drive the movement of the smoke.
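To illustrate why temperature matters, here's a toy model in plain Python (illustrative only, not Houdini's actual pyro solver): buoyancy is roughly proportional to temperature, so with temperature at zero nothing ever pushes the velocity upward.

```python
# Toy buoyancy model (illustrative only -- not Houdini's actual solver).
# Buoyancy force is proportional to temperature, so a parcel with zero
# temperature never gains upward velocity.
def step_velocity_y(vy, temperature, buoyancy=1.0, dt=0.04):
    """One integration step of a smoke parcel's vertical velocity."""
    return vy + buoyancy * temperature * dt

vy_cold = 0.0
vy_hot = 0.0
for _ in range(10):
    vy_cold = step_velocity_y(vy_cold, temperature=0.0)  # no temperature attr
    vy_hot = step_velocity_y(vy_hot, temperature=1.0)    # temperature set

print(vy_cold)  # stays 0.0 -- the smoke never rises
print(vy_hot)   # positive (~0.4) -- upward motion
```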
Post a hip file if it's still not working.
Technical Discussion » having troubles putting smoke sim on alembic sphere
-
- made-by-geoff
- 84 posts
- Offline
Solaris and Karma » Deadline submitter and plug in
It's about time, AWS...
Deadline 10.4.0.10
This is a hotfix release to address issues in the Karma plugin and the Repository installer.
Application Plugin Improvements
Karma
Added Houdini Karma submitter and plugin.
Fixed the bug where the frames argument was missing from the Karma standalone plugin.
Rigging » APEX dog leg / multiple IKs
Thanks Dan, I'll take a look at this as well.
But I also have to say that I tried Esther's suggestion and it's become my main way of building up component scripts. It still feels a little beta-ish, understandably, but it really cuts down on the repetitive work of finding and making nodes and connections.
The script below is just a simple test script that adds an additional transform object and parents the root of a joint chain to it. I haven't had time yet to go back and experiment with a more complex component script, but the basic idea should hold.
Just build up your custom script manually inside an edit graph node. Then, when you've got your tool working, create a component script and copy your manual setup into a graph::template node. Use graph::AddSubnet to add the template to the graph. Then the only part you have to redo is connecting the inputs and outputs and promoting any necessary parameters (still annoying, but much easier than rebuilding the tool from scratch with a whole bunch of addNode and findAndConnect nodes).
Solaris and Karma » No cryptomatte with Render Gallery ?
Definitely log a bug for this. The layer data is there. I downloaded your file, and if you view the render gallery EXR and switch the viewer to the Cryptomatte channel, you can clearly see the cryptomattes. But for some reason the Cryptomatte node in Nuke isn't reading it properly. There must be a formatting bug in the way it's being written into the file.
Technical Discussion » Custom volume AOVs in karma
The short answer is that there is a Karma AOV node you can add to a karma material. It'll feed any input in as a custom AOV.
The long answer is that USD doesn't support material-based AOVs. It generally expects either an LPE expression or a primvar. You can add custom AOVs using either of those via the additional render vars node.
The Karma AOV node is something SideFX built to get around this limitation in the USD spec.
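For reference, this is roughly what the LPE and primvar routes look like at the USD level (a hedged sketch: the render var names and the LPE string below are illustrative, not from any particular scene):

```usda
#usda 1.0

def Scope "Render"
{
    # An AOV driven by a light path expression
    def RenderVar "diffuse_lpe"
    {
        token dataType = "color3f"
        string sourceName = "C<RD>[<L.>O]"
        token sourceType = "lpe"
    }

    # An AOV driven by a geometry attribute (primvar)
    def RenderVar "my_custom_primvar"
    {
        token dataType = "float"
        string sourceName = "my_custom_primvar"
        token sourceType = "primvar"
    }
}
```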
Animation » Rigging help
Despite my comment above, I'm very optimistic about APEX. By dropping down tags and auto-rig components, I can build up a quick, very usable rig in under 15 minutes. It's kind of mind-blowing.
The problem is that it is still in beta and when you need something that the auto-rigs don't give you, there's a steep difficulty curve from laying down auto-rig nodes to doing anything else.
APEX is still missing a lot of tools to make it easier to build the kind of tools you're talking about. But it's a very cool foundation. It needs time to get out of beta and then I think we'll start to see a lot more user-friendly tools. It's a very strong system to build on.
Rigging » APEX dog leg / multiple IKs
If anyone from Sidefx is reading this, is it possible (in theory) to automate the creation of the component script? What I find most cumbersome at the moment is that you essentially have to build your rig tools twice. It's like writing instructions once in English and then having to do it again in a completely different language, only to recreate what you've already done in testing.
But it seems possible to me (again, in theory) that if I could package up what is new from my manual edits, say in a subgraph where I've set all my ins and outs, there should be a way to generate a component script automagically. For me that would be a game-changer...
Rigging » APEX dog leg / multiple IKs
If anyone's interested, I spent some time last night and worked up a basic manual setup. It's not production-ready: it would need limits on the upper-leg control, and the various transform blends are there to allow some options for foot follow down the road. I also still need to convert it into a component script (by far the most tedious part of APEX), but the basic functionality is there.
I'm still interested if there is a more efficient way.
Edited by made-by-geoff - Nov. 12, 2024 09:18:09
Rigging » APEX graphs as geometry
I'm not an expert in APEX by any stretch. But I've started trying to duplicate my rigs from other programs starting with 20.5. My short answer is that there's no immediate point to it from a user standpoint (or at least I haven't seen one). You're not really going to be moving around the rig graph, unless you want to make fun animated graphs to show off to your friends.
However, what's interesting is that SideFX found a way to treat the process of rigging in a way that is very similar to the way it treats everything else in SOPs: as the manipulation of points, prims, and vertices which all carry various attributes around.
The biggest ah-ha moment for me was beginning to realize that I can store all kinds of important information about the way a rig should behave on the skeleton for that character instead of within the rig itself. Then I can create a series of modular rigging components that read those attributes and can update their behavior based on them.
To me that's kind of mind-blowing. It means that a much larger percentage of a base rig can be set and re-used again and again and again across multiple characters, with a much smaller percentage of the rig needing to be customized for each character. And that within that base rig I can generate a much wider range of behaviors from the same rig, only by changing the attributes I pipe into the rig.
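As a rough sketch of that idea in plain Python (not actual APEX code, and all the names here are made up): behavior attributes live on the skeleton data, and one generic component adapts to whatever each joint declares.

```python
# Plain-Python sketch of the idea (not APEX; all names are hypothetical).
# Behaviour is stored as attributes on the skeleton, and one generic
# component configures itself from whatever each joint declares.
skeleton = {
    "hip":   {"solver": "multi_ik", "stretch": False},
    "ankle": {"solver": "two_bone_ik", "stretch": True},
}

def build_component(joint, attrs):
    """A reusable 'rig component' driven entirely by joint attributes."""
    desc = f"{joint}: {attrs['solver']}"
    if attrs["stretch"]:
        desc += " +stretch"
    return desc

rig = [build_component(j, a) for j, a in skeleton.items()]
print(rig)  # ['hip: multi_ik', 'ankle: two_bone_ik +stretch']
```

The point of the sketch: changing only the attributes piped in changes the rig's behavior, with the component code itself untouched.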
Edited by made-by-geoff - Nov. 11, 2024 20:43:06
Animation » Rigging help
IMO, in its current state APEX is overkill for this. For previs, it will be much faster and easier to set up a simple KineFX rig. It won't be fancy, but it'll be less of a headache than trying to learn APEX while it's in beta (speaking as someone learning APEX in beta).
Rigging » APEX dog leg / multiple IKs
Any help will certainly be appreciated, but I think I'm leaning toward the idea that I have to dive down into the graph to do this. But maybe someone can let me know if my thinking is right. I'd much rather find a way to set this up with the auto-components.
From what I can see it looks like the logic behind the auto-components is that they take the Guides.skel and generate point transforms to represent any joints found in the guide skeleton. It then looks at the difference between the rest position and post-animation position of those point transforms and applies that back to the Base.skel, which in turn drives the mesh in the bone deform.
Part of the problem then in my set up above is that at some level you have to create the second IK chain (normally for me this would be the 2-bone IK that controls rotation of the upper thigh and knee) from the RESULT of the main spring/multi IK, not from the original guide skeleton.
In kinefx I'm able to create that second IK chain from the post-solve joint positions of the multi-IK. But so far I can't see any way of using the auto-components to calculate things post-multiIK, since from what I can see each auto-component is looking back to the original guide skeleton for its starting position. Am I mistaken about this?
I think I can at least imagine a set up where in the graph, I use the post multi-IK transforms to create new helper point transforms, parent those within the graph and add a new IK solve for those. No idea yet how I go about doing that, and I would really prefer not to, but that's my current plan if no one can suggest something better.
Edited by made-by-geoff - Nov. 10, 2024 11:28:19
Rigging » APEX dog leg / multiple IKs
I played around with APEX in 20.0 and while it was exciting, it was missing some key functionality at the time and I dropped it for a bit. Some of those issues have been solved in 20.5, so I'm diving back in. Once I move past the basics, I have to say I get totally lost about the best way to proceed. An example:
The normal way I prefer to rig quadruped hind legs is with a double-IK setup. You use a multi-IK setup between the hip and the foot, and that gives you the basic movement of the leg (seen below). However, you don't have any control over the angle of the leg bones. The best way to solve that is to create a second IK setup from the knee down to the foot. Then, when you parent the root of that second IK setup to the hip, rotating the hip gives you nice control over the rotations of the leg bones. Hopefully my sketch below makes sense.
I know there are other ways to rig a quad leg that would be easier to set up in APEX, but this is our preferred setup, and also, I think if I could solve this, I'd have a better understanding of how to customize APEX rigs.
I tried just adding 2 autorig components to the guide skeleton without any modifications, but they seem to conflict and the whole rig either breaks or collapses. I don't know if I should be setting up additional helper joints in the guide skeleton and parenting them up there, or if I can only do this by diving down into the graph with some component script (which I still don't understand very well).
Any thoughts about the best way to approach this would be a big help. At the moment I'm just not sure which direction to even start in.
Edited by made-by-geoff - Nov. 9, 2024 10:40:48
Technical Discussion » Vellum collision detection
I can see I'm not the first one to ask this, but I haven't seen a good answer anywhere.
I've got a vellum setup with multiple colliders and need to track particle age off collision with one particular collider. But none of the collision attributes seem to update: hitnum remains at 0 regardless of collisions.
I've tried both a SOP solver and DOPs, and both behave the same.
Anyone found a way around this?
Aside from the more immediate question, is there also a way to record which object collided so I can chart time based off that collision? Is there no hitpath in vellum?
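One workaround I can imagine, sketched in plain Python (the same distance test could live in a SOP solver Python node; the padding value is a made-up example): flag a particle as colliding when it comes within a small threshold of a specific collider's surface, instead of relying on hitnum.

```python
import math

def check_hit(p, collider_center, collider_radius, pad=0.01):
    """True when point p is within `pad` of a sphere collider's surface."""
    return math.dist(p, collider_center) <= collider_radius + pad

# A particle just above a unit-sphere collider at the origin:
print(check_hit((0.0, 1.005, 0.0), (0.0, 0.0, 0.0), 1.0))  # True
print(check_hit((0.0, 2.0, 0.0), (0.0, 0.0, 0.0), 1.0))    # False
```

Record the frame of the first True per particle and you can derive age-since-collision, and by testing each collider separately you also get back the "which object did I hit" information that hitpath would normally give.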
Edited by made-by-geoff - Sept. 19, 2024 21:42:09
Solaris and Karma » List of render vars?
Is there a list of the raw render var source names?
ray:hitP
ray:hitN
ray:hitN_camera
Is there any way to author new sources? Or are they set by the renderer?
Solaris and Karma » Karma AOVs won't work inside a subnet (and therefore HDA)
For anyone else searching...
I couldn't find a way to do it with references and overrides, but I found this handy bit of code here [www.sidefx.com]:
You'd probably want to adapt this with an interface for the prim pattern, but right now it runs over all prims and swaps out the given surface shader in any material.
from pxr import UsdShade

node = hou.pwd()
ls = hou.LopSelectionRule()
primpattern = "*"
if primpattern != "":
    ls.setPathPattern('%matfromgeo({})'.format(primpattern))
paths = ls.expandedPaths(node.inputs()[0])
stage = node.editableStage()
for path in paths:
    basemtl = UsdShade.Material.Get(stage, path)
    utilShader = UsdShade.Shader.Get(stage, '/materials/karmtl_util/util')
    for output in basemtl.GetSurfaceOutputs():
        if "outputs:kma:surface" in output.GetFullName():
            output.ConnectToSource(utilShader.ConnectableAPI(), "surface")
Solaris and Karma » Karma AOVs won't work inside a subnet (and therefore HDA)
Maybe I should back up here...
We're trying to explore introducing Karma as a production renderer and I'm trying to figure out how to build a set up for rendering out separate Beauty and Utilities passes, where the Utilities passes contain a number of custom AOVs that remain consistent across a given project. These AOVs are a mix of AOVs that are generated from geometry attributes/primvars, some that are material-based, and a number that are included in the standard Karma AOVs.
The problems I keep running into are: 1. USD doesn't seem to recognize material-based AOVs as a concept, so a lot of this feels very hacky. 2. Karma (as far as I can tell) doesn't support scene-wide AOV material overrides.
Here's what I've explored:
1. Use the Karma AOV node as a way around the lack of material-based AOVs in USD. This works, but it would require a fairly large network to be included in every material in a scene. To make matters worse, you can't roll that network up into a subnet/HDA or the Karma AOV nodes stop working (see post above). I also don't love that this requires me to define AOVs in two places, once in the material network and again in the render settings (but I can live with that). The bigger problem is the inability to roll this up into something easy to lay down and repeat, and the general unwieldiness of it all.
2. I would prefer a setup where I can globally override all the materials in a scene with a standard AOV material and separate render settings where I can set all my standard and custom AOVs. The problem here is that this would require an override for just the surface shader, not the displacement settings and any per-material render properties.
-- 2a. I can do this per material with an edit material network node, but not globally.
-- 2b. I could try a sublayer to override the materials, but this would override the entire material, not just the surface shader settings and I lose all my displacement settings.
Anyone using Karma in a studio setting and figured out a way to get around this?
Edited by made-by-geoff - Aug. 6, 2024 08:56:51
Solaris and Karma » Karma AOVs won't work inside a subnet (and therefore HDA)
Trying to build out a set of material-based AOVs. Ultimately I'd like this to live as an HDA, so I can drop down a complete set of AOVs quickly.
I set them up with the Karma AOV VOP, which works fine... until I drop them into a subnet. Once I do that, whatever the Karma AOV node is doing under the hood to create the Render Vars breaks, and my AOVs no longer work.
There's not a lot of documentation on exactly what the Karma AOV node is doing. Anyone have any ideas on why it would stop working from within a subnet?
The project below shows the problem. If you switch between the outputs of the Karma AOV node and the subnet that has the Karma AOV node inside, the Render Vars appear and disappear.
Technical Discussion » Parting lines for hair
I think splitting your groom into sections is almost always a good idea and can help with parts, but the partition tool is still useful to know how to use and depending on what you want to do, can give you looks that are difficult to achieve with separate grooms.
The documentation isn't great on the tool, but I'm pretty sure that what is happening under the hood is that the partition tool is affecting the guide interpolation of the hairs. So it isn't meant to alter the guides themselves. If you apply it only to a guide groom alone, you won't notice much of a difference. It doesn't make an obvious part in the guides all by itself.
However, if you have guides that have been groomed with other tools to create a noticeable change in direction, normally when you add a hair generate node, it will try to interpolate through the change in direction as it adds hairs in between the guides. You'll get a part, but if your hairs are a lot denser than your guides they'll smooth out the part created by the change in direction of the guides.
The guide partition forces the hair generate to perform a more abrupt shift in direction as it interpolates through the guides across any partition lines you create with the partition node, resulting in a more obvious part. Again, this won't be very noticeable in the guides, but once you generate hairs from them, it does create a noticeable effect.
Below I'm showing the resulting hair generate with and without the guide partition on the preceding guide groom.
It's not a perfect analogy, but I think of it like a crease line in hard-surface rendering. Normally your renderer uses your normals to smoothly interpolate across what are, in reality, faceted polygons to create a smoothly curved surface. But if you add a crease line, the renderer knows to interpolate the normals across that line in such a way that you get a more abrupt change and an apparently sharper edge along that line.
Now can you split your groom? Absolutely. But now you've lost the ability to interpolate across the part. This can be a pain if you have areas where your part gradually fades out into an area without a part or with multiple parts (like as a part reaches the whorl).
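A toy 1D version of that interpolation idea in plain Python (illustrative only; the numbers and the linear blend are made up, not what hair generate actually computes): hairs blend direction between two guides, and a partition forces an abrupt switch instead of a smooth blend.

```python
# Toy 1D analogy (illustrative only): a hair at position x in [0, 1]
# interpolates direction between a guide at 0 and a guide at 1.
def hair_dir(x, left_dir=-1.0, right_dir=1.0, partition=False):
    if partition:
        # abrupt switch at the part line, like the guide partition
        return left_dir if x < 0.5 else right_dir
    # smooth blend, which washes the part out as hair density rises
    return left_dir + (right_dir - left_dir) * x

print(hair_dir(0.25))                  # -0.5 (smoothed toward the middle)
print(hair_dir(0.25, partition=True))  # -1.0 (keeps the full left direction)
```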
Edited by made-by-geoff - June 22, 2024 13:25:18