Update: the help card for the Top Fetch node notes that it triggers a separate process to run the fetched top network, which explains why I see work being done but no output.
I found an "In-Process Scheduler", which seems like a promising way to keep everything running as a single thread within my main hip file, but using it seems to cause the TOP Fetch to fail.
I'm aware that the way I'm thinking about all this is very ROP-like, but in my case I want to trigger a fairly complex scene graph in a very specific way, and having feedback printed to a shell/console helps me understand what's happening when.
Search results: 206 posts found.
PDG/TOPs » One topnet calling another
- dhemberg
- 207 posts
- Offline
Hi;
I have a TOPnet (let's call it Tops_1) in my scene file whose job it is to wedge and cook some geometry. I use a few Python Script nodes in various places in this top graph to echo some status messages to the console as the tree cooks, sort of as a status updater. This works great.
Then, I have a separate topnet (say, Tops_2) that I'm using to more broadly do some tasks like trigger a few different render passes, do a composite, make a movie file, etc.
I would like to have a way to call Tops_1 from Tops_2...to bake my geometry before doing my render. I can do this using a TOP fetch, and it works, but I no longer see my python output status.
The various settings for in and out of process cooking leave me a little confused as how to best configure this. Is it possible? Or am I misunderstanding how these are meant to work? At the moment I would be happy just doing everything as one sequence of processes, rather than introducing any parallel processing.
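For context on those status messages: each Python Script TOP just prints a line as its work item cooks. Here is a minimal, runnable sketch of the idea; the FakeWorkItem class and the message format are my own inventions for illustration, since in the real node PDG supplies the `work_item` variable:

```python
# Minimal sketch of the kind of status echo I use in my Python Script TOPs.
# In the real node, PDG injects a `work_item` variable; here a tiny stand-in
# class makes the snippet runnable outside Houdini.

class FakeWorkItem:
    """Stand-in for pdg.WorkItem, just enough for the log line."""
    def __init__(self, name, index):
        self.name = name
        self.index = index

def status_line(stage, work_item):
    """Format the one-line status message printed as the graph cooks."""
    return f"[TOPS] {stage}: {work_item.name} (wedge {work_item.index})"

# Inside the actual Python Script TOP, the body is just:
#     print(status_line("baking geometry", work_item))
# and the output shows up in the console when the node cooks in-process.

if __name__ == "__main__":
    print(status_line("baking geometry", FakeWorkItem("wedge_3", 3)))
```

It's exactly this console output that disappears once the fetched network cooks in a separate process.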
Solaris and Karma » Karma XPU failure on 3090ti
- dhemberg
- 207 posts
- Offline
Hm, unfortunately neither of these things gets me unstuck, and I'm still left not understanding where I might be going off the rails with my scene (though I am grateful for the reply!)
I have a vague awareness that USD can rapidly create complexity via time samples if one isn't careful. So far, I haven't been particularly careful about the use of wrangles and python nodes in my scene, and I see several of them have little clock icons next to them, despite the intention of the code within to just do something once (i.e. it is not my intent to animate a camera position, though I am using a wrangle to set it).
How can I avoid creating time samples via the use of parameter expressions and wrangles? I can just use a timeShift at the end of everything to effectively kill all animation, but this seems blunt and I WOULD like to animate part of my scene. What are ways I can control this more thoughtfully?
(I'm unsure this question will actually get me going vis a vis my XPU renders, but I gotta start debugging somewhere...)
Edited by dhemberg - Oct. 18, 2022 14:02:25
Technical Discussion » How to read EXR metadata/header attribs?
- dhemberg
- 207 posts
- Offline
I was querying an attribute called "whiteLuminance"; when I middle mouse on my file, I see this:
When I load a recent Karma render and middle mouse on it, I see:
Is it the case that your python should be asking for "renderTime_s" rather than "renderTime"?
EDIT: Also, here is my Python; note that I first ask for "Attributes", then parse that:
# This is a shenanigan to read the "whiteLuminance"
# attribute from my hand-made IBL texture, and set the
# intensity on my light accordingly.
import ast

this_node = hou.pwd()
ibl_reader = hou.node(this_node.evalParm("ibl_cop"))  # path to my COP2 file reader

# The ast module casts a string as a dictionary,
# which is how Houdini represents exr metadata.
metadata = ast.literal_eval(ibl_reader.getMetaDataString('attributes'))

# Read the attribs I'm interested in.
whiteLuminance = metadata['whiteLuminance']
measuredLux = metadata['MeasuredLUX']
Edited by dhemberg - Oct. 17, 2022 10:44:01
Technical Discussion » How to read EXR metadata/header attribs?
- dhemberg
- 207 posts
- Offline
Hi; I did, though it is pretty ugly. I used the approach @jsmack offered, as I never heard back about python bindings and oiiotool in Houdini. So, here is what I do:
--I make a COP2 network (mine lives in LOPS)
--File read the image I want metadata from
--In LOPS, I make a Python node
--I use python to get the COP2 file node, then use getMetaDataString() to read the full metadata header from the COP2 node.
--This returns a dictionary; I'm interested in the metadata attribute, e.g.
then proceed accordingly. I don't like this approach; if I could just read the metadata directly, without having to go through COPS for the image reader, that would be much better.
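For anyone following along, here is the parsing step in isolation; the metadata string below is a made-up stand-in for what getMetaDataString('attributes') returns:

```python
import ast

# Stand-in for the string Houdini's getMetaDataString('attributes') returns:
# a Python-dict-formatted string of the EXR header attributes.
metadata_string = "{'whiteLuminance': 12.5, 'MeasuredLUX': 50000, 'renderTime_s': 42.0}"

# ast.literal_eval safely turns that string back into a real dict
# (unlike eval, it only accepts Python literals).
metadata = ast.literal_eval(metadata_string)

white_luminance = metadata['whiteLuminance']
measured_lux = metadata['MeasuredLUX']

print(white_luminance, measured_lux)
```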
Solaris and Karma » Karma XPU failure on 3090ti
- dhemberg
- 207 posts
- Offline
Hi;
I have a scene which I'd like to use XPU to render. Prior to today, I was working on this scene on a machine containing one 1080 and one 1080 Ti graphics card. The scene typically failed to render on these GPUs, I think because it was running out of VRAM. My indication for this: when I switch the viewport over to Karma, I see two OPTIX processes and one EMBREE process; after a few moments, the OPTIX ones say "fail", and the render soldiers on with the Embree process only (and, of course, this is punishingly slow).
So, I decided to roll the dice and invest in a shiny new 3090ti with 24GB VRAM, which showed up today. I installed it, ensured I had the latest drivers installed, loaded my scene and switched the viewport to Karma. The scene sat in an "initializing" state for nearly two minutes; I saw both OPTIX and EMBREE processes initializing. Then, crushingly, the OPTIX one said "fail". I tried resetting Karma and restarting the render, only now I see NO Optix process at all. Oddly, my Task Manager shows the GPU under some amount of load, though Karma seems to indicate it isn't seeing the card at all.
Other than trying to divine what's going on using just these viewport indicators, what other things can I do to debug this to better understand what might be going on?
Edited by dhemberg - Oct. 16, 2022 22:23:37
Solaris and Karma » Configuring layers
- dhemberg
- 207 posts
- Offline
Ah, you're doing the USD export from within the SOP Modify; that's the bit I was missing. Thank you! Trying to remove the attribute and push it back over into LOPS doesn't seem to result in those attrs being removed (for reasons I can only guess at... nondestructive workflow something something).
Solaris and Karma » Configuring layers
- dhemberg
- 207 posts
- Offline
This is a great idea, thank you! I'm going to try this.
How does one reconfigure the layerSavePath after it's set? Like, ideally I would do a sopImport, using Rob's elegant method above, assign my materials and other render settings, then export a single frame. Then, set my layerSavePaths to different filepaths, remove extraneous data, and save the full length of the animation.
Also, how are you removing N and uv in a sopModify? I'm doing what I think is the obvious thing (unpacking geo as polys, attrDelete all vertex attribs), but it seems to not affect my geo, which is puzzling...
Edited by dhemberg - Oct. 6, 2022 22:37:52
Solaris and Karma » Configuring layers
- dhemberg
- 207 posts
- Offline
You're my hero, this is all so enlightening!
Oh I like this idea; how do I specify the separation of static tree geo vs. animation? Sounds like you're saying I export one (smaller) single frame USD per tree, then one (much larger) usd containing the animated tree, and then...?
BryanRay
Something else you might do to help yourself is to save a static version of the trees for use in layout and lighting, with the intent to layer the animation over in a later step. That way you need not load the heavier files until you actually need them, perhaps not until right before rendering.
Solaris and Karma » Configuring layers
- dhemberg
- 207 posts
- Offline
BryanRay
The simplest way to do that would be to use the Cache LOP and increase the Increment value to something greater than 1. Positions will be interpolated between samples, so you won't get steppy motion, but you might still see the occasional discontinuity in motion if some of the discarded samples contained critical parts of an eased curve.
I'm trying this out this morning, though it's not clear to me exactly how to set it up. If, say, I want to reduce my time samples to 1/5 the original amount and interpolate between each sample (something USD itself does?), do I set my Cache LOP behavior to "cache up to cooked frames", and the start/end/inc to something like $F-5, $F+5, 5?
Solaris and Karma » Configuring layers
- dhemberg
- 207 posts
- Offline
Ok this is terrific, thank you; I'm learning so much here. I did indeed fear that what I'm effectively doing is baking 100 copies of my tree into a single file if I'm exporting a 100-frame animation sequence of the tree (I understand these are time samples, but conceptually there's not a lot of opportunity for compression here). The tip about reducing the time samples is a good one, I'll experiment with how much I can get away with by stashing every, say, 4th frame or so.
Another question: Once I've written out my tree animations, loading them back in via a reference brings Houdini to its knees. This seems like the exact opportunity to leverage one of USD's allegedly most-powerful features, which I think is deferred loading of all this animation data...is that right? Is the term "payload" related to this idea? If so, do I need to do anything differently when exporting the animation? Or is it as simple as ticking off the 'load payloads' option on the Reference LOP? How do I, um, 'declare' something as a payload?
p.s. I was talking over my scene with a fellow coworker who has much more intimate experience with USD than I do, and he too brought up the UsdSkel idea, but then advised that my trees probably aren't an ideal case for that mechanism, unless I'm doing something very simple like just wiggling around the trunk a bit (which I am not).
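For concreteness, here is the payload idea as I currently understand it, written as a hand-authored .usda; the file names are made up, and I may be off on the details:

```usda
#usda 1.0

def Xform "tree_01" (
    prepend payload = @./tree_01_anim.usd@
)
{
}
```

The point, as I understand it, is that with the payload unloaded the prim still appears on the stage, but the heavy animation data in tree_01_anim.usd isn't read until the payload is explicitly loaded.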
Solaris and Karma » Configuring layers
- dhemberg
- 207 posts
- Offline
A followup question - which I realize is wholly different from my original one: when exporting my trees (which now works great: I get a separate file for each tree), I notice that when I export a frame range rather than a single frame, I get absolutely enormous files (or, well, they are quite a bit larger than the bgeos from which they derive). I know USD makes it very easy to get a lot of complexity going quickly, but I feel compelled to ask if docs exist that discuss strategies (if any) for optimizing this?
To be clear, my trees are swaying gently in the breeze: so, constant point count, no IK involved, just some moving vertices. Am I doomed to massive files?
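For a sense of scale, here's a back-of-the-envelope on why deforming geometry balloons: every time sample stores a full float3 position array. The point count and frame range below are hypothetical numbers I picked for illustration:

```python
# Back-of-the-envelope size of time-sampled point positions.
# Assumptions (made up for illustration): 500k points per tree,
# float32 x/y/z positions (12 bytes per point), 100 frames,
# one sample per frame, no compression.
points = 500_000
bytes_per_point = 3 * 4  # float32 x, y, z

frames = 100

per_sample = points * bytes_per_point  # one frame's worth of P
total = per_sample * frames            # the whole clip

print(f"{per_sample / 1e6:.0f} MB per frame")  # 6 MB
print(f"{total / 1e9:.1f} GB per tree")        # 0.6 GB
```

So even with a constant point count and no topology changes, per-frame position samples add up fast, which is why thinning the samples helps so much.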
Solaris and Karma » Configuring layers
- dhemberg
- 207 posts
- Offline
Solaris and Karma » Configuring layers
- dhemberg
- 207 posts
- Offline
Thank you so much! To try to say that back to you: past the point at which I'm importing my geo into LOPS (my single sopImport node), I am working with a stage that - at least in USD parlance - is designed to be non-destructive; changing layers past that point would constitute a destructive edit, and so while of course Houdini offers ways around this, I am bucking the intended way of accomplishing this to some degree.
I'm curious how one is meant to use the Configure Layer node - its existence sort of hinted to me that I should try doing this the way I did, but clearly I misunderstood its purpose a bit.
The "USD is meant to be nondestructive" co-existing inside Houdini where one can pretty much do anything at any time really twists my head around sometimes! I'm sympathetic as to how hard this must be to resolve from a developer standpoint.
Solaris and Karma » Configuring layers
- dhemberg
- 207 posts
- Offline
Amazing, thank you so much @robp_sidefx! I'm curious if I could trouble you to explain what I was doing wrong, or how this conceptually differs from what I was trying in my scene, as I'd love to understand this better. Inspecting the Scene Graph Layers panel, the data all seems the same (at least, to my untrained eye)...what is it about this setup that allows it to work?
Solaris and Karma » Configuring layers
- dhemberg
- 207 posts
- Offline
Continuing trying to explore this, here is a hip file that I think might be close but doesn't actually work yet (it writes out separate files per tree, but the files all seem to be empty!)
Curious what I'm doing wrong...
Solaris and Karma » Configuring layers
- dhemberg
- 207 posts
- Offline
Hmm, thanks for this! Though, to clarify my original description of what I'm doing: there is no "main tree"; I have ten wholly-different trees, each of which is different geo, different branch arrangement, etc. My understanding of variants is that I might use them in the case of, say, changing the seasons (i.e. leaf color) of a single tree, such that I have tree_01:summer, tree_01:autumn, etc.
But, what I'm trying to do first here is just manage these 10 different trees properly/efficiently. I think that means storing each tree in a different file, which I think == layers. Perhaps within each layer I can add variants to each tree for seasons (as an example).
I'm still green enough with USD that it *seems* like I should be able to specify an attribute on my tree geometry like @layerSavePath="/path/to/tree_01.usd", but I can't find any notion of this in the docs. All I can find is a description of the layerSavePath param on a sopImport node, which suggests I would need 10 different sopImport nodes in order to specify 10 different layer paths, which seems oddly un-Houdini-like. I'm sure I'm just not understanding something though...
Solaris and Karma » Configuring layers
- dhemberg
- 207 posts
- Offline
Hi;
I have a scene in which I'm generating a dozen or so procedural trees. At the moment, I lump all these trees together in one SOP network; each tree has a path like:
tree_01/
tree_01/leaves
tree_01/branches
tree_02/
tree_02/leaves
tree_02/branches
etc.
When I do a SopImport to bring all these in, I see the USD hierarchy I expect: tree_01, tree_02, etc.
I'd like to write all of these out to disk, to use them elsewhere. At the moment, I put them all in one USD file, trees.usd. When I import this file elsewhere, my hierarchy is preserved and everything works fine.
However, I 1) would like to animate my trees, and 2) suspect that lumping everything into one file isn't very USD-like. I'm aware of the concept of layers, and I see some options on the USD Export node in LOPS suggesting that if I have a layer save path specified, I can generate one top-level file that contains N references to layer files. So, it seems like I should consider having:
trees.usd
tree_01.usd
tree_02.usd
etc.
I can't tell if there's an attribute I can set on my tree geometry to help split them all into separate layers, though; the SopImport has a single layer save path parameter... can this be varied somehow based on a geometry attribute? Or is there another way I could accomplish this splitting?
Thanks!
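For reference, here is the kind of top-level file I'm picturing: a trees.usd whose prims each reference a per-tree layer. This is a hand-written sketch; the file names and prim paths are made up:

```usda
#usda 1.0
(
    defaultPrim = "trees"
)

def Xform "trees"
{
    def Xform "tree_01" (
        prepend references = @./tree_01.usd@
    )
    {
    }

    def Xform "tree_02" (
        prepend references = @./tree_02.usd@
    )
    {
    }
}
```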
Edited by dhemberg - Oct. 3, 2022 13:52:38
Technical Discussion » Labs trees + vellum
- dhemberg
- 207 posts
- Offline
As an update to my question: I've tried attaching my leaves to my animated tree mesh using a PointDeform node, which seems to work well - I can scrub the timeline and my leaves appear to rigidly 'stick' to my moving branches.
Doing this allows me to omit the AttachToGeometry constraint in my vellum network. When I sim now, though, I cannot seem to get any cloth-like flutter on the leaves at all. I admit that I don't fully understand the interplay between the pointDeform node and the vellum solver, but no matter how I adjust things like stiffness/bend/stretch, the leaves refuse to do anything other than rigidly stick to the branch.
Several attempts at googling this problem have turned up nothing useful that I can find, so trying again with posting here.
thanks!
Technical Discussion » Labs trees + vellum
- dhemberg
- 207 posts
- Offline
Hi!
I have a tree I've built using the labs tree tools. It's great! I would now like to make it sway gently in a breeze.
To do this, I take the curves produced by the labs tree gen network and set up a vellum network to move them around a little, treating them like hairs. Then, I use a PointDeform to use these hairs to move the tree geometry itself around. This works great (for elaboration, I'm following this [www.youtube.com] tutorial).
My tree has Maple-like leaves (unlike the aforementioned tutorial, which has long willow-like leaves). So, I try attaching these leaves to my animated branch using a vellum AttachToGeometry constraint.
Here's where my question comes in: what I see is my leaves sort of spinning around their root pin points freely, as though they're just point-constrained, with no regard to the direction in which they originally pointed. I'm curious how I might encourage them to behave more "leaf-like", without spinning freely around the branch.
I found this article [www.tokeru.com] by @tokeru, in which they describe using a copyToPoints to pin leaves to geometry per point, but this seems to presume a different construction method than is used by the labs tree tool.
Fundamentally I feel like this is a pretty obvious thing to want: a tree that moves around with some keep-alive. So, I'm curious how others might be approaching this, and what I might be doing wrong that leads me to have this leaf-spinning-around-root-points problem?
Thank you!