No, you can only use one or the other.
If you're careful, and know that what you're doing is "safe" (you'll have to do your own research to make that determination), you can accomplish something very similar: call editableStage, but then create an Sdf.ChangeBlock and use Sdf APIs on the "edit target layer" rather than Usd APIs on the Usd.Stage. Using this approach you could create all your prims inside an Sdf.ChangeBlock, end the change block, and then use the Usd APIs to author your collection.
Also, there is nothing in USD that can _only_ be done with Usd APIs, because every Usd API ultimately boils down to calling Sdf APIs in the C++ code. It just may require some investigation and experimentation to figure out which Sdf APIs you need to call to accomplish what the Usd APIs are doing.
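Here is a minimal sketch of that hybrid approach, assuming a Python LOP where the code can call hou.pwd(); the prim paths and collection name are placeholders:
```
import hou
from pxr import Sdf, Usd

node = hou.pwd()
stage = node.editableStage()
layer = stage.GetEditTarget().GetLayer()

# Author the prims with Sdf APIs inside a change block for speed.
with Sdf.ChangeBlock():
    for path in ["/geo", "/geo/prim_a", "/geo/prim_b"]:
        spec = Sdf.CreatePrimInLayer(layer, path)
        spec.specifier = Sdf.SpecifierDef

# Once the change block ends, Usd APIs can see the new prims,
# so the collection can be authored through the Usd.Stage.
coll = Usd.CollectionAPI.Apply(stage.GetPrimAtPath("/geo"), "myCollection")
coll.CreateIncludesRel().AddTarget("/geo/prim_a")
```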
Solaris and Karma » .stage() / .editableStage() usage
-
- mtucker
- 4537 posts
- Offline
The code there will never hit the "trying to lock a stage that is already locked" warning, because calling node.inputs().stage() doesn't actually "lock" the stage. It returns the stage for the input node, but immediately releases the lock. So when you then call node.editableStage() later, which does hold onto a lock, you're still safe from the "already locked" warning. If you were to call node.inputs().stage() again after calling node.editableStage(), _then_ you'd get the warning and suffer the poor performance. But you aren't doing that.
So is the second approach to the code still the right one? Yes.
Why? Because the first block of code is very misleading. It gives the impression that "read_stage" and "write_stage" are separate things. But they aren't. As explained in the horizon video, the input node and the python node share a single USD stage. So read_stage and write_stage will be the _same stage_. When you modify write_stage, you are also modifying read_stage. Not fatal, and maybe not even problematic in your node, but misleading.
If you know that each of your edits is guaranteed to be completely independent from all the others, it would be totally fine to do:
```
stage = node.editableStage()
for path in paths:
    # Read attribute per prim from stage
    # Write: create references in stage
```
This is actually identical to your first block of code in terms of behavior, but it makes clear that you are reading from and writing to the same stage. But if there is any chance that one edit will affect the "read" of a subsequent edit, then this may not be a good way to go.
I'll also say that the pure Python part (reading a bunch of values into a dictionary from which you later read them back to apply the write operations) will take an inconsequentially small amount of time. 99% of the time to run your Python LOP will be spent in the bit where you are creating the references, regardless of how you structure the rest of the code (don't believe me on this - measure it, because I may be wrong).
If you really want to make your node faster, figure out how to use the Sdf APIs on the node.editableLayer() instead of using Usd APIs on the write_stage. Especially if you're creating a lot of prims, this will make a huge difference. If you use this approach, you can use the new node.uneditableStage() method (added in 20.5.403) after calling editableLayer(). This will give you access to a readable stage that will be unaffected by the Sdf edits you make to the editableLayer.
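For example, a rough sketch of that Sdf-based pattern (Houdini 20.5.403 or later; the prim paths and reference file here are placeholders):
```
import hou
from pxr import Sdf

node = hou.pwd()
layer = node.editableLayer()         # lock the active layer for Sdf authoring
read_stage = node.uneditableStage()  # read-only stage, unaffected by the Sdf edits

# Creating many prims through Sdf inside a change block is much faster
# than creating them one at a time through Usd APIs on the stage.
with Sdf.ChangeBlock():
    for i in range(1000):
        spec = Sdf.CreatePrimInLayer(layer, "/geo/instance_%d" % i)
        spec.specifier = Sdf.SpecifierDef
        spec.referenceList.Add(Sdf.Reference("/path/to/asset.usd"))
```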
Solaris and Karma » Faster render times depending on geometry type?
-
- mtucker
- 4537 posts
- Offline
There is going to be some additional time spent loading files like FBX or OBJ, translating those files into equivalent SOP and then USD form, and running the LOP nodes... But this time is going to be pretty inconsequential compared to the time spent actually rendering.
So I'd advise doing the thing that makes the user workflow as smooth as it can be. Having to pre-convert source data into USD files, or pre-cache lights into a USD file are annoying extra steps for the user (or for you while you work on this HDA) which will cost way more time than doing the geo translation and light creation "just in time".
This balance may switch if the geometry is incredibly complicated. Of course the more complicated the data the longer the render takes, so it's still a small percentage of the time spent. But this time may become noticeable to the user. So don't actually trust anything I say. Measure how long these various steps take with your actual data.
Solaris and Karma » Write out USD with Animated Switch in LOPS?
-
- mtucker
- 4537 posts
- Offline
This has nothing to do with the switch, AFAICT. The difference is that on the "doesn't work" side, you have set the Output File to be blank. So the root layer of the stage (the one that adds the BASE_LAY as a sublayer) isn't being written out. So the SETDEC_LAY on the "doesn't work" side is simply not sublayering in BASE_LAY. The simplest fix here is to change the sublayer2 LOP to turn off "Edit Root Layer". This will add the BASE_LAY layer as a sublayer of the SETDEC_LAY layer you are authoring/altering with the cube/sphere/pig LOPs.
Solaris and Karma » Write out USD with Animated Switch in LOPS?
-
- mtucker
- 4537 posts
- Offline
There is no reason this shouldn't "just work". The Switch LOP input index parameter can definitely be animated. The time dependence may disappear downstream if you have a Cache LOP between the Switch and the ROP, but in that case the Cache LOP would have already cooked the animated switch value. Keep in mind that USD scene graph hierarchy cannot change over time, so if one switch input creates /cube and the other switch input creates /sphere, and your input index is "$F%2", saving out the first two frames will result in a USD file with both /cube and /sphere in it. But the ROP has the "track prim existence to author visibility" option, which will mark the sphere invisible on frame 1, and the cube invisible on frame 2.
Solaris and Karma » Clone control panel error
-
- mtucker
- 4537 posts
- Offline
I'm sorry, I have not seen this myself. And nobody else is chiming in here so I assume it's not a terribly common problem. That's going to make it tough to track down...
By the IP address I'm assuming this is a local clone you are creating? Have you got any unusual plugins running in Houdini, or is this basically a vanilla installation?
Solaris and Karma » /HoudiniLayerInfo prototype data not travelling
-
- mtucker
- 4537 posts
- Offline
Sure! There are scenarios in which all of the potential pitfalls I mention are avoided, and no penalty is paid for doing things "wrong". And if you have some way to guarantee the absence of the scenarios that lead to these issues, then you can ignore my dire warnings. But I've learned to never underestimate a user's ability to work around my rock solid guarantees of "safety", so I try to program with a heaping spoonful of paranoia.
Solaris and Karma » Get LopNode.stage() with specific context options
-
- mtucker
- 4537 posts
- Offline
No, there is no way to do this in python right now. I think the right solution would be to add an optional "context_options_override" parameter to the hou.LopNode.stage() method (which is the function that you would generally use to cook a LOP node and access the resulting stage). Feel free to RFE this.
Solaris and Karma » USD and AMD material library
-
- mtucker
- 4537 posts
- Offline
Houdini has to download materials from the AMD library to use them. When you switch to this Material Catalog, it asks you where you want to download the materials. Just specify a location that is accessible on all machines. Or, if you don't share a network, specify a location inside the directory structure that you package up to send with the rest of your data.
Solaris and Karma » /HoudiniLayerInfo prototype data not travelling
-
- mtucker
- 4537 posts
- Offline
As discussed in my talk about optimizing LOP performance from the recent Horizon Hive (https://www.sidefx.com/houdini-hive/hive-horizon-2024/), you should collect all data you might need from the current node's parameters and all input (or other nodes') stages before you ever call editableStage or editableLayer. So I would say the correct solution is to use two separate loops: the first fetches all the values from the HoudiniLayerInfo on the prototype's stage and stashes them into some Python data structure; the second adds each of those values to the HoudiniLayerInfo on the output's editableStage.
Calling both `instage = input.stage()` and `outstage = output.editableStage()` before either reading or writing anything can have two possible really bad outcomes. If you call stage() first, then calling editableStage() second may actually _change the stage_ if the input and output share a common underlying USD stage. If you call editableStage() first, you may get the dreaded "tried to lock a stage that was already locked" message, which is basically a guarantee that your network will run slowly.
Calling stage(), reading the data, calling editableStage(), and finally writing the data avoids any possibility of either of these problems. You can still use the "unrolled" code from inside loputils if you want to avoid calling stage() and editableStage() multiple times, which will be at least a little bit faster, especially if you have a lot of data items to copy over.
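A minimal sketch of that ordering, assuming the values live in customData on /HoudiniLayerInfo (the input/output wiring is simplified to the current node and its first input):
```
import hou

node = hou.pwd()

# Loop 1: read everything from the input stage and stash it.
in_stage = node.inputs()[0].stage()
info = in_stage.GetPrimAtPath("/HoudiniLayerInfo")
stashed = dict(info.GetCustomData()) if info else {}

# Loop 2: only now lock the output stage and write.
out_stage = node.editableStage()
out_info = out_stage.GetPrimAtPath("/HoudiniLayerInfo")
for key, value in stashed.items():
    out_info.SetCustomDataByKey(key, value)
```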
Solaris and Karma » Payload under Instances | Prim Pattern under Instances
-
- mtucker
- 4537 posts
- Offline
I'm surprised that you can have different payloads loaded under instanceable prims, but it looks like USD supports it. Given this fact, the Configure Stage LOP should be allowing instance proxies in the "load paths from input" pattern, and the fact that it's not doing so is a bug. If you could please submit a bug report to support@sidefx.com, someone can look into addressing this. Thanks!
Technical Discussion » Solaris workflow questions
-
- mtucker
- 4537 posts
- Offline
Hey Tom, hopefully we can clear a lot of this up quickly... But to start, can you clarify what you're trying to accomplish here? I'm guessing that you want to create a bunch of instances in LOPs, and I'm guessing you're happy ending up with a point instancer in USD (since I think this maps most closely to the SOP "instancefile" workflow?). If not, my answers below may be way off base...
a) I assume you're using an Instancer LOP to create your point instancer? I would recommend using a Reference LOP to bring in your "instance files". You can use wildcard file patterns to create a whole bunch of "prototypes" in LOPs, then just connect the output of this LOP to the second input of your Instancer LOP. Inside the instancer LOP, position your points and assign prototypes.
b) When in the select state, if you want to select point instancer instances, you have to change the menu at the top of the viewport to one of the "point instance" selection modes, rather than "leaf primitives", because point instancer instances don't map to unique USD prims. Once you've selected the instances you want to modify, you can hit "t" to put down an Edit LOP and move them. Or you can use the Modify Point Instances tool to quickly hide/delete them. Or you can dive into SOPs and modify the transforms and/or delete them and/or set primvars, etc.
c) Houdini 20.5 has the ability to choose a LOP camera from the camera menu when you are inside the Instancer LOP.
d) If you are creating a point instancer, I think you want to do your material binding on the prototypes (before feeding them into the instancer LOP)?
Solaris and Karma » Solaris material gallery?
-
- mtucker
- 4537 posts
- Offline
`assetutils` is a module inside the `husd` module. So use `from husd import assetutils` to gain access to that module.
Solaris and Karma » Context Options do not trigger parm changed script callback
-
- mtucker
- 4537 posts
- Offline
1. You say "expression callbacks"... Just to make sure I understand, you mean that the expressions get evaluated a lot, right? That is likely to happen as a result of showing the parameter dialog for the node.
2. You should really not be "doing things" as a result of evaluating expressions. The time to take actions is either during a cook (though this approach can also be difficult to get right), or most controllably, in response to a user action (like clicking a button). This is why Houdini nodes like File Cache have explicit buttons that the user has to press when they want to update the "state" of the node.
Taking the example of updating a thumbnail, you certainly don't want to create a thumbnail just because an expression is evaluated. And looking at the three-node hip file in your video, even creating the thumbnail as part of the "cook" is probably not going to do what the user wants. This is because the first time you display the left-hand node, the top node will cook and generate a thumbnail. Then when you display the second node, the top node will cook again, and generate a different thumbnail. And from that point on, switching between those two bottom nodes will no longer require the top node to cook, so the thumbnail will not be generated again.
So you could require the user to press a button on the top node to generate the thumbnail, or you could move the thumbnail generation to a separate node, and in your network put one "thumbnail" node on each of your two branches. Then you could do the thumbnail generation on cook, and you would have the advantage of being able to see both thumbnails at once...
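For instance, a hypothetical sketch of a button callback that defers the work until the user asks for it (the function name and the thumbnail step are placeholders):
```
# Hypothetical callback for a "Generate Thumbnail" button parm;
# Houdini passes a kwargs dict to parm callback scripts.
def on_generate_thumbnail(kwargs):
    node = kwargs["node"]
    stage = node.stage()  # cook this LOP and fetch its stage
    # ...generate and store the thumbnail from the stage here...
```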
Anyway, I'm really just guessing at what your planned overall workflow is, but hopefully this gives you some food for thought.
Solaris and Karma » Configure Stage - Remove Load Paths from Input
-
- mtucker
- 4537 posts
- Offline
Okay, so one piece of this was easy enough that I did it immediately... In 20.5.375, there is a new "%payload" auto-collection which matches any prim with a payload composition arc on it. So in your Configure Stage LOP, you can use "add/remove payloads" mode, put "%payload" in the "load paths from input" and "%kind(component)" in the "remove load paths from input" and get the result that I think you're looking for.
Solaris and Karma » Configure Stage - Remove Load Paths from Input
-
- mtucker
- 4537 posts
- Offline
This is the same issue again... You're asking the LOP node to "remove" geo_payload from the explicit list of primitives for which you have asked to load the payloads. But by saying "load all primitives" with the first Configure Stage LOP, you haven't provided a list of primitives whose payloads should be loaded. So you are essentially switching the "load mask" to say "load this specific list of payloads", but the list of payloads to load is empty. Then you're saying "remove this one prim from the list of payloads to load", which does nothing because it's removing that payload from an empty list. So you end up loading nothing.
I'll be the first to admit this is all much more complicated and confusing than it needs to be. I can think of a few ways to make this all behave in a more understandable/expected manner. But this will all take some time to sort out. For now you'll have to figure out how to create an explicit list of payloads to load, and _then_ you can remove geo_payload from that list...
Solaris and Karma » Context Options do not trigger parm changed script callback
-
- mtucker
- 4537 posts
- Offline
In what way do you want your HDA to change in response to the context option value change? The most robust way to do this is by using expressions on your HDA that reference the context option values to alter the behavior of the HDA on a per-cook basis. This is the _only_ thing you can do if you want to drive your HDA using "Edit Context Option" LOPs instead of using "global" context options (Edit -> Context Options).
But maybe you need to respond to context option changes with more drastic changes to your HDA (adding or removing parms, etc). And you are expecting these changes to be driven only by changes to the global context option state. In this case you can use a "context option change callback", `hou.addContextOptionChangeCallback`.
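A small sketch of that callback approach (the option name "shot" and the reaction are hypothetical):
```
import hou

def on_context_option_change(option_name):
    # Called whenever any global context option changes.
    if option_name != "shot":
        return
    value = hou.contextOption("shot")
    # Make the more drastic HDA changes here (add/remove parms, etc).
    print("shot changed to", value)

hou.addContextOptionChangeCallback(on_context_option_change)
```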
Technical Discussion » Rendering time offset instances in Solaris.
-
- mtucker
- 4537 posts
- Offline
If you write out one USD file with the animated "grow" for a single instance, you can use the "retime instances" LOP to create a bunch of different time offset values. Even if each instance is a unique offset time, the resulting USD will be a lot smaller.
In general, don't use per-frame USD files. When someone says "cache your animation to USD", that means a single USD file with the animation in it (at least if I'm saying it - I don't want to put words in @eikonoklastes's mouth). Use value clips and per-frame USD files only if you know exactly why you are doing it. If you don't know USD well enough to know why it's better in your current situation, it almost certainly is not.
Hope that helps!
Solaris and Karma » Geometry Clip Sequence Lop or how to handle large data
-
- mtucker
- 4537 posts
- Offline
Have you tried turning on the (relatively new) "All Clip Files Have Matching Scene Graph Structure" parameter? Enabling this option should prevent the geo clip sequence LOP from having to open all the clip files in order to author the clip data (it normally has to check all the clip files in order to build the manifest and topology files). Turning off "Flatten Clip Files" (if you know this step isn't necessary, as it would not be for BGEO clip files because they don't support composition) can also help improve the performance of this node.
But to work this way, you are responsible for ensuring that the scene graph structure is not changing over time and that the clip files don't need to be flattened.
Solaris and Karma » Edit context options not updating evaluated fields?
-
- mtucker
- 4537 posts
- Offline
The Edit Context Options LOP does create context options, but only in the "context" of cooking that LOP node (and the inputs to that LOP node). So you can only read context options created by an Edit Context Options LOP in expressions on LOP nodes connected (directly or indirectly) to the input of that Edit Context Options LOP. Can you attach a hip file that shows the problem you're seeing with the Edit Context Options LOP not working as you expect?