A manifest is now required, but you don't need to provide it… If one isn't provided, it will be generated automatically at composition time, if I'm not mistaken. Which is cool, but probably a feature you only want to use in very limited circumstances. It probably makes more sense when the clip files are animation sequences that you're tying together. Not so great for file-per-frame FX data.
usdstitchclips creates two files. One is the manifest file. The other is a file containing the reference to the manifest file and the value clip metadata.
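For reference, the value clip metadata in that second file looks roughly like this. This is a hand-written sketch, not actual usdstitchclips output - the prim name, clip file names, and frame range are all made up for illustration:

```usda
#usda 1.0

def "Points" (
    clips = {
        dictionary default = {
            asset[] assetPaths = [@./frame.101.usd@, @./frame.102.usd@]
            asset manifestAssetPath = @./result.manifest.usda@
            string primPath = "/Points"
            double2[] active = [(101, 0), (102, 1)]
            double2[] times = [(101, 101), (102, 102)]
        }
    }
)
{
}
```

The `active` pairs say which clip (by index into `assetPaths`) is active starting at a given stage time, and `times` maps stage times to times inside the clip files.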
Solaris and Karma » Authoring value clips from bgeo sequences
- mtucker
- 4439 posts
- Offline
Here's a simple example file that uses BGEO files directly as value clips. The trick is that the BGEO files need to have a detail attribute added to them instructing the BGEO USD import plugin to generate attribute values at time samples at a particular time.
Solaris and Karma » Where is Configure Stage meta data stored?
That information is available on the LOP node stage itself. You can get the stage population mask, the rules used to determine the set of loaded payloads, and the actual expanded list of prims that will load their payloads:
hou.node("/stage/lopname").stage().GetPopulationMask()
hou.node("/stage/lopname").stage().GetLoadRules()
hou.node("/stage/lopname").stage().GetLoadSet()
Solaris and Karma » How to delete prims from exsiting usd?
If you don't mind flattening your layer stack and/or flattening the whole stage, you can use the Stage Manager LOP to delete prims from a single USD layer.
Solaris and Karma » Solaris foreatch: Local variable 'ITERATIONVALUE' not found.
This is a known issue that has been fixed for the next release. The problem is that you are asking the node inside the for each loop to cook on its own. Because you are cooking it directly, rather than having the for each LOP cook it, there is no value set for the ITERATIONVALUE context option (it's the For Each LOP that sets this value and passes it to the cook method of the input LOP).
Also, feeding the node from inside the for each loop into the first input as well as the third input of the For Each node is unlikely to ever work, and I'm not clear what you're trying to accomplish there. The ITERATIONVALUE will never be set cooking that first input, so you'll always get a warning/unexpected results through that first input. You can probably just disconnect that node from the first input? Or, depending on what you're trying to accomplish, you may want to connect the node from above the “begin for each” node into that first input?
Solaris and Karma » Primvars and Groups
I'm going to take a different approach on this question and assume that you are trying to define “groups” within a single Mesh primitive, rather than defining “groups” of USD primitives…
If so, then what you're looking for are Geometry Subsets in USD. There is the Geometry Subset LOP to define subsets, and the Material Assignment LOP can actually create geometry subsets and assign materials in a single node. I've attached a file that demonstrates both of these nodes in a very simple way.
If you have already defined your mesh “partition” using a primvar in USD, you can easily use SOPs to define a SOP group based on this primvar attribute. Primitive groups in SOPs get imported to LOPs as Geometry Subsets when using the SOP Import LOP (though you may need to specify the names of the SOP groups that you want to import as subsets - I don't recall exactly).
Solaris and Karma » reference lop default primitive expression doesn't work
It's just an error in that expression… Change ‘reffilepath’ to ‘filepath1’. I'll correct that.
Solaris and Karma » looping a sublayer with clips breaks with variants
I'm not 100% certain, because I don't have a full grasp of exactly how Value Clips work, where they fit into the “LIVRPS” hierarchy, or whether (as it seems) they are somehow outside that hierarchy… But the difference between Reference and Sublayer in your hip file is that when you feed a node into an Add Variant LOP, we do layer flattening of the input so that we can do a direct copy of the SdfSpecs from the second input into the variant portion of the scene graph. In the reference case, flattening the input leaves the reference intact as a reference arc, so the value clip opinions “win”. In the sublayer case, the data from the sublayer is flattened and that data gets copied inside the variant. Somehow the presence of the opinions in the same layer where the value clip is defined prevents the value clip opinions from winning.
You can see the same thing happen with a Graft LOP instead of an Add Variant, as a Graft LOP does the same thing - flattening the secondary input data and copying it directly into the active layer. The value clip wins against data pulled in by reference, but loses if the opinions are set in the active layer.
Solaris and Karma » Deeper understanding of the Layer Break LOP
You left out the most important part of the sentence in describing the second thing that a layer break does:
2. It marks all existing sublayers as having been authored prior to a layer break.
Maybe you understood this and know what it means, and were just leaving out the second half of the sentence for brevity, but I just wanted to point it out for anyone new to layer breaks reading this and wondering what it means for a layer to “be authored”.
But you make a good point that there is no way to see (in the Layers panel or Layer Stack tab of the Scene Graph Details) which layers were authored above a layer break. This information is not stored in the USD of the layer itself, and so inspecting the USD data doesn't let you make this determination. I'll add an RFE to make this information visible in the UI somewhere.
Solaris and Karma » turn off timevarying faceVertexCounts?
There is probably more that we could do during the stitching process, but it would be complicated to implement, complicated to control/manage, and of dubious value (because of the existing USDC data deduplication, and because in Hydra, where there could be a real benefit to ensuring topology doesn't change over time, attributes are either time-varying or they aren't - so being time-varying at only one time doesn't really make things better). So it has not been a high priority item for us.
Technical Discussion » Houdini Hard Crash on Node delete
Even Indie users can log bugs, and especially in the case of crashes we'd really like to get to the bottom of it if we can. If you can submit a crash log that would be very helpful. From your description, I don't think we'll be able to do much without a crash log, since presumably there is something specific about your system that is problematic. Most users don't experience this crash, or I'm pretty sure we'd know about it by now.
Technical Discussion » Context Options - (LOPs/TOPs)
The local variables and context options provided by each node should be listed in the documentation for that node. At the moment these are not really discoverable in the UI.
Technical Discussion » Context Options - (LOPs/TOPs)
There are three different “things” that can be referred to with the “@” syntax:
1. Local variables. These are defined by the node itself during a cook. In SOPs, @P is a local variable representing the value of the P attribute (of the currently evaluating point).
2. Context options. These can be set at a global level using Edit->Context Options, in which case they are like “global variables” or “environment variables” (like $HIP). But context options can also be manipulated within the LOP cook chain, at which point they are “scoped” or “semi-local”. They provide a way for a small set of nodes to share a value. This is what the edit context options node does, and also nodes like For Each and Add Variant LOPs use context options to “transmit” useful bits of information to their input nodes (normally data flows strictly down a chain of nodes, context options reverse that direction of information flow for a few discrete values).
3. TOP Attributes (sorry, I don't actually know the terminology here). These are very similar to context options that are defined by a TOP node and “transmitted” to the nodes that are cooked by that TOP node. Again, “scoped” or “semi-local” variables that have different values for each work item processed by the TOP node.
I should at least mention that VEX “binding” can also be done with the “@” operator, but only within VEX snippet/wrangle nodes. This use has very little to do with the other uses here.
So, why use the same @ symbol to refer to all these different things? Because as a consumer of these variables you shouldn't care where they are coming from. They are just values that can be used to influence the behavior of your node. Whether the “@” is driven by a local context option, a global context option, or a TOP node effectively overriding a global context option doesn't matter. Accessing any of them is easy, because you only need to know the one syntax. And if you decide that a global context option should really be driven by a TOP attribute, or a local context option should be made global, you don't need to touch your nodes, you just set the value differently.
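As a toy illustration of that idea (this is not Houdini's actual resolution code, and the option names below are made up), you can think of an “@” lookup as walking a chain of scopes, innermost first, so that a TOP attribute or a scoped context option shadows a global one:

```python
# Toy model of "@" lookup: check scopes from innermost to outermost.
# Scope names and option values here are invented for illustration only.

def resolve(name, scopes):
    """Return the value of `name` from the innermost scope that defines it."""
    for scope in scopes:  # ordered innermost -> outermost
        if name in scope:
            return scope[name]
    raise KeyError(name)

global_options = {"SHOT": "sh010", "WEDGENUM": 0}  # Edit -> Context Options
for_each_scope = {"ITERATIONVALUE": 3}             # set by a For Each LOP on its inputs
top_attributes = {"WEDGENUM": 7}                   # set per work item by a TOP node

scopes = [top_attributes, for_each_scope, global_options]

print(resolve("ITERATIONVALUE", scopes))  # 3 (from the For Each scope)
print(resolve("WEDGENUM", scopes))        # 7 (TOP attribute shadows the global)
print(resolve("SHOT", scopes))            # sh010 (falls through to the global)
```

The consuming node just asks for a name; which scope supplies the value can change without touching the node, which is the point being made above.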
Hope that helps.
Solaris and Karma » Question about Payloading
It's not just that there are four ways of doing it, each of those four ways can be applied to the viewport only (by making changes in the scene graph tree) or applied in the LOP Network (using the Configure Stage node for payload loading and stage population, and Prune for visibility and activation). Why so many choices? Because USD has all these options and one of our main goals with LOPs is to faithfully present the capabilities of USD. I think each one has a use (descending order of both complexity to use and performance benefit):
1. Stage population mask is the scariest and most error-prone, but there's nothing like its performance when you're _sure_ you only want to deal with a small subsection of your scene graph (only show me /world/house5/room6). It cuts out all composition costs and all hydra costs related to prims outside that branch. The downside is that it is an absolute block. You can't author stuff outside that branch. If your materials or lights are defined outside that branch, they don't show up either (unless you also add the locations for those prims to your population mask).
2. Payloads are my favorite. They are easy to understand as “delayed load” or “packed primitives”, especially in 18.5 where you can see their bounding boxes. They also completely prevent all composition and hydra costs associated with unloaded data. But they require you to set up ahead of time where the payloading happens. Fortunately, the answer is almost always “at the level of an asset”. You still pay some cost to compose the ancestor prims of the assets even when a payload is unloaded, but USD is pretty fast, so that's not something to be worried about unless your scene is enormous (millions and millions of separate asset prims).
3. Activation is in many ways a lot like ad hoc payloads. Perhaps even more so than the stage population mask. By deactivating a prim you are telling USD and hydra to ignore that prim completely. Sort of like if you unloaded the payload at that point, but without there needing to be a payload defined at that spot. Unlike payloads though, deactivation has to be a manual decision (or, if you use a Prune LOP, at least a procedural decision). With payloads you start by saying “load nothing except what I tell you”. With activation you have to load everything, then deactivate the things you don't want. I suppose if you build your pipeline around a deactivation workflow instead of a payloading workflow, you could fairly easily mark specific points in your scene graph as “deactivate here” points (similar to how payloads indicate “unload here”), and then have Prune LOP presets to find and deactivate those prims… Anyway, I think I've gone into the weeds a bit here. Like payloads and population masking, stuff under a deactivated prim becomes inaccessible for editing.
4. Visibility is safe and simple. It _only_ has meaning to the viewport/hydra, whether you set it in the LOP network or the scene graph tree. It saves nothing in terms of composition costs (which is its downside), but lets you choose what parts to actually render (which is usually the most expensive part, and really the only thing you need to worry about until your scene gets very large and complex). When you make prims invisible, the prim and its children are all still “there” in the composition sense, and can be inspected, referenced, and manipulated exactly like visible prims.
Anyway, that's my take on these four options and how they compare. I'd be curious to hear how others think of them…
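To make option 1 above concrete, here is a toy sketch of what a population mask does conceptually: only prims on a masked branch are composed, while ancestors of the branch must still compose so the branch itself is reachable. The real API for this is Usd.StagePopulationMask used with Usd.Stage.OpenMasked; this pure-Python sketch only models the path-matching idea, and the prim paths are invented:

```python
# Toy model of population-mask path logic (illustration only, not the USD API).

def in_mask(prim_path, mask_paths):
    """True if prim_path is inside a masked branch, or is an ancestor of one.

    Ancestors must still be composed so the masked branch can be reached;
    everything else is skipped entirely (no composition, no hydra cost).
    """
    for mask in mask_paths:
        if prim_path == mask or prim_path.startswith(mask + "/"):
            return True  # inside the masked branch
        if mask.startswith(prim_path + "/"):
            return True  # ancestor of the masked branch
    return False

mask = ["/world/house5/room6"]
print(in_mask("/world/house5/room6/chair", mask))  # True
print(in_mask("/world/house5", mask))              # True (ancestor still composes)
print(in_mask("/world/materials/wood", mask))      # False - won't show up at all
```

The last line is the “absolute block” described above: a material or light outside the masked branch simply doesn't exist on the stage unless its path is added to the mask too.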
Solaris and Karma » Flatten layer
Yep, that's exactly the kind of layer-structure-destroying edit that I was referring to that should not be exposed to every artist.
Solaris and Karma » Question about Payloading
Populate primitives can be used for finer-grained control over which parts to load, but it's a little harder to control because you have to provide a _complete_ list of the things that should load. So if you add a new branch to your scene, you need to update your populate list (even if it doesn't contain any payloads). May or may not be a big deal depending on the complexity of your scene.
The drawing of bounding boxes for unloaded payloads is a feature that was introduced in USD 20.05, and so will be available in the next major release of Houdini. But it is not, and will not, be available in 18.0.
Solaris and Karma » Flatten layer
The Merge node can also do this. And you can have the USD ROP do this at save time. It is an operation currently relegated to the Configure Layer LOP because it's not a concept that I think most users would want front and center as its own tool. Certainly studios with a well-defined layer structure wouldn't want users doing this ever… As always, I'm prepared to be talked out of this position, but that's my thinking on it (and of course you can always create your own tool if you find yourself doing this all the time).
Solaris and Karma » Rigs in USD
If you're just bringing the USD layout into SOPs for reference purposes (so you can see what you're doing), probably loading the file with a USD Import SOP is the easiest thing to do. If the layout is static, you can load the whole scene as a single USD packed primitive.
If you want more control over loading the scene, you can load it into LOPs. If you then do your animation inside a SOP Create LOP, the USD layout will actually be drawn with a hydra delegate, which will respect payload loading, USD draw modes, and other niceties. But then you're stuck working inside a single SOP Network inside a LOP Network.