Hey,
I’ve taken a rest from Solaris after being busy on projects (and playing around with Katana on said projects) and I’m now back trying to find a good workflow with Solaris / USD.
And I’m running up against the same issues I had a year ago - I can build a pretty efficient file until I’m working with something that has a large-ish hierarchy (lots of instanced geometry, for example, or something with lots of pieces living beneath it).
Coming from Katana, I would be able to load up a scene, and Katana would stay light, and avoid loading any of the hierarchy until I expanded it, with the bounding box breaking into more detail as I expanded down the hierarchy. When I had a heavy scene where I didn’t need to see the heavy section to do my work, it was a massive help.
Is there meant to be a similar workflow to this in Solaris? Payloads seem to be the answer on paper but they don’t really seem to do this, and instead rely on the user to think ahead before loading the asset.
Working with heavy scenes like in Katana
- mrSmokey
- Member
- 26 posts
- Joined: May 2019
- Offline
- mtucker
- Staff
- 4559 posts
- Joined: Jul 2005
- Offline
Payloads are definitely the simplest answer (though USD provides other facilities such as render purposes). What is it about the payloads workflow that you feel is lacking? The main difference I'm aware of is that in LOPs, the user must explicitly specify in the scene graph tree which parts of the scene to load rather than automatically loading stuff as the tree is expanded. Is this what you mean by "they don't really seem to do this"? Or is there something else that isn't working for you?
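For reference, a payload is just a composition arc authored in the asset; whether it actually gets composed is decided by whoever opens the stage. A minimal sketch in usda (file and prim names are made up for illustration):

```usda
#usda 1.0
(
    defaultPrim = "Tree"
)

def Xform "Tree"
{
    # The payload arc marks this subtree as load-deferrable. The asset
    # doesn't decide whether it loads; the consuming stage does (via the
    # scene graph tree / load masks in LOPs, or stage load rules in USD).
    def Xform "Foliage" (
        prepend payload = @./foliage.usd@</Foliage>
    )
    {
    }
}
```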
- Tim Crowson
- Member
- 250 posts
- Joined: Oct 2014
- Offline
If you're coming from Katana (like I was), then your focus is probably on lookdev/lighting/rendering. For lighting and rendering, payloading has been of limited use, since the viewport will only render loaded payloads. Disk rendering via husk is fine (unloaded payloads render fine to disk), but it means lighters derive no real benefit from payloads: unloaded payloads don't show up in viewport rendering, so we can't see what our lights are doing. That said, on paper payloads are still the way to go, so I wouldn't shy away from implementing payloading at a high level. Just expect payloads to all be loaded by default for lighting.
For viewport display, you might take a look at using USD's 'purpose' paradigm, which lets you specify whether a mesh is intended for viewport display only, render purpose only, or guide purpose only. One common use is to have two primitives, a high res (as 'render' purpose), linked to a low-res ('proxy' purpose). The proxy prim draws in the viewport, but the render prim is what appears in the render. But you can also tag geometry as simply render purpose only, without linking to a proxy prim. This will prevent the geo from showing up in the viewport until you render (or unless you enable Render purpose drawing in the viewport options). This can be helpful if you have a massive environment but don't actually need to see it in the viewport.
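The render/proxy pairing described above boils down to two bits of scene description: a purpose token on each prim, and a proxyPrim relationship authored on the render prim. A hedged sketch in usda (prim names invented):

```usda
#usda 1.0
def Xform "Foliage"
{
    # Heavy geo: only appears at render time (or when Render purpose
    # drawing is enabled in the viewport display options).
    def Mesh "FoliageRender"
    {
        uniform token purpose = "render"
        # Links this render prim to its lightweight stand-in.
        rel proxyPrim = </Foliage/FoliageProxy>
    }

    # Lightweight stand-in: draws in the viewport, excluded from renders.
    def Mesh "FoliageProxy"
    {
        uniform token purpose = "proxy"
    }
}
```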
But here is the real catch: configuring these display purposes, or indeed applying any kind of modification via LOP nodes, on prims in a complex hierarchy (say 100,000 primitives) will seriously hurt cooking performance. It stands to reason, of course: the more modifications you make via LOP nodes, the more nodes there are to process and the longer the cook. But in my experience so far, the time it takes to cook a Solaris network is considerably longer than the equivalent modification in Katana. Nonetheless, Solaris's raw features and flexibility are extremely attractive from a production standpoint! Hopefully SideFX can improve performance, or at least add more options for controlling when the graph recooks (the existing options are a bit heavy-handed).
There may also be procedural solutions available, depending on your renderer, like Redshift's upcoming USD Procedural, which would allow you to offload large chunks of the stage and reduce the number of primitives your LOP nodes have to wrangle. Procedurals come with trade-offs, but they have their uses.
It's also easy to trigger lengthy recooks, which leads to micromanaging the session with careful use of pins, display flags, hotkeys, etc. "That's just how Houdini is," I keep being told. I absolutely love the features and flexibility of Solaris. It's seriously exciting. But I really miss the workhorse performance and focused UX of Katana.
Solaris is still pretty new, all things considered, so I'm really psyched to see where SideFX take it. And I'm open to any suggestions or insights anyone has on improving performance or best practices, etc.
Edited by Tim Crowson - June 3, 2021 12:51:00
- Tim Crowson
Technical/CG Supervisor
- Tim Crowson
- Member
- 250 posts
- Joined: Oct 2014
- Offline
Oh, and the other big difference between payloads and Katana's native deferred loading is this: in legacy Katana (at least prior to their USD implementation), the deferred loading is done at the primitive level, meaning each individual object can be loaded or not. But with USD payloads, loading/unloading happens on the prim that carries the payload arc. So if you payload a vehicle that has 500 mesh prims, the loading/unloading is done on the parent prim, not the individual mesh prims. Which means you load all the meshes in the vehicle, or none.
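In other words, the load granularity is wherever the payload arcs are authored. If per-piece loading matters, the asset can author one payload per component instead of a single one at the top; a hypothetical sketch:

```usda
#usda 1.0
def Xform "Vehicle"
{
    # One payload per component: each subtree can now be loaded or
    # unloaded on its own, instead of the whole vehicle (all 500 meshes)
    # coming in as a single unit.
    def Xform "Body" (
        prepend payload = @./vehicle_parts.usd@</Body>
    )
    {
    }
    def Xform "Wheels" (
        prepend payload = @./vehicle_parts.usd@</Wheels>
    )
    {
    }
}
```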
- Tim Crowson
Technical/CG Supervisor
- mrSmokey
- Member
- 26 posts
- Joined: May 2019
- Offline
Thanks for the response, guys.
To make it clear there are two ideal scenarios:
1. Katana how I experienced it (this was without USD through Katana). If Solaris worked this way it would be pretty cool too:
> I have a tree which has two hierarchies of geometry:
- The Trunk (+ branches) = This is light geometry/hierarchy.
- The Foliage (leaves + flowers + fruit) = This is heavy geometry/hierarchy.
> When I load the asset into Katana I just see one big bounding box. My scene is super light and responsive because it hasn’t loaded much in.
> I expand that box down one step, and now see two bounding boxes (one for the trunk and one for the foliage).
> I want to have a better view of the tree and its geometry, so I totally expand the Trunk section. I know that the Foliage hierarchy is the heavy one, so I just don’t expand it. My scene stays light and responsive and I can still work in the viewport, place lights, or geometry, whatever.
> I hit render, and everything renders - both the Trunk and Foliage, because this was all just about keeping the viewport and usd manipulation experience light, but the final render step knows that I want to expand everything.
> If I don’t want the Foliage section to slow down my render, I drop a prune which will remove that section from the actual USD tree/render.
2. How I imagined I could work in Solaris after reading about payloads, but it doesn’t seem to work this way:
- The trunk isn’t so heavy, so I specify at the asset authoring step that this should load in completely. This is something that just happens by default when I plug it into the USD structure of my asset using something like a reference node.
- The Foliage is heavy, so I specify that this shouldn’t load in all at once. I specify this somewhere, in a simple and obvious way, such as by setting the “reference type” to “Payload from Multi Input”, or maybe I drop down a Configure Primitive node (or Configure Layer node, I guess).
> I write out that USD of the asset to disk and it exists as a USD asset “Tree” with two nested hierarchies beneath it “Trunk” (Reference as normal) and “Foliage” (Payload).
> In the lighting step, someone who doesn’t know I’ve set these settings loads the tree USD I wrote out and plugs it into their working stage. Because I specified that the Foliage is a payload, it doesn’t load in the viewport, but the Trunk does, so they still get a pretty good representation of the Tree while having a responsive scene.
> The Scene Graph makes it clear that this is a payload and so the lighting artist knows why they can’t see the Foliage.
> They hit render and it renders the Trunk + Foliage.
> If they really want to see the Foliage in the viewport, they can tell it to expand/load in that geo, or can turn on “Load All Payloads in Viewport” and put up with the one-minute wait while it all loads in, and the unresponsive viewport.
Alternatively, being able to say, load the Foliage in as simply a bounding box of just the very top group of the hierarchy instead of a payload would be another workable alternative.
I’ve been trying to find a way to do what I want for a few hours of trial and error now, but maybe I just don’t know the right button (if that’s the case... it feels like this setting should be more accessible?). Or maybe I’m just getting lost in the complexities of USD.
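For what it's worth, the asset structure in scenario 2 is expressible in plain USD - a light subtree brought in as a reference and a heavy one as a payload. A sketch with hypothetical paths:

```usda
#usda 1.0
(
    defaultPrim = "Tree"
)

def Xform "Tree"
{
    # Light geometry: a plain reference, always composed.
    def Xform "Trunk" (
        prepend references = @./trunk.usd@</Trunk>
    )
    {
    }

    # Heavy geometry: a payload, composed only when the consumer loads it.
    def Xform "Foliage" (
        prepend payload = @./foliage.usd@</Foliage>
    )
    {
    }
}
```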
Edited by mrSmokey - June 3, 2021 13:25:24
- Tim Crowson
- Member
- 250 posts
- Joined: Oct 2014
- Offline
I don't know about smokey, but in our experience so far #2 would fail if the artist starts a viewport render, because it will not render unloaded Payloads.
I.e. this would not actually happen:
> They hit render and it renders the Trunk + Foliage.
To see the foliage in the scene graph but not in the viewport, he'd either have to define it as an unloaded payload, or as a prim with Render purpose only. But unless the payload for the foliage is actually loaded, it will not render in the viewport (at least not in the current builds of Houdini).
Edited by Tim Crowson - June 3, 2021 15:23:27
- Tim Crowson
Technical/CG Supervisor
- mrSmokey
- Member
- 26 posts
- Joined: May 2019
- Offline
Hey guys, thanks for that help. Sorry for not being clear - “render” for me meant “render an image in Arnold or Karma”, but I see you worked it out.
I played around after reading what you had to say and yes, purposes are what I’m after. It works pretty well, and I set up a cool little proxy geo display of my foliage for the viewport using VDBs. That’s fun!
Now, I’m going to rant a little below (sorry), but it’s truly in the name of trying to understand whether I’m “getting it” or not. To me, keeping the user experience pleasant and fluid with heavy assets is a key aspect of working efficiently, so I would love to hear what other people think on this topic and best practices...
- Purposes:
> Using a render/proxy setup is quite nice to have and is the best option in many scenarios, and yet this is so many more steps than the zero steps Katana expects when it gives me a basic bounding box on any loaded asset by default. I find this a bit of a shame.
> Setting a purpose seems to be a very hidden option - even the “Edit Properties” node (not to be confused with the “Configure Properties” node) doesn’t expose purpose, so strangely I have to click a button to open a window, move the purpose parameter node into the UI, then click the dropdown... I find myself wondering why such a useful option isn’t more accessible?
- Payloads:
> How do I get Houdini to act on this reference type? By default it seems that Solaris (in H18.5.351) won’t do anything with payloads unless the user changes the load masks to untick “Load all payloads” in the viewport. I don’t see why the end user should have to pre-emptively change a default option to take advantage of optimisations someone has taken the time to set up earlier in the production.
> What is the intended workflow to best take advantage of payloads? The way I see it, I have heavy geometry that I don’t want to slow the user down with when working in the viewport (doing layout, or blocking out lights, etc) - cool - but then the artist needs to get from there to seeing a rendered image with the payload geometry visible...
> So at some point, the user eventually needs to disable payloads on the asset (how? Is this only accessible through the load mask option?) and then wait for their computer to slowly load in the very heavy geometry, before hitting render, finding the light placement was wrong, turning off payloads, moving lights, turning them back on... you get the point. Is this right, or is there a missing piece to the puzzle?
> Why use payloads instead of a “prune” node approach to keeping scenes light?
Cheers for the help so far on this stuff!
- mtucker
- Staff
- 4559 posts
- Joined: Jul 2005
- Offline
Once you've loaded the payload into the renderer, I don't know why you'd unload it again just to move the light... Moving a light is a pretty simple update for most renderers so they should respond very quickly to such a change. Or you can switch back to Houdini GL to move the light around (Houdini GL should still be fast even with the payload loaded if you're using render/proxy purposes).
You are right that loading the payload is a choice that only lives in the scene graph tree/viewport (just like in Katana, right?). This is generally desirable I think because it means that when it comes time for a final render (with a USD Render ROP), you never have to worry that your payloads won't be loaded.
Initial creation of an asset is the moment when render/proxy needs to be set up for things to work smoothly. It is not something a lighter or environment artist should ever have to think about. In general we'd recommend that you have a standardized template/workflow for creating assets that makes it easy to set up render and proxy geometry. The fact that Houdini doesn't provide such a template/workflow natively is a big hole that we are currently working on addressing for H19. Because you're right, this should be easy to do without needing a pipeline department and TDs to set up these templates at each studio.
- mrSmokey
- Member
- 26 posts
- Joined: May 2019
- Offline
Thanks for the info!
On the interactivity side, I found that with my foliage payload visible, any manipulation of light or object positions became pretty laggy even in Houdini GL mode, but there’s a good chance I’m doing something wrong somewhere in my USD setup.
I think I definitely need to dig around a bit more and experiment with things on my side, but it’s good to hear that things should get easier eventually.
The main thing I loved from my experience with Katana was being able to selectively expand objects as required in the viewport/OpenGL view - it wasn’t an all-on, all-off switch. As a lighter working on heavy scenes, being able to say “Right now, I need to see this character’s geometry in the viewport, but I only need bounding boxes down to this hierarchy step for the environment” was a great option to have, and I guess I’m trying to find a similar workflow in Solaris.
Edited by mrSmokey - June 4, 2021 16:28:54