Sampling Behavior options on certain 19.5 LOPs

Member
240 posts
Joined: Oct. 2014
I found a small blurb for these new sampling behavior options...

Many LOP nodes can now cache multiple time-samples in a single cook. This reduces the number of Houdini time dependencies, and makes USD exports, renders, and interactivity generally faster. Affected nodes include the Transform and Edit Properties LOPs. The speedup also applies to assets that use Edit Properties.

...but I'm still not sure what they do exactly. Are there any other docs I can refer to?

Is this the equivalent of using a Cache LOP after the node? Do these local 'sampling behavior' options affect how upstream nodes are cooked? I.e. does this affect only this node's operations, or does it also 'cache' upstream data? And if I choose 'Sample Frame Range' does Solaris have to launch a cook against each frame?

I'm really interested to know more about these options as repeated cooking has been an issue for us. Any insight would be helpful.

Thanks!
- Tim Crowson
Technical/CG Supervisor
Member
7803 posts
Joined: Sept. 2011
Tim Crowson
Is this the equivalent of using a Cache LOP after the node? Do these local 'sampling behavior' options affect how upstream nodes are cooked? And if I choose 'Sample Frame Range' does Solaris have to launch a cook against each frame?

If the node has animation, the node is cooked for each time sample defined in the range, and the node's results are stored as time-sample arrays. This way it doesn't cook the input for each time sample, nor does it create a time dependency. So it's not like a Cache LOP, which is a brute-force approach, but more like putting a node in a for loop and storing the results in an array.
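Very roughly, the idea in USD Python terms (just a sketch to illustrate, not the actual implementation; the prim, attribute, and frame range are made up):

from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateInMemory()
xform = UsdGeom.Xform.Define(stage, "/geo")
translate_op = xform.AddTranslateOp()

# One pass over the frames authors all the samples on a single attribute,
# instead of re-cooking the whole input chain once per frame.
for frame in range(1001, 1011):
    # Stand-in for evaluating the node's own animated parameter at 'frame'.
    value = Gf.Vec3d(0.0, (frame - 1001) * 0.1, 0.0)
    translate_op.Set(value, time=frame)

# All ten samples now live on the one attribute.
print(translate_op.GetAttr().GetTimeSamples())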
Member
240 posts
Joined: Oct. 2014
Thanks! When you say "if the node has animation" does this also include nodes which are not actually animated but are inheriting time-dependence from another node upstream?
- Tim Crowson
Technical/CG Supervisor
Member
7803 posts
Joined: Sept. 2011
Tim Crowson
Thanks! When you say "if the node has animation" does this also include nodes which are not actually animated but are inheriting time-dependence from another node upstream?

I don't think so; only if the node itself is animated, for example by sampling upstream animation with an expression of some kind. The parameters have to have animated expressions.

I'm not sure what happens if you have a constant parameter set to 'Multiply' an existing value that is already animated with time samples, or one that is changing due to time dependency without time samples.
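If it helps to check which case you're in, one way (from the Python shell; the node and parm names here are only placeholders) is to compare the time dependence of a specific parameter with that of the node as a whole:

import hou

node = hou.node("/stage/transform1")
# True only if this particular parameter has an animated expression or keyframes.
print(node.parm("tx").isTimeDependent())
# True if the node cooks time-dependently for any reason, including
# time dependence inherited from its input.
print(node.isTimeDependent())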
Edited by jsmack - Aug. 9, 2022 12:41:59
Staff
4175 posts
Joined: Sept. 2007
Just to add a bit more info: the animated nodes actually only cook once; the node internally authors multiple time samples rather than just the current time sample. It can look similar to a Cache LOP, but the Cache LOP causes upstream time-dependent nodes to re-cook in order to author multiple time samples.

A node must be specifically written to support the new sampling behavior; any nodes based on Edit Properties support this, as well as the Transform and Volume LOPs (if you see the "Sampling Behavior" spare parms when you create a node, then it's supported).

More info can be found on the Edit Properties [www.sidefx.com] help card.
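A quick way to sanity-check whether a node has authored a range of samples in a single cook is to inspect its stage from the Python shell (the node and attribute paths below are just placeholders):

import hou

stage = hou.node("/stage/transform1").stage()
attr = stage.GetAttributeAtPath("/geo.xformOp:translate")
# Several entries here means the node authored multiple samples in one cook.
print(attr.GetTimeSamples())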
Edited by goldleaf - Aug. 9, 2022 14:27:48
I'm o.d.d.
Member
240 posts
Joined: Oct. 2014
Nice, thanks!
- Tim Crowson
Technical/CG Supervisor
Member
26 posts
Joined: June 2022
Hi, in addition to the info in this thread, I was curious whether those generated time samples could be written out to a USD file as well?

In my simple test scene I have a cube and I've animated the size parameter.
When I set the sampling behavior to 'Sample Frame Range' and inspect the active layer, the time samples are present.
Using a USD ROP to write the cube out, only the current frame's time sample gets written, while using a Cache LOP after the cube writes all of them out. I was wondering why that is?

Edit: I just noticed that by using an Edit Properties node after the cube and doing my animation there, the 'Sample Frame Range' time samples do get written out.
Edited by spektra - Jan. 11, 2023 05:16:35
Staff
453 posts
Joined: June 2020
Can you share a hip file? There are some subtleties as to the range of samples that gets generated. The "Sample Frame Range" option, by default, uses @ropstart, @ropend, @ropinc for its range. If you end up writing out only frame 2 in your ROP, only the sample for frame 2 will end up in your USD file.

This is something we're discussing internally; we're trying to find the right balance so that we don't unexpectedly generate too much or too little data.
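If you want to verify what actually landed in the file, as opposed to what the stage in the viewport shows, you can open the written layer directly (the file and attribute paths here are placeholders):

from pxr import Sdf

layer = Sdf.Layer.FindOrOpen("/tmp/cube_test.usd")
# Lists the frames for which samples were actually written to this layer.
print(layer.ListTimeSamplesForPath(Sdf.Path("/cube.size")))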
Member
26 posts
Joined: June 2022
Ohh, I see. So the reason it worked and I got all the time samples in my USD was because I deleted the expressions in the start/end/inc fields.

Attachments:
timesamples.hip (118.8 KB)

Staff
453 posts
Joined: June 2020
spektra
So the reason it worked and I got all the time samples in my USD was because I deleted the expressions in the start/end/inc fields.

Looking at your test scene, that would definitely explain it. You could also change the ROP from "Render Current Frame" to "Render Specific Frame Range".

Again, we're reevaluating this design. You're not the first person to be caught out where the stage appears to have all the data you want, but then only a subset actually gets saved to disk.
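For reference, switching the ROP over from Python might look something like this (the node path and parm tokens are assumptions, so treat it as a sketch):

import hou

rop = hou.node("/stage/usd_rop1")
rop.parm("trange").set("normal")           # switch off "Render Current Frame"
rop.parmTuple("f").deleteAllKeyframes()    # clear the default $FSTART/$FEND expressions
rop.parmTuple("f").set((1001, 1010, 1))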