Error stitching multiple frames using USD Stitch Clips ROP

User Avatar
Member
2 posts
Joined: June 2016
Offline
Hello,

I've been experimenting with USD's Value Clips and have been very impressed - they're excellent for combining per-frame USD files into a single USD, they support motion blur, they're fast and lightweight when scrubbing the timeslider in the viewport, and they support retiming and time-offsetting. An ideal solution for rendering large FX caches.

However, I've run into a problem when trying to combine many large hair simulation frame caches - each around 200MB. The USD Stitch Clips ROP works fine when stitching fewer than about 100 frames, but errors out when combining more than that, displaying the error "Couldn't map asset...."

I'm working on Windows, and after diving into the code I've noticed that this might be a problem with the UsdUtils.StitchClips code.

Is there any fix for this? It would be hugely disappointing to have to move away from USD value clips just because of this error.

Obviously I can't supply the entire file sequence I'm trying to cache - 360 frames of 200MB hair - so I've attached the error message I get.

Attachments:
stitchError_02.jpg (268.5 KB)

User Avatar
Staff
4445 posts
Joined: July 2005
Offline
This has been addressed elsewhere in the forum, if you want to do some searching. But the basic problem is how Windows handles "memory mapped files". Even though USD isn't asking Windows to load most of the data from the per-frame USD files into memory, Windows is applying the full size of the USD file against the system memory budget. Here are a few things you might want to consider:
1. Increase the available Windows virtual memory by making your swap file larger (again, these USD files aren't actually getting loaded into memory, so this additional swap space won't ever actually be used, but Windows insists on this extra virtual memory space being "available").
2. Decrease the size of the per-frame USD files. Often, on their first attempt with value clips, users do the simplest possible thing and put all the data for each frame into each per-frame USD file. This means each per-frame file contains all the information that doesn't vary frame to frame (for example, topology is probably constant if you're dealing with hair). If your hair geometry is coming from SOPs, you can use one SOP Import to pull in the time-independent data and write it out to one file, and a second SOP Import that brings in _only_ the time-varying data, and write that data out to the per-frame USD files you'll use to make the value clip. This not only lets you create your value clip without running out of virtual memory, but also saves on disk space and file write times. Of course, you may have already done this kind of optimization and be hitting this limit anyway.
3. Break your geometry into smaller pieces, where each "section" of hair is small enough that you can build the value clip.
4. Find a way to do the value clip generation on a linux system. Unix virtual memory does not have this same problem that Windows virtual memory has. Windows systems should have no problem _consuming_ a large value clip. It's only the authoring step where all the per-frame USD files are opened at once.
5. There may also be some kind of clever approach you could use involving writing out the value clips in smaller time chunks (frames 1-100, 101-200, 201-300, etc.), and then merging the resulting value clip metadata. But that would require some python scripting.
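
If it helps, the merging step for option 5 could look roughly like this (completely untested - the file names, prim path and the "default" clip set name are just placeholders for whatever the Stitch Clips ROP wrote out for you):

from pxr import Sdf, Vt

# Stitch each chunk separately first (Stitch Clips ROP or UsdUtils.StitchClips),
# then concatenate the clips metadata from the per-chunk result layers.
chunk_layers = ["fur_1001_1100.usd", "fur_1101_1200.usd", "fur_1201_1300.usd"]
prim_path = "/fur"

asset_paths, active, times = [], [], []
for layer_path in chunk_layers:
    layer = Sdf.Layer.FindOrOpen(layer_path)
    clip_set = layer.GetPrimAtPath(prim_path).GetInfo("clips")["default"]
    offset = len(asset_paths)  # 'active' stores (stage time, clip index), so re-base the indices
    asset_paths.extend(clip_set["assetPaths"])
    active.extend([(pair[0], pair[1] + offset) for pair in clip_set["active"]])
    times.extend([(pair[0], pair[1]) for pair in clip_set["times"]])

# Author the merged metadata onto the first chunk's result layer and export a copy
merged = Sdf.Layer.FindOrOpen(chunk_layers[0])
spec = merged.GetPrimAtPath(prim_path)
clips = dict(spec.GetInfo("clips")["default"])  # keeps primPath, manifestAssetPath, etc.
clips["assetPaths"] = Sdf.AssetPathArray(asset_paths)
clips["active"] = Vt.Vec2dArray(active)
clips["times"] = Vt.Vec2dArray(times)
spec.SetInfo("clips", {"default": clips})
merged.Export("fur_1001_1300_merged.usd")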
User Avatar
Member
2 posts
Joined: June 2016
Offline
Option #5 sounds like the most interesting approach to me.

This wouldn't be a million miles from the hack I'm using already - using the Stitch Clips ROP to create a clip of the first 10 frames (works fine with just 10, even on Windows....), then in a separate folder creating a dummy set of empty USDs covering the full frame range and with the same stage hierarchy (just empty Xform and Scope primitives as placeholders), stitching those into a value clip, then moving the single template USD file created back to the original folder. It seems to work perfectly - the manifest and topology files created are, I assume, identical whether the frame range is 1001-1010 or 1001-1360, and the stitched clip just points to relative asset paths.
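
For reference, the dummy placeholder files could be written with a little Python along these lines (the output folder, file names and prim hierarchy below are only placeholders for my actual setup):

from pxr import Usd, UsdGeom
import os

os.makedirs("dummy", exist_ok=True)  # placeholder output folder
# One empty file per frame, mirroring the real cache's prim hierarchy
for frame in range(1001, 1361):
    stage = Usd.Stage.CreateNew("dummy/fur_cache.%04d.usd" % frame)
    UsdGeom.Xform.Define(stage, "/fur")
    UsdGeom.Scope.Define(stage, "/fur/groom")
    stage.GetRootLayer().Save()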

My next step was to take this further by editing the original Stitch Clip metadata in a Wrangle LOP, but I ran into a wall. I noticed the primitive being stitched has 'clips' metadata entries named 'active', 'assetPaths' and 'times', which I hoped to modify using VEX to stretch the frame range out to point at all 360 frames. I was able to do this for the 'active' and 'times' data using something similar to the code below:

int len = 360;  // number of per-frame clip files in the full range
for (int i = 0; i < len; i++) {
    // each 'active' element is a (stage time, clip index) pair
    usd_setmetadataelement(0, @primpath, 'clips:default:active', i, set(1001 + i, i));
}

This doesn't work for the 'assetPaths' metadata unfortunately, as those entries seem to need to be written as Sdf.AssetPath values to be resolved properly. Is there a way of editing the Asset Paths data in a Python LOP? The pxr.Sdf module doesn't seem to be documented in Python, at least not for what I would like to use it for.
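
For reference, this is roughly the kind of thing I was hoping to do in a Python LOP (the prim path and file name pattern below are just placeholders), though I couldn't work out whether this is the supported way to author it:

from pxr import Usd, Sdf

node = hou.pwd()              # 'hou' is available inside a Python LOP
stage = node.editableStage()
prim = stage.GetPrimAtPath("/fur")  # placeholder prim path

paths = Sdf.AssetPathArray(
    [Sdf.AssetPath("./fur_cache.%04d.usd" % f) for f in range(1001, 1361)])
Usd.ClipsAPI(prim).SetClipAssetPaths(paths)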

Also, after re-reading, your Option #2 does make sense to reduce the file size even further. The time-varying data for this hair are only 'points', 'extent' and 'velocities'. But even with just those written out, I still get large individual file sizes.

Attachments:
fur_clipValue_metadata.jpg (96.4 KB)

User Avatar
Member
658 posts
Joined: August 2013
Offline
Hi Mark. Would you be able to mock up a quick example of option 2 please, as that sounds very useful? I am stripping out UVs and then sublayering in the static UV version at the top. But anything else I can do would be super helpful. Best. Mark
User Avatar
Member
129 posts
Joined: October 2020
Offline
Solved it by restarting Houdini in Manual mode and then caching - apparently the "freed memory" helped!