Best practice caching heavy geo in Houdini to use in Maya

Member
1 posts
Joined: April 2009
I'm working on the destruction of a very high poly count model (it will be around 1-2 million points at least).

I've found that the usual File cache bgeo.sc method is quick and easy to load back, with decent responsiveness.

This of course is no good for Maya. I've looked at FBX export, since UVs need to be maintained, and also at Alembic caching, which ends up with huge file sizes and usually brings Maya to a halt.

I've also looked at caching everything through the bgeo.sc method, then splitting out smaller groups of geo from the simulation and caching those separately as Alembics.

Just wondering if anyone can point me in the right direction on what the best practice is for this type of thing.

Thanks in advance
Member
8532 posts
Joined: July 2007
I'd go with alembic or usd

andyhowell77
This of course is no good for Maya. I've looked at FBX export, since UVs need to be maintained, and also at Alembic caching, which ends up with huge file sizes and usually brings Maya to a halt.

you are probably exporting packed prims, which will create an object per piece, and that has a huge overhead to load in Maya
try unpacking and cleaning up unnecessary attributes before exporting as .abc
you can also create any hierarchy you want using the @path attribute if you need some segmentation, but per piece is definitely too much
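
roughly this, from the Python shell (an untested sketch; the /obj/destruction path and the node names are placeholders for your own setup, and the menu index for the wrangle's run-over class is from memory):

    import hou

    sop = hou.node("/obj/destruction")
    src = sop.node("IMPORT_SIM")  # packed RBD pieces coming out of the sim

    # 1. Build a coarse @path hierarchy before unpacking, so Maya gets a
    #    handful of transforms instead of one per piece.
    wrangle = sop.createNode("attribwrangle", "build_path")
    wrangle.setFirstInput(src)
    wrangle.parm("class").set(1)  # run over primitives (assumed menu index)
    wrangle.parm("snippet").set('''
    // bucket the pieces into a few Maya transforms
    int bucket = @primnum % 10;
    s@path = sprintf("/destruction/chunk_%d/geo", bucket);
    ''')

    # 2. Unpack so the export is plain polygons, not one object per piece
    unpack = sop.createNode("unpack", "unpack_pieces")
    unpack.setFirstInput(wrangle)
    unpack.parm("transfer_attributes").set("path v")  # keep path and velocity

    # 3. Strip everything Maya doesn't need; keep P, v, N, uv and @path
    clean = sop.createNode("attribdelete", "strip_attribs")
    clean.setFirstInput(unpack)
    clean.parm("ptdel").set("* ^P ^v ^N")
    clean.parm("vtxdel").set("* ^uv")
    clean.parm("primdel").set("* ^path")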

if your sim has constant topology throughout, it may be OK to export it as a single .abc file. It will be large, but it will share the topology across frames and just update varying attributes like P, so in the end it may be smaller than a sequence of .bgeo.sc files, since those have to carry topology information in every file
it will also interpolate between frames

or you can export one .abc per frame, in which case it will behave like a sequence of .bgeo.sc files, but then you may want to keep the v attribute for motion blur, since you will lose Alembic's native interpolation
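
for the export itself, something along these lines (again just a sketch; I'm writing the Alembic ROP parameter names like build_from_path and path_attrib from memory, so verify them on your build):

    import hou

    sop = hou.node("/obj/destruction")
    rop = sop.createNode("rop_alembic", "export_abc")
    rop.setFirstInput(sop.node("strip_attribs"))
    rop.parm("trange").set(1)                # render a frame range
    rop.parmTuple("f").set((1001, 1100, 1))  # placeholder shot range
    rop.parm("build_from_path").set(True)    # build hierarchy from @path
    rop.parm("path_attrib").set("path")

    # Variant A: one monolithic .abc -- shares topology across frames and
    # gives you Alembic's native sub-frame interpolation
    rop.parm("filename").set("$HIP/export/destruction.abc")

    # Variant B: per-frame files -- behaves like a .bgeo.sc sequence, so
    # keep point v for motion blur
    # rop.parm("filename").set("$HIP/export/destruction.$F4.abc")

    rop.parm("execute").pressButton()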
Edited by tamte - March 22, 2022 11:44:46
Tomas Slancik
FX Supervisor
Method Studios, NY
Member
236 posts
Joined: March 2013
As Tomas has said, if the point count is constant (you're not birthing RBD pieces into the sim), then you can cache to a single Alembic with transformations. The file will be small, but on load into Maya you still need to pull the entire thing in, which can have a bit of overhead. The good news about this method is that you can do deformation blur, since the geometry is all there in one file and the Alembic standard has linear interpolation between samples built in.

Per-frame Alembic is the preferred method, as each frame that needs to render only pulls in that frame's Alembic, and when caching heavy things to disk in Houdini, per-frame exports tend not to crash, compared to the occasional crash with one big monolithic Alembic. As Tomas said, you need point v stored on the pieces!
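
If the sim didn't bake point velocities, a Trail SOP set to Compute Velocity can approximate v right before the export node (a minimal sketch; the menu indices are from memory):

    import hou

    sop = hou.node("/obj/destruction")
    trail = sop.createNode("trail", "compute_v")
    trail.setFirstInput(sop.node("strip_attribs"))
    trail.parm("result").set(3)            # "Compute Velocity" (assumed index)
    trail.parm("velapproximation").set(1)  # central difference (assumed index)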

Unpack the geo and delete all attributes and groups you don't need in Maya; that will help lighten it up.

You can always take a look at Multiverse from J Cube, a full suite of tools for Maya to manage all this stuff; it will load those files at the speed of light, and you can use any attributes stored on the geo for shading.
It supports per-frame Alembics, but also USD. So you could go down the USD point instancing rabbit hole and create nice lightweight caches that are just the fractured geometry pieces, plus a small point cache used to transform them.
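
Stripped to its core, the USD side of that idea looks something like this with the pxr Python API (a toy sketch with made-up paths and values, just to show the PointInstancer schema):

    from pxr import Usd, UsdGeom, Gf

    stage = Usd.Stage.CreateNew("destruction_instanced.usda")

    # Prototypes: the fractured pieces, authored once as static geometry
    UsdGeom.Xform.Define(stage, "/Destruction/Prototypes")
    piece = UsdGeom.Mesh.Define(stage, "/Destruction/Prototypes/piece_0")
    # ... author (or reference) the actual piece meshes here ...

    # The instancer: a tiny per-frame point cache drives the transforms
    inst = UsdGeom.PointInstancer.Define(stage, "/Destruction/Instancer")
    inst.CreatePrototypesRel().AddTarget(piece.GetPath())
    inst.CreateProtoIndicesAttr().Set([0, 0, 0])
    inst.CreatePositionsAttr().Set(
        [Gf.Vec3f(0, 0, 0), Gf.Vec3f(1, 0, 0), Gf.Vec3f(2, 0, 0)], 1001.0)
    inst.CreateOrientationsAttr().Set([Gf.Quath(1, 0, 0, 0)] * 3, 1001.0)

    stage.Save()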

Matt's tokeru site has a good walkthrough and hip file that shows how to set up the USD stuff. It can seem like overkill,
but if you have to jam RBD stuff into Maya for rendering, USD point instancing + Multiverse is a life-changing solution.


L
Edited by lewis_T - March 22, 2022 19:04:10
I'm not lying, I'm writing fiction with my mouth.