I also stress-tested this with 1 million instances:
Workflow A: 2m 30s | 18 GB
Workflow B: 15m 30s | 18 GB (!!!)
Questions:
I will stick with Workflow A, but I would also like to understand what is causing the large discrepancy in Workflow B's render time. From what I can observe, Houdini seems to struggle to load the Packed Primitives into memory (and thus takes a long time to prepare the render), whereas with Packed Disk Primitives it appears to load the prims only as needed.
And lastly, are there alternative workflows that would be much more appropriate for rendering lots of instances?
Thank you for the suggestion, Simon, but sadly it did not improve the render time on my machine; the results are actually very close to Workflow B's.
Workflow C @ 100k instances:
Pack your geometry
Save it out as a bgeo.sc
Load it as a Packed Disk Primitive.
Use Copy to Points.
Render.
Render Time: 1m 56s | 2.6 GB
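For reference, here is a minimal Python sketch of how Workflow C could be wired up through hou. The source geometry, file path, and node names are placeholders of my own, and the "delayed" token for the File SOP's Load menu is from memory, so double-check it on your build; this is a sketch of the idea, not a drop-in script.

```python
import hou

# Hypothetical output path for the packed source geometry.
GEO_FILE = "$HIP/geo/instance_source.bgeo.sc"

container = hou.node("/obj").createNode("geo", "instancing_test")

# Steps 1-2: pack the geometry and save it out as bgeo.sc.
source = container.createNode("testgeometry_pighead")  # stand-in source
pack = container.createNode("pack")
pack.setFirstInput(source)
pack.geometry().saveToFile(hou.expandString(GEO_FILE))

# Step 3: load it back as a Packed Disk Primitive via a File SOP.
# I believe the Load menu token for "Packed Disk Primitive" is
# "delayed", but verify against your File SOP -- this is an assumption.
file_sop = container.createNode("file")
file_sop.parm("file").set(GEO_FILE)
file_sop.parm("loadtype").set("delayed")

# Step 4: copy the packed disk prim onto points (your 100k cloud here).
points = container.createNode("scatter")
points.setFirstInput(source)  # placeholder surface to scatter on
copy = container.createNode("copytopoints")
copy.setInput(0, file_sop)
copy.setInput(1, points)
copy.setDisplayFlag(True)
copy.setRenderFlag(True)
```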
One would think that using Packed Primitives directly should be faster, since it should not incur the cost of reading data from disk (as Packed Disk Primitives do), or that at the very least the render times should be roughly the same for both workflows.
Reading through the docs again, I found this part:
Whereas Houdini must write the entire geometry for any in-memory geometry into the IFD (the scene description file it sends to Mantra), for packed disk primitives it simply writes the reference to the file on disk. This can make IFDs much faster to generate and smaller on disk for very large/complex scenes.
This probably explains why Packed Primitives seem to eat up so much more memory. Packed Primitives, as described by the docs, are "in-memory", so their full geometry gets written into the IFD, whereas with Packed Disk Primitives only a reference to the file on disk gets written.
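One way to check this is to dump the IFD to disk for both workflows and compare file sizes. A minimal sketch, assuming a valid camera already exists in the scene; soho_outputmode and soho_diskfile are the Mantra ROP parameters that write the scene file to disk instead of rendering, and the paths below are placeholders.

```python
import os
import hou

# Hypothetical path to write the IFD to.
IFD_PATH = "$HIP/ifds/test.ifd"

rop = hou.node("/out").createNode("ifd", "ifd_dump")  # Mantra ROP
# Assumes a camera exists at this path in your scene.
rop.parm("camera").set("/obj/cam1")
rop.parm("soho_outputmode").set(1)       # write the IFD to a file...
rop.parm("soho_diskfile").set(IFD_PATH)  # ...at this path
rop.render()

size_bytes = os.path.getsize(hou.expandString(IFD_PATH))
print("IFD size: %.1f MB" % (size_bytes / (1024.0 * 1024.0)))
```

If the docs passage above is right, the Packed Disk Primitive scene should produce a far smaller IFD (just file references per instance), while the in-memory Packed Primitive scene embeds the full geometry.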