Packed geometry memory usage.

Member
90 posts
Joined: May 2016
Hello,

I want to instance a box primitive onto a point cloud I have. I used the “Copy Stamp” node in H16 and checked “Pack Geometry Before Copying”. However, I'm not sure if I'm missing something, because the result takes 5.5 GB. The point cloud has 12 million points.

Attachments:
056.PNG (1.2 MB)

Member
4189 posts
Joined: June 2012
That works out to about 0.5 KB per box (5.5 GB / 12 million is roughly 460 bytes per instance), which seems pretty good to me!
Member
116 posts
Joined: April 2016
It depends on the size of your box, but honestly the overhead is mostly the memory each instance takes up, not the size of the source object. In my tests Mantra is actually on par with Arnold when using packed prims.

5.5 GB for 12 million points actually sounds right. Be aware there is a bug I've encountered with packed prims and materials: if you assign your material before packing, your memory usage will go way up; if you assign it after the pack, it should be fine. Not sure what's going on with that, but in 15 it doesn't happen.
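If you hit that bug, the “assign after packing” order can be as simple as a Primitive Wrangle dropped after the Pack SOP; the material path “/mat/brick” here is just a placeholder:

    // Primitive Wrangle placed AFTER the Pack SOP.
    // Packed prims resolve materials through the shop_materialpath
    // primitive attribute, so setting it here tags every packed box
    // without touching the source geometry.
    // "/mat/brick" is a placeholder; point it at your own material.
    s@shop_materialpath = "/mat/brick";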
Simon van de Lagemaat
owner the Embassy VFX
Member
90 posts
Joined: May 2016
EDIT: I just realized that in the cases I referenced earlier, the poly count is high because their source objects are much higher poly than mine. Still, I would like to know if there is a way I can reduce the memory, since I actually want to instance hundreds of millions.

Simon van de Lagemaat2
It depends on the size of your box, but honestly the overhead is mostly the memory each instance takes up, not the size of the source object. …

Hi Simon, thank you for the reply. Is there a way to instance more efficiently?

Reading this: http://lesterbanks.com/2016/11/3-billion-polygon-redshift/ [lesterbanks.com]. This artist rendered 3 billion polygons with Maya + Redshift.

Also here: https://fstormrender.ru/Downloads/FStormMultiScatter.jpg [fstormrender.ru]

According to the author: “FStorm instances. 2.500.000.000.000 polygons. 0.89Gb taken. 5mins on 2xGTX980Ti.”
Member
116 posts
Joined: April 2016
Polygon count is a bullshit indicator of instancing efficiency. It's about the overhead of the actual instance, i.e. I could instance 12 million 6-poly boxes or 12 million 10k-poly boxes and my memory usage would barely budge. So quoting billions or trillions of polygons isn't really a good measure. You could swap your boxes with trees and your memory usage wouldn't change a whole lot.

Mantra can easily match the numbers in the examples you've posted. Just consider the following.

1. Number of unique instances, i.e. scattering one source object is cheaper than scattering hundreds of unique source objects, so randomize using scale and rotation as much as you can (there's a wrangle sketch for this after the list). Find the limit of the human eye to pick out patterns and don't go above it. Use unique sources only as much as you need to. You'll be surprised at how many unique sources you can actually use, but I suggest finding that out after you've achieved placement and density.

2. Kill every single attribute you don't need on the source (an Attribute Delete SOP works) and pass as few as you need from the cloud. They will chew up memory, since random per-point values are unique and pile up once you get into the tens of millions.

3. Cull as many points outside the camera view as you can, using techniques like camera-frustum culling, and drop points that are occluded by geo in the scene (see the frustum-cull sketch below). Use those to create a density attrib on your mesh that can drive the scattering density.
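For point 1, something like this in a Point Wrangle on the scattered cloud, feeding a Copy to Points, would do it; the scale range and seed offsets are just placeholder values:

    // Point Wrangle on the scattered cloud, before the Copy to Points.
    // Copy to Points picks up pscale and orient automatically.
    // fit01() maps rand()'s 0..1 output into a chosen range.
    f@pscale = fit01(rand(@ptnum + 1), 0.8, 1.4);
    // Rotate around Y only, so the boxes stay upright.
    float ang = rand(@ptnum + 7) * 2 * PI;
    @orient = quaternion(ang, {0, 1, 0});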
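And for point 3, a frustum cull can be as simple as a Point Wrangle like this; the camera path /obj/cam1 and the padding value are placeholders for your scene:

    // Point Wrangle: delete points the camera can't see.
    // toNDC() maps P into the camera's normalized screen space:
    // x and y run 0..1 across the frame, and z is negative in front
    // of the camera.
    vector ndc = toNDC("/obj/cam1", @P);
    float pad = 0.05; // small margin so border points survive
    if (ndc.x < -pad || ndc.x > 1 + pad ||
        ndc.y < -pad || ndc.y > 1 + pad ||
        ndc.z > 0)
        removepoint(0, @ptnum);

Occlusion culling is trickier (e.g. casting intersect() rays toward the camera) and only worth it if the frustum cull alone doesn't get you there.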
Simon van de Lagemaat
owner the Embassy VFX
Member
90 posts
Joined: May 2016
Hi Simon, thank you for the feedback. I think the main thing I'm going to have to do is option 3 (culling). What I'm trying to do is a “lego” cityscape using an aerial lidar point cloud, so I don't really need variation in the source other than shader differences between the instanced objects.


Simon van de Lagemaat2
Polygon count is a bullshit indicator of instancing efficiency. It's about the overhead of the actual instance. …
Member
4189 posts
Joined: June 2012
Or Clarisse. It will simply chew up anything.