creating thousands of nodes, memory / speed?

Member
84 posts
Joined: Jan. 2013
Hi,

a question about how Houdini handles nodes (HDAs + subcontent) and their memory: in my application I am creating thousands of nodes of the same type based on an HDA (mostly wrappers for File SOPs plus some conversion nodes). This is done via Python. What I see is the memory used for this being very high (20 GB for 5000 instances, etc.), and processes like collapsing or merging those nodes into an OTL literally take hours.

I am wondering:
- how are all of those copies of the same node managed in the scene? As instances of the same type with different parameters, or as individual node networks with subnets?
- what is causing the high memory load and time requirements? Is there a suggested way of creating and dealing with thousands of nodes?
- after refreshing the scene (new) it also seems that most of that memory is not released - might this be an indication of a leak?

Is there a suggested way to a) structure your HDAs so they are quick to instantiate, b) create your nodes, and c) manage them in your scene (visibility flags, undo queue, expressions within the node, etc.)?
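For reference, a stripped-down sketch of the kind of creation loop we use (the HDA type name below is just a placeholder, and hou.undos.disabler() - if your build has it - is one way to keep the undo queue from ballooning):

```python
import hou

obj = hou.node("/obj")
with hou.undos.disabler():  # keep the undo queue from recording all of this
    for i in range(5000):
        # "my_geo_wrapper" is a hypothetical HDA type name
        node = obj.createNode("my_geo_wrapper", "asset_%04d" % i)
        node.setDisplayFlag(False)  # keep the viewport from cooking it
```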

Any previous experiences or hints on how to manage this amount of nodes best would be appreciated!

Thanks,
Carsten
carsten kolve - ds @ image engine
Member
2624 posts
Joined: Aug. 2006
a question about how Houdini handles nodes (HDAs + subcontent) and their memory: in my application I am creating thousands of nodes of the same type based on an HDA (mostly wrappers for File SOPs plus some conversion nodes). This is done via Python. What I see is the memory used for this being very high (20 GB for 5000 instances, etc.), and processes like collapsing or merging those nodes into an OTL literally take hours.
Python, by its very nature, is going to loop through everything in the scene; thousands of nodes will take ages. It would be interesting to see a simple example of your approach to instancing. I do know loading geometry from disk into the viewport will grind. Under the File SOP I tend to load just the bounding information and use a simple box to visualize what's going on.
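Something along these lines as a rough sketch - the loadtype menu tokens vary between builds, so inspect them first rather than trusting the "info" token I've assumed here:

```python
import hou

# Hypothetical File SOP path; swap in one of your own File SOPs.
file_sop = hou.node("/obj/geo1/file1")

# The available load modes differ per build; inspect the tokens first.
print(file_sop.parm("loadtype").menuItems())

# "info" is assumed here to be the bounding-box-only token.
file_sop.parm("loadtype").set("info")
```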

Rob
Gone fishing
Member
7715 posts
Joined: July 2005
carstenk
- how are all of those copies of the same node managed in the scene? As instances of the same type with different parameters, or as individual node networks with subnets?

Each node type instance is separate. They have independent parameters, cook output, etc.

- what is causing the high memory load and time requirements? Is there a suggested way of creating and dealing with thousands of nodes?

It's far more likely that this is due to the geometry being loaded than to the memory taken up by the nodes themselves. I've seen scenes with > 4000 nodes take up less than 300 MB.

(Top-level) HDAs, VOPs, and SHOPs do take up more memory than normal nodes, though, because they contain extra data within their definitions. However, it's not clear to me how many of those you have in your scene.

- after refreshing the scene (new) it also seems that most of that memory is not released - might this be an indication of a leak?

Not really. Houdini's memory allocator is unlikely to return the RAM back to the OS immediately. It would only point to a leak if your session takes up *another* 20 GB when you load the .hip file back in again.

Is there a suggested way to a) structure your HDAs so they are quick to instantiate, b) create your nodes, and c) manage them in your scene (visibility flags, undo queue, expressions within the node, etc.)?

I think this is hard to say without more information about what you're trying to accomplish. If you're loading in 5000 separate pieces of different geometry, then there's little you can do.

One thing to test is to save your .hip file and then reload it with “houdini -n file.hip” so that no cooking is done. Check your memory usage again. This will tell you whether those 20 GB are due to the nodes themselves or to the geometry being generated from those nodes.
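If you want to read the numbers from inside the session rather than from the OS, one option is the hscript memusage command via HOM - a quick sketch:

```python
import hou

# Print Houdini's own view of its memory usage; compare this figure
# between a normal load and a "houdini -n" (no-cook) load.
out, err = hou.hscript("memusage")
print(out)
```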
Member
84 posts
Joined: Jan. 2013
Hi Rob and Edward,

Thanks for your replies! The nodes in question are containers for separate geometry (actually multiple representations of the same geometry, think LODs, etc.). Simplified, they contain lots of File SOPs pointing to potentially animated geometry, and processes can merge data from various OUT nulls inside. They would typically not merge from all of them, but from a user-defined subset.

The object-level node itself has an unconnected null node inside with its display and render flags on; the object itself is set to invisible. So I am assuming that none of the File SOPs are actually evaluating or loading data, because nothing is pulling data from them?

It's good to know that all of my nodes are actually separate node networks! I was under the assumption that they were treated more like instances - but this means I should really look into reducing the number of nodes inside as much as possible. Is the same true for VEX/VOP type nodes/networks (VOP SOPs, etc.)?

PS: I've still got to try the no-cook option to see if it makes a difference.
carsten kolve - ds @ image engine
Member
7715 posts
Joined: July 2005
carstenk
The object-level node itself has an unconnected null node inside with its display and render flags on; the object itself is set to invisible. So I am assuming that none of the File SOPs are actually evaluating or loading data, because nothing is pulling data from them?

Maybe. When the object's visibility is turned off, the viewport won't cause it to cook. However, that's not to say that you don't have expressions in some other displayed objects that pull on that data.
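A quick way to check is to ask HOM for the node's dependents - a sketch, with a hypothetical asset path:

```python
import hou

asset = hou.node("/obj/asset_0001")  # hypothetical asset path

# Any node listed here references this asset (or its children) through
# parameters or inputs, and could pull on its data when cooking.
for n in asset.dependents(include_children=True):
    print(n.path())
```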
Member
2624 posts
Joined: Aug. 2006
carstenk
Hi Rob and Edward,

So I am assuming that none of the File SOPs are actually evaluating or loading data, because nothing is pulling data from them?

It's good to know that all of my nodes are actually separate node networks! I was under the assumption that they were treated more like instances - but this means I should really look into reducing the number of nodes inside as much as possible. Is the same true for VEX/VOP type nodes/networks (VOP SOPs, etc.)?

PS: I've still got to try the no-cook option to see if it makes a difference.

Hi,
Reduction and simplification would be the way to go; my preference is a transform container at the object level, followed by an object container with the geometry. Having the display flag off at the top level will certainly stop the viewport cooking.
My current situation is dealing with an asset containing 159 objects at ~1.5 million points per object. To make things easy I tag all my instance points with data that can be applied on a per-instance basis: each instance point number just uses the same asset number, i.e. point _01 gets data from model_01. It makes things more manageable for sure!
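As a rough sketch of that tagging inside a Python SOP (the model naming scheme here is just illustrative):

```python
# Inside a Python SOP: tag each instance point with the path of the
# asset it should pull data from (point _01 -> model_01, etc.).
node = hou.pwd()
geo = node.geometry()

instance_attrib = geo.addAttrib(hou.attribType.Point, "instance", "")
for pt in geo.points():
    pt.setAttribValue(instance_attrib, "/obj/model_%02d" % pt.number())
```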

Rob
Gone fishing
Member
263 posts
Joined: Oct. 2010
Hi, I'm working with carsten.

In the end, it looks like it was just the sheer number of nodes - nothing was ever getting cooked. We need to instantiate hundreds (or perhaps thousands) of HDAs, which themselves contain a few other nodes. It turns out that one of those internal nodes was a moderately complicated VOP SOP containing ~75 VOP nodes. These internal VOP nodes also count toward total node memory usage, so in the end we had hundreds of thousands of nodes in total, including all sub-children.
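You can verify this yourself by counting every node in the scene, sub-children included - a quick sketch:

```python
import hou
from collections import Counter

all_nodes = hou.node("/").allSubChildren()
print("total nodes:", len(all_nodes))

# Break the total down by type to spot which networks multiply it.
by_type = Counter(n.type().name() for n in all_nodes)
for type_name, count in by_type.most_common(10):
    print(type_name, count)
```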

To reduce the total number of nodes, we compiled this VOP SOP to a VEX SOP, which cut both our creation time and our memory usage to a third of what they were originally.
Member
581 posts
Joined: July 2005
mattebb
Hi, I'm working with carsten.

In the end, it looks like it was just the sheer number of nodes - nothing was ever getting cooked. We need to instantiate hundreds (or perhaps thousands) of HDAs, which themselves contain a few other nodes. It turns out that one of those internal nodes was a moderately complicated VOP SOP containing ~75 VOP nodes. These internal VOP nodes also count toward total node memory usage, so in the end we had hundreds of thousands of nodes in total, including all sub-children.

To reduce the total number of nodes, we compiled this VOP SOP to a VEX SOP, which cut both our creation time and our memory usage to a third of what they were originally.
Mmmm, the buggy VOP SOP again. I recently had a very weird issue with some HDAs that took a long time to load; after sending it to support, it seems the problem was related to a bug in the VOP SOP operator. This was in 12.0 and 12.1. 12.5 seems to have improved the situation, but I haven't tried it yet.
Best Regards

Pablo Giménez
Member
345 posts
Joined:
Using VOP SOPs in OTLs is generally not recommended, especially when using many copies of the same asset. As Matt said above, compiling to a VEX SOP is the way to go. Every time the scene is loaded, Houdini compiles all VOP SOP nodes into VEX, and this can take a while. The same goes for shaders: if you have tons of materials, it may be reasonable to compile them too.
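As a quick audit, something like this lists the remaining VOP SOPs in a scene (assuming "vopsop" is the node type name in your version - adjust if it differs):

```python
import hou

# List every VOP SOP in the scene as a candidate for compiling to VEX.
for n in hou.node("/").allSubChildren():
    if n.type().name() == "vopsop":
        print(n.path(), "-", len(n.children()), "VOP nodes inside")
```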

kuba