So I figured out what is causing this and wanted to share for future reference.
This is caused by the surface field that is generated from the narrow band during the primary sim.
The SDF surface from the primary sim has limited information because of the narrow band: distance information is only stored near the surface, within the banding of the VDB.
My solution was not very elegant: I converted the primary fluid surface to polygons, then computed a 'VDB from Polygons' with "fill interior" turned on and at least 1 meter of banding. The whitewater solver's depth is set to 0.8 by default, which the 1 meter bands should cover (mostly for the splashes above the surface; the underwater portion is fully covered by the 'fill interior').
This provided me with the necessary depth data that I needed. Luckily this can be done as a post process, so you don't have to resim your primary fluid. After that it largely worked as expected.
Not sure if there is a more elegant way, but be careful with high-resolution narrow-band sims (voxel-based distance half band), as they might not capture the surface depth detail required for the whitewater splashes.
– The strange part is that the 'vel' field is totally fine and captures the right kind of information deep inside the liquid of the primary sim. It almost feels like a bug; not sure if it is.
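To make the narrow-band limitation above concrete, here is a tiny plain-Python sketch (illustrative only, not Houdini code; the half-band and depth values are made up): a narrow-band grid clamps signed distance to the band width, so a depth query like the whitewater solver's default 0.8 reads a flat value past the band, while a "fill interior" SDF returns the true depth.

```python
# Illustrative sketch of why a narrow-band SDF lacks depth information.
# A narrow-band grid only stores true signed distance within the band;
# deeper samples are clamped to the band width, so a depth query reads
# a flat value instead of the real depth. "Fill interior" stores true
# distance everywhere inside.

def narrow_band_sdf(true_distance, half_band=0.5):
    """Signed distance as stored by a narrow-band grid: clamped to the band."""
    return max(-half_band, min(half_band, true_distance))

def filled_sdf(true_distance):
    """Signed distance with 'fill interior': exact everywhere inside."""
    return true_distance

# A point 2.0 units under the surface (negative = inside the liquid):
print(narrow_band_sdf(-2.0))  # -0.5  (clamped, no usable depth signal)
print(filled_sdf(-2.0))       # -2.0  (real depth, what the solver needs)
```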
Technical Discussion » H17 Whitewater stepping issue.
- pclaes
- 257 posts
- Offline
Hey Charles,
Did you ever figure this out?
I'm running into the same stepping issue with a very default sim (sphere with 1m diameter hitting water) in H17.5.173.
So far I found it does not seem to be the sourcing.
I am using narrow-band for the primary flip sim and I'm thinking it might be coming from the bands of the vdb of the volumes of the primary fluid sim.
Considering that the airborne particles are supposed to be ballistic and not affected by the fluid sim, it does not make sense as to where the stepping is coming from. I am going to dig into the h17.5 whitewatersolver some more.
Thanks,
Peter
Houdini Indie and Apprentice » Curious about Houdini, not sure should I learn it though
Just learn it. You will not regret it.
The amount of understanding you will gain will allow you to create/understand tools in any other package on a much deeper level. You will see Houdini more as a platform instead of as an application. You will learn computer graphics instead of shelf tools.
There is a learning curve, but for the stage you are at, where you are looking for something similar to ICE, you are either looking for Houdini's vops, or the new Bifrost in Maya. – But Houdini is much more integrated across different contexts.
Houdini Engine for Maya » use instancer node checkbox
Hey,
Thank you for the example, that does work. I will compare it to my otl, see what I am doing differently, and try to provide you with a modified otl.
Houdini Engine for Maya » use instancer node checkbox
Hi,
I've been trying to get the instancer to work in Maya using the "use instancer node" checkbox on the digital asset, but so far I've not been that successful at getting a meaningful result.
So I was hoping someone could put together a basic example of how the houdini->maya instancer works by using the ‘use instancer node’ checkbox.
Steps I've taken:
*) In Houdini I've created a subnet which contains a sphere obj, a box object and a grid object.
The sphere object is named: part_0
The box object is named: part_1
The grid object is named: points (and I have deleted all the primitives and just kept the points). I also added N and up, built the transformation matrix so I could extract the rotation, and added a rotationPP attribute.
*) Inside this subnet I've also created an ‘instance’ object node. I am object merging in the points of the grid, created a body_part integer attribute on the points that evaluates to: $PT%2
And then I've built the instance string as well that uses this body_part attribute and forms instance strings like:
`opfullpath(“../../part_”+$BODY_PART)`
*) The controls for this instance object node are set to use ‘fast point instancing’.
*) Then I make a digital asset out of this subnet.
*) When loading that in Maya I can see my geometry (part_0, part_1, points, instancer1), as well as the nparticles, but the instancer seems not to be connected to either the particles or the part_0, part_1 geometry.
— I can set this part up by hand or by script if required, but I would like to know if this is the expected behavior or if I am missing something.
– Also I want to use the instancer as I will be dealing with large amounts of data.
I was reading through the input/output sticky here:
https://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&t=32561 [sidefx.com]
specifically this section I would like to get working:

instancer is supported. Instance geometry in Houdini can be outputted as Maya particle instancer or transform instances. The transform for the instances are outputted. It's possible to switch between the two by toggling "Use Instancer Node", which is just under "Asset Option".

I am using Houdini 13.0.491 and Maya 2014 Service Pack 2.
Any example of how to use the instancer between houdini->maya, using the 'use instancer node' checkbox, would be greatly appreciated. Getting the transforms for the instances output would be interesting too.
Thank you!
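The point setup described in the steps above can be sketched in plain Python (hypothetical names; in Houdini this logic lives in the $PT%2 point expression and the opfullpath() instance string, not in Python):

```python
# Sketch of the per-point logic from the steps above: assign
# body_part = point_number % 2 and build the instance path string the
# way opfullpath("../../part_" + $BODY_PART) does. Purely illustrative.

def instance_paths(num_points):
    paths = []
    for pt in range(num_points):
        body_part = pt % 2            # $PT % 2
        paths.append("../../part_%d" % body_part)
    return paths

print(instance_paths(4))
# ['../../part_0', '../../part_1', '../../part_0', '../../part_1']
```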
Houdini Lounge » Filling volume recursively
Perhaps a hint in terms of thinking about it differently:
How about starting with the filled up volume of points first, then having the outer points ‘snap’ to the closest inner points and doing that recursively until you have one point left. And then reversing the entire thing.
Depending on how complicated you want to make this, this is a kind of pathfinding algorithm.
The complexity will mostly come from the rules of splits and randomness that you want to use.
You should also search for ‘aggregation’ algorithms.
Doing it in pops is ok but not necessary; personally I would do it in a sopsolver.
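A toy 2D version of the snap-to-the-closest-inner-point idea might look like this in plain Python (illustrative only; a real setup would live in a sopsolver):

```python
# A loose sketch of "snap outer points to inner points, then reverse".
# Each pass removes the point farthest from the current centroid by
# snapping it to its nearest remaining neighbour; reversing the removal
# order gives an aggregation/growth order. Toy 2D points, no split rules.

import math

def aggregate_order(points):
    pts = dict(enumerate(points))
    removed = []
    while len(pts) > 1:
        cx = sum(p[0] for p in pts.values()) / len(pts)
        cy = sum(p[1] for p in pts.values()) / len(pts)
        # farthest point from the centroid is the next one to snap inward
        far = max(pts, key=lambda i: math.hypot(pts[i][0] - cx, pts[i][1] - cy))
        fx, fy = pts.pop(far)
        near = min(pts, key=lambda i: math.hypot(pts[i][0] - fx, pts[i][1] - fy))
        removed.append((far, near))   # (point, point it snapped to)
    # reversed removal order = the order in which the structure "grows"
    return list(reversed(removed))
```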
Houdini Lounge » Hair braid
I wonder if you could braid it by using a wire solver and animated target shapes. So you would actually model the braid using dynamics and animation.
*) Or you could figure out what the actual math is for a braided curve.
*) Or you could manually move the points of the curves.
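For the "actual math" option, one possible parametrisation is phase-shifted sinusoids around a common axis; this is just a sketch with made-up radius and frequency values, not the one true braid formula:

```python
# One possible parametrisation of a braided curve: three strands as
# phase-shifted sinusoids around a common axis running along +Y.
# Purely illustrative; radius/frequency are made-up parameters.

import math

def braid_strand(t, strand, n_strands=3, radius=0.2, freq=2.0):
    """Point on one strand of a braid, at curve parameter t."""
    phase = 2.0 * math.pi * strand / n_strands
    x = radius * math.sin(freq * t + phase)
    z = radius * math.sin(2.0 * freq * t + phase) * 0.5  # over/under weave
    return (x, t, z)

# Sample each strand along t to build the three polylines of the braid:
strands = [[braid_strand(i * 0.1, s) for i in range(50)] for s in range(3)]
```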
Houdini Lounge » PyroFX smoke bug
Looks like a tricky problem!
I will play around with it as well, but there are some smart guys in this thread already, so not sure if I will be able to solve that specific issue.
Sometimes a little bit of shader displacement (along the gradient, tracking with the rest position) might be enough to help hide the problem too - that's more of a fix than a real solution though.
Apart from that it does look like banding of some sort, I've had that happen when the container gets resampled and the resolutions between containers are not the same. It might be good to analyze the velocity in that region.
Also the confinement exaggerates the eddies/rotational vortex movement. But if it is too strong it can cause density to collapse in on itself (kinda like a black hole). The pressure field should then work as a counterforce to prevent this from happening, as that region would become a zone of high pressure.
I would try to visualize the various fields in that area, specifically your velocity and pressure fields.
– it might be that the confinement is messing with the areas of low velocity.
– or it is a sampling issue. I know for some things only half the res of your fields is used. For example the density field for a resize-able container gets sampled at half res. There might be other areas within the solver where this is happening, but I don't know by heart.
– I would be curious to see if you could make it go away by resampling the field in a vdb and filtering it a bit.
If all else fails, send a pm to Jeff Lait (jlait) or support .
Houdini Lounge » Peter Claes showreel 2014
Hey,
I just made a new reel with some of the shots I've worked on over the past years:
New showreel for 2014:
https://vimeo.com/90880987 [vimeo.com]
All effects work is in Houdini, most stuff is rendered in mantra. Of course some of this is a team effort, so I definitely want to give a shout out to all the talented people that have advised me and that I have worked with over the past years. I've grown a lot and it is because of working and learning from some really talented people!
My H1B visa here in the US is in its final year, so I have to get some perspective again. It can be extended for another 3 years, but we'll see what happens next. At the moment I'm looking for options.
I might need to play with the encoding for Vimeo a bit as it seems to lose/blur some details, but I'll figure that out over the weekend.
Kind regards,
Peter
Technical Discussion » Instancing Instances
jason_iversen
You can create nested instances by using nested packed primitives. This defines the geometry once and you can nest them like crazy.
Nice! Still on 12.5 here, so my info is outdated. I look forward to switching soon and making use of this awesome tech.
Technical Discussion » Instancing Instances
hey,
I'll reply here instead of vimeo.
Short answer: you cannot instance instances. This is because the current rendertime procedurals do not support calling other procedurals.
So what can you do?
Depending on the requirements of your shot, you can either:
*) create a few trees (with leaves) and instance those around.
*) Or patches of trees if you are dealing with entire forests.
*) If you are dealing with entire forests, I highly recommend making a level-of-detail tree patch, so the foreground trees load high-res geo, the midground trees render leaves as single-face polygons, and the background trees render leaves as points/particles.
If you don't have that many trees and perhaps you need interaction with the leaves, you create an instance for each leaf and you could also create instances for the treetrunks.
Good luck!
Houdini Lounge » Houdini quotes?
AdamT
Don't you worry, you'll be old too before you know it
"Oh god, dad thinks he's so cool/hip… it's so embarrassing!"
So does ".hip" as in a Houdini hip file actually mean something -> Houdini I… P…? Otherwise it could have been a ".fun" file too?
In regards to the quote - it is tricky! - so here are some lame attempts
– perhaps some urban slang mix between “wassup and wasSOP” or
– “What's cookin'? (good lookin')”
– currently cooking: /dopnet1/life
– Latest production build: ..
– Happy `opdigits("/)` Birthday!
—- or perhaps a little geekier: “I am `opdigits(”.“)` old!”
– .saveAndIncrementFileName()
or more in line with your other quotes:
– Without fx it's just greenscreen
or more random: “converting particles to levelset”
“Fatal Error Avoided”
Technical Discussion » Most Efficient way to render a typical FX scene
JordanWalsh
Prefractured Geo (no changes in point count), Geo copy stamped onto a popnet (geo being created delete on the fly so changing point counts) and Volumetric geo from Pyro or something.
So I'm thinking delayed load shaders for all, as it won't have to read the sim off disk, then write it to the ifd, then re-read it into mantra for rendering.
If you are dealing with a lot of unique pieces I would recommend using the fast point instancer in combination with the “instancefile” attribute (path to geo on disk). This is quite fast for ifd gen. This is in a way similar to delayed load, but rather than delayed loading all the pieces at once, you load each piece when it is required.
– On a side note, I would advise against the copy node for applying transforms if you are dealing with a simple scenario like copying pieces. Instead look into the Transform Pieces otl – generally this is used in combination with the dopimport (as points), but it can be used with your pops as well. I generally only use this for viewport visualization and use the fast instance setup for rendering.
Have a look inside the transform pieces otl, it has the makeinstancexform vop otl (which is a really nice example of how to combine all the various transform attributes together).
Your volumes should be delayed loaded - or you can use the instancefile with the fast instancer if you are clustering your volumes.
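The instancefile idea above can be sketched as a simple mapping from sim points to piece files on disk (paths and naming here are purely illustrative):

```python
# Hypothetical sketch of the "instancefile" idea: each sim point stores
# a path to a piece on disk, so IFD generation only references files
# instead of embedding the geometry. Paths/attribute names are made up.

def assign_instancefiles(piece_ids, geo_dir="/proj/geo/pieces"):
    """Map each point number to the on-disk file of the piece it instances."""
    return {pt: "%s/piece_%04d.bgeo" % (geo_dir, pid)
            for pt, pid in enumerate(piece_ids)}

print(assign_instancefiles([3, 7]))
# {0: '/proj/geo/pieces/piece_0003.bgeo', 1: '/proj/geo/pieces/piece_0007.bgeo'}
```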
Houdini Lounge » Project Bifrost
Cool stuff! More node-based methodologies never hurt. Considering all the cool stuff that has come out of Softimage's ICE, and that it is very portable in concept to Houdini, I look forward to what some of the more hardcore Maya users will do with this node-based interface.
I like Duncan's corner too. For years that guy has been using Maya in unconventional ways, pushing boundaries. Very inspirational! As a Houdini user and ex-Maya user I like to keep an eye on what else is going on and see if I can reverse engineer some of the features they have added, to rebuild them as more customizable tools in Houdini.
As always with those paint-effects kind of elements, it is very fast to get 85% of the way there and very hard to get the last 15% when custom changes are required. The C++ black-box-like implementation of these things does provide amazing speed for interactive viewport behaviour.
I did not see anything I thought was impossible to build in houdini, which is nice too.
Work in Progress » Destruction simulation
Once you get your more advanced model with more pieces you may want to read some of the info in this thread too:
http://forums.odforce.net/index.php?/topic/16830-bullet-or-rbd-for-fracturingcrumbling-effects/ [forums.odforce.net]
Good luck!
Technical Discussion » Bullet solver and fieldForce
Hey JC,
Here are two threads where I have also posted a bit of information in regards to bullet & rbd - there might be some useful info in there for you too:
http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&t=20909 [sidefx.com]
http://forums.odforce.net/index.php?/topic/16830-bullet-or-rbd-for-fracturingcrumbling-effects/ [forums.odforce.net]
Houdini Lounge » How to manually place points on a surface?
I tend to do this:
Project a nurbs grid onto the geo with a ray sop. Use a wrap-deform (clothdeform) to stick the grid to the surface.
Add a point with an add sop. Use a vopsop with a uv lookup so you can look up position by using a uv coordinate. (with primitive attribute vop).
I would not recommend this for a lot of points, but I have used it successfully for making tears/drool run down a person's face.
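The uv lookup step can be sketched in plain Python as bilinear interpolation over a patch, which is roughly what the primitive attribute lookup in the vopsop does (the 2x2 patch here is illustrative):

```python
# Sketch of a uv lookup: bilinear interpolation of a position on a grid
# patch by (u, v). A real nurbs/primitive lookup evaluates the surface,
# but the interpolation idea is the same. The 2x2 patch is illustrative.

def uv_lookup(corners, u, v):
    """corners = ((p00, p10), (p01, p11)), each an (x, y, z) tuple."""
    (p00, p10), (p01, p11) = corners

    def lerp(a, b, t):
        return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

    # interpolate along u on both rows, then along v between the rows
    return lerp(lerp(p00, p10, u), lerp(p01, p11, u), v)

corners = (((0, 0, 0), (1, 0, 0)), ((0, 0, 1), (1, 0, 1)))
print(uv_lookup(corners, 0.5, 0.5))  # (0.5, 0.0, 0.5)
```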
Technical Discussion » Bullet solver and fieldForce
Hey JC,
two things:
1) you should constrain your window pieces together with a “gluenetwork constraint” dop - ( in combination with the gluecluster sop). – this holds your window together.
2) You can pin constrain the window corners with traditional pin constraints. This makes sure your window does not drop out of the frame as a whole piece at the start of the simulation.
– Also you can automate that entire setup so if your geo changes it automatically selects the corners etc.
That works quite well and is what I did for this spot:
http://www.youtube.com/watch?v=686S_NcudLY [youtube.com]
Good luck !
Houdini Lounge » Nuke creeping up on Houdini?
I would not worry about Naiad that much. As mentioned above, flip fluids is only one aspect.
What will have a much bigger impact is when openvdb is completely implemented in Houdini ( a lot of that should be in the next version, or you can already compile it right now): http://www.openvdb.org/ [openvdb.org]
Those volume tools are multi-functional: fluids, volumetric modeling and filtering, meshing, volumetric fracturing, fast io.
Besides that there are some other reasons which I have mentioned here:
http://forums.cgsociety.org/showpost.php?p=7241540&postcount=17 [forums.cgsociety.org]
Nuke already has particles, so basic dynamics and modeling tools will help for quick fixes or small additions… and to be fair, I'd rather spend my time doing the complex, challenging fx instead of adding minor bits of debris. (A big part of destruction is controlling the shape of the pieces, triggering the glue constraints, adding secondary particles and debris… good luck to them trying to set all that up in Nuke. Therefore it will most likely be restricted to small-scale additional debris.) Also think about how much more compositors will need to know. It won't just be about 2d anymore: the more 3d is implemented in Nuke, the better their understanding of 3d needs to be too.
Also I like using nuke for my fxcomps. It is nice to have a separation between packages. That way you can hand off the images to someone else along with the nuke script of the fxcomp if they chose to use it. Imagine handing a compositor your Houdini scene, they might get a little confused.
Technical Discussion » Voronoi Fracture Geo Replacement
Or you can apply the transforms directly to the pieces. (Same thing happens when doing delayed load instancing). You need two things for this:
A) the pieces geometry (initial state of pieces) with a “piece” attribute (can be turned on to be kept from the voronoi sop - or create your own with the connectivity sop)
B) the points from the sim, coming from the dopimport (as points to represent objects)
1) Bring in the points with the dopimport. (-> B)
2) Attribpromote a "piece" attribute on your pieces geometry to points. (-> A)
3) Use this piece attribute as a lookup in a vopsop to apply the orient, P and scale attributes coming from the dopimport (this gives you really fast transforms, much faster than the copy sop). (A to look up in B)
Now here is the cool bit: You can replace the pieces geometry (A) with whatever geo you want (C), as long as you make sure the piece attribute is transferred, because eventually that is what is used to look up the transformation attributes (in B). (C to look up in B, using the piece attribute from A).
The C geometry can be clusters of other geometry, as long as they all have the same “piece attribute” they will look up the same transformation from the points (B). This is useful to simulate with lower resolution convex geometry, but then create high resolution clusters of pieces.
In order for this to work, you need to learn how you can recreate the transformation behaviour the copy sop does. Use vops ( learn to apply orient, P and scale to build the transform that is then applied to the points), as it is the fastest way.
-> the main thing is to extract the rotations out of the orient quaternion. I generally go import(orient) -> quaternion to matrix3 -> matrix 3 to matrix 4 -> extract rotations -> build transform (also plug the P and scale into this).
Hope this makes sense.
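The orient -> matrix -> transform chain above can be sketched in plain Python (a minimal version assuming Houdini's (x, y, z, w) quaternion layout; a real vop setup would also fold in pivot and rest attributes):

```python
# Minimal sketch of the transform chain: orient quaternion -> rotation
# matrix -> apply with P and scale. Plain Python instead of vops.

def quat_to_matrix3(q):
    """Convert an (x, y, z, w) orient quaternion to a 3x3 rotation matrix."""
    x, y, z, w = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def transform_point(p, orient, target_p, scale=1.0):
    """Rotate piece point p by orient, scale it, then move it onto the sim point."""
    m = quat_to_matrix3(orient)
    rotated = [sum(m[r][c] * p[c] for c in range(3)) for r in range(3)]
    return tuple(rotated[i] * scale + target_p[i] for i in range(3))

# Identity orient: the piece point just translates onto the sim point.
print(transform_point((1.0, 0.0, 0.0), (0, 0, 0, 1), (5.0, 0.0, 0.0)))
# (6.0, 0.0, 0.0)
```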