It was brought to my attention (by tamte, good man) that I can very easily create an ad-hoc charge attribute, and multiply that by the positionforce or velforce values in vex.
Yep. Move along. Nothing to see here.
Found 31 posts.
Technical Discussion » particle charge / old Interact POP behavior?
- zachlewis
- 45 posts
- Offline
Just curious if the concept of a particle charge has been done away with in the DOP paradigm. It's certainly missing from the POP Property DOP, and the Interact POP seems more concerned with avoidance than attraction, unless I'm missing something pretty fundamental. I can't seem to find any discussion about this here or on odforce.
Any thoughts?
Technical Discussion » Multiple cloud renders through one machine?
- zachlewis
- 45 posts
- Offline
Just to update my own post… I found a bunch of promising stuff to dig into in $HFS/houdini/scripts/hqueue
If we decide to go this route, I'll let you guys know how it goes.
Technical Discussion » Multiple cloud renders through one machine?
- zachlewis
- 45 posts
- Offline
Hey guys,
We're experimenting with the EC2 cloud rendering via HQueue. Due to our security situation, we're behind a proxy, and we can only let one machine through the proxy to interface with Amazon. That much works great - we can render on the cloud from that single machine.
We're wondering, is it possible to use that single machine to manage multiple, concurrent renders on the cloud? We'd like our artists to be able to submit .hip files to this single open machine for cloud rendering, but we're not quite sure if this is possible.
- Is it even possible to use one machine to manage multiple renders from different .hip files on EC2?
- Is it possible to send off a render to the cloud from the command line, provided our ROPs are set up appropriately, and we've gone through the pre-flight process?
- Would there be issues with multiple hython instances trying to bind the progress http server to the same port? Is there another, non-web-based way to monitor progress?
- Is this the kind of thing that could be handled with a local HQueue server, managing multiple cloud renders?
Thanks!
Houdini Learning Materials » Ocean Tutorial
- zachlewis
- 45 posts
- Offline
Here's what might be happening, as far as I can tell.
The shelf tool defaults try to export your displacement map as a .pic file; but when actually rendering, Houdini Apprentice writes those files out as .picnc. Because the shader is looking for a displacement map that ends in .pic, it doesn't find it. I think you'll find that if you change the extension in the ocean_render / output file path from .pic to .picnc, the shader should be able to find the displacement map.
In my case, I had to actually write the displacement map out manually before adjusting the output file path to end in .picnc; if I tried to change it before writing, I'd get weird permissions issues. I'm not sure if this is the case for anyone else, though, because I'm doing some other python stuff in my environment.
Houdini Learning Materials » Ocean Tutorial
- zachlewis
- 45 posts
- Offline
I don't think ROPs create subdirectories by default - does your output path exist? Is it writeable by mantra? Maybe try directing the textures to your temp directory and see if that works…
*EDIT: I was MISTAKEN - the “Create Intermediate Directories” toggle is enabled on the renderable Ocean Evaluate created by the shelf tool.
Edited by - Feb. 12, 2014 21:59:14
Technical Discussion » Make Unreadable Digital Assets !
- zachlewis
- 45 posts
- Offline
Houdini Learning Materials » Help me understand the "w" component of a Vector4
- zachlewis
- 45 posts
- Offline
Okay, no biters… that's understandable, this is a little all-over-the-place. My impression is that “w” means different things in different contexts. It's probably not too fruitful to ask about GLSL in the “Learning Houdini” forum.
As for the “w” parameter on an Add SOP… I don't know how I missed this since it's in the friggin' help card, which is the first place I check pretty much always, but that “w” parameter can be used as a “weight” if the points generated by the Add SOP are used to connect a Bezier or NURBS curve. Embarrassing.
As for angular velocity, I was confusing myself. There is a float vector named “w” generated by things like the Angular Velocity POP that represents… angular velocity. That makes much more sense to me than having something like angular velocity represented by a single float at the end of a vector4, unless that value was just being stored there, to be later used to rotate a vector that many radians or degrees about an axis to be determined later. I guess that would be a valid but kind of a strange thing to do.
So - in summary - the Add SOP creates a vector4, with the last component (P.w) only relevant in the context of weighting a point (a knot?) on non-polygonal curves; and the angular velocity representation I was referring to has nothing to do, really, with a P.w vector4 component, and everything to do with a standard vector3 named “w”.
And then, indeed, I could store orientation represented as a quaternion in a vector4 named “orient,” which is automatically applied, if present, to instanced and copied points, I think prior to any other explicit transformations. But if I'm not mistaken, the vector4 components of a quaternion only make sense in relation to each other; its closest analog to cartesian transformations is the matrix3 that represents the same rotation.
(Quaternions are weird and mystical, by the way - I love that Houdini lets me sling them around to the effect of being able to rotate an angle about an axis, without fully having to understand how three sets of imaginary numbers mutually affect each other - but I hope to acquire that understanding some day).
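Since the mystical part is what the four quaternion components actually do, here's a minimal Python sketch (Python rather than VEX, so it runs outside Houdini; all function names are mine) of the standard unit-quaternion-to-matrix3 conversion - the same “closest analog” relationship described above. VEX has a built-in for this conversion (qconvert, if I recall correctly).

```python
import math

def quat_from_axis_angle(axis, angle):
    # axis is assumed to be unit-length; returns (x, y, z, w)
    s = math.sin(angle / 2.0)
    return (axis[0] * s, axis[1] * s, axis[2] * s, math.cos(angle / 2.0))

def quat_to_matrix3(q):
    # standard unit-quaternion -> 3x3 rotation matrix formula
    x, y, z, w = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def apply(m, v):
    # matrix-times-column-vector
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

# 90 degrees about Z takes the X axis to the Y axis
q = quat_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)
rotated = apply(quat_to_matrix3(q), (1.0, 0.0, 0.0))  # ~ (0, 1, 0)
```

Note how the individual components of q are meaningless on their own - only the conversion to a matrix (or the axis/angle it encodes) gives them geometric meaning, which is exactly the “only make sense in relation to each other” point.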
As for using vector4s to store homogeneous coordinates… well… there's still a lot I don't understand about projection spaces. But it seems less relevant to the use of Houdini, at least in terms of surface operations. Although I imagine an understanding of such things must be useful in terms of camera frustum culling, or projecting UVs, that type of thing.
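The one homogeneous-coordinate fact that does come up constantly is the point-versus-direction distinction, and it fits in a few lines of Python (column-vector convention as in OpenGL here; Houdini itself multiplies row vectors on the left, so its translation lives in the bottom row instead):

```python
def mat4_mul_vec4(m, v):
    # 4x4 matrix times a column vector4
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(4))

# translation by (5, 0, 0); translation sits in the last column
T = [[1, 0, 0, 5],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]

point_h  = (1, 2, 3, 1)   # w = 1: a position
vector_h = (1, 2, 3, 0)   # w = 0: a direction

mat4_mul_vec4(T, point_h)   # (6, 2, 3, 1) - translation applies
mat4_mul_vec4(T, vector_h)  # (1, 2, 3, 0) - translation is ignored
```

The w = 0 case is why normals and velocities pass through a transform unmoved while positions get translated - the last column of the matrix is multiplied by w.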
In case anyone else is confused by homogeneous coordinates, etc, here are a few resources that I'm currently trying to digest that seem useful:
As for the “w” parameter on an Add SOP… I don't know how I missed this since it's in the friggin' help card, which is the first place I check pretty much always, but that“w” parameter can be used as a “weight” if the points generated by the Add SOP are used to connect a Bezier or NURBS curve. Embarrassing.
As for angular velocity, I was confusing myself. There is a float vector named “w” generated by things like the Angular Velocity POP that represents… angular velocity. That makes much more sense to me than having something like angular velocity represented by a single float at the end of a vector4, unless that value was just being stored there, to be later used to rotate a vector that many radians or degrees about an axis to be determined later. I guess that would be a valid but kind of a strange thing to do.
So - in summary - the Add SOP creates a vector4, with the last component (P.w) only relevant in the context of weighting a point (a knot?) on non-polygonal curves; and the angular velocity representation I was referring to has nothing to do, really, with a P.w vector4 component, and everything to do with a standard vector3 named “w”.
And then, indeed, I could store orientation represented as a quaternion in a vector4 named “orient,” which is automatically applied, if present, to instanced and copied points, I think prior to any other explicit transformations. But if I'm not mistaken, the vector4 components of a quaternion only make sense in relation to each other; its closest analog to cartesian transformations is the matrix3 that represents the same rotation.
(Quaternions are weird and mystical, by the way - I love that Houdini lets me sling them around to the effect of being able to rotate an angle about an axis, without fully having to understand how three sets of imaginary numbers mutually affect each other - but I hope to acquire that understanding some day).
As for using vector4s to store homogeneous coordinates… well… there's still a lot I don't understand about projection spaces. But it seems less relevant to the use of Houdini, at least in terms of surface operations. Although I imagine an understanding of such things must be useful in terms of camera frustum culling, or projecting UVs, that type of thing.
In case anyone else is confused by homogeneous coordinates, etc, here are a few resources that I'm currently trying to digest that seem useful:
- Wolfram Alpha linear transformations demo page / portal to other useful stuff [demonstrations.wolfram.com]
- A thread from the OpenGL forums on homogeneity, NDC, camera clipping [opengl.org]
- A short and sweet blog post by Andrew Harvey [andrewharvey4.wordpress.com]
And, of course, Wikipedia articles on Projective Space [en.wikipedia.org] and Homogeneous Coordinates [en.wikipedia.org], which are beyond me.
Hope this has been helpful to someone!
Houdini Learning Materials » Help me understand the "w" component of a Vector4
- zachlewis
- 45 posts
- Offline
…please!
I've been really struggling to understand what that whole “w” thing means, and I could really use some guidance. I've done a fair amount of internet-investigating, and I'm still having a little bit of trouble actually piecing it all together. So, if anyone has any resources or examples that can help me understand what's happening, I'd really appreciate it.
Please forgive the scattered nature of this post… there's a lot I'm not quite understanding.
Specifically, my cousin is trying to write an OpenGL raytracer, and he's having trouble understanding how to raytrace a cube; the example given to him in class involves negative ones as the fourth term in a vector4, and it's confusing him. I offered to help, since Houdini has time and time again proven to be an invaluable tool for understanding all sorts of stuff, but… I'm kind of stuck here.
Here's what I've discovered:
- In Houdini, you can of course use a vector4 to arbitrarily hold data (eg, RGBA values). So that's cool.
- When talking about homogeneous coordinates (which is its own mindf*ck for me), the fourth component denotes whether a vector4 represents a point (1? not-zero?) or a vector (0); and we like using 4x4 matrices for our transformations because they can represent rotation, scale, translation, and shear all at once, in some order, yes?
- So, it should be possible to represent a cartesian point with homogeneous coordinates. If “w” is zero, the vector is said to exist at “infinity” - which, for all intents and purposes, just means it represents a direction. And if “w” is less than one, the distance from the projection origin elongates; and shortens if w is greater than 1. Is that right? Thus far, I haven't seen any introductory examples that use a value other than 1 or 0, but it's possible I haven't ventured far enough to see that kind of stuff. I understand that the “w” value must be either zero or one for use with a matrix4, right?
- That said, it seems that using a vector4 is sometimes nothing more than a convenient way to integrate matrix4s.
- Okay. And then we have the Add SOP, which lets you specify a W component between 0 and 1000 by default. I don't… fully understand what that's about, or how it can be used. There are a lot of examples for the Add SOP, but I didn't really see much that seemed to address this “W” term.
- And can't the “w” term be used to denote orientation or angular velocity? I feel like I've definitely seen that coming out of POPs. Is that just another use of a vector4 type to represent a quaternion, or… something else?
- So, back to my cousin's problem - what the hell does it mean if the “w” term in a vector4 is negative one? Does that just flip the handedness of the world or something? It seems like it would refer to coordinates behind the camera.
Any elucidation would be appreciated!
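To make the w-scaling question above concrete: converting homogeneous coordinates back to cartesian is just a divide by w, which is exactly why w < 1 pushes a point farther from the origin and w > 1 pulls it in. A minimal Python sketch (function name is mine):

```python
def homogeneous_to_cartesian(v4):
    # (x, y, z, w) -> (x/w, y/w, z/w); w = 0 has no cartesian equivalent
    x, y, z, w = v4
    if w == 0:
        raise ValueError("w == 0 encodes a direction at infinity, not a point")
    return (x / w, y / w, z / w)

homogeneous_to_cartesian((2, 4, 6, 1))    # (2.0, 4.0, 6.0)
homogeneous_to_cartesian((2, 4, 6, 0.5))  # (4.0, 8.0, 12.0) - farther out
homogeneous_to_cartesian((2, 4, 6, 2))    # (1.0, 2.0, 3.0)  - closer in
```

This also means every nonzero scalar multiple of a vector4 names the same cartesian point - (2, 4, 6, 1) and (4, 8, 12, 2) are the same location - which is the basic idea behind projective coordinates.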
Technical Discussion » Extending Splines and Calculating Curvature
- zachlewis
- 45 posts
- Offline
Decided to be a bit more specific about the curvature thing. There's something off with my algorithm.
Does anyone know how the Measure SOP measures curvature (of curves)? It seems to depend quite heavily on how the normals are oriented; whereas my method is only concerned with the relative position of the points.
Here's some point wrangle code:
vector prev = normalize(@P - point(@OpInput1, "P", @ptnum - 1));
vector next = normalize(point(@OpInput1, "P", @ptnum + 1) - @P);
f@mycurve = 9e99; // inf
if ((@ptnum != @numpt - 1) && (@ptnum != 0))
    @mycurve = degrees(acos(dot(prev, next)));
The interesting thing is, depending on how I calculate normals for the curve (eg, various modes on a PolyFrame SOP, or a custom Python node to use the Parallel Frame Transport method, or that awesome “Tangent” digital asset from the old asset exchange), the Measure SOP will give me different results, which isn't so surprising. But what is surprising is that my calculations are off by a consistent factor, if all my points lie on the same plane.
If I use a Measure SOP to calculate a “curvature” attribute prior to the above point wrangle, I can calculate the ratio between my algorithm and SESI's:
f@ratio = degrees(@curvature) / @mycurve;
And, depending on the normal orientation, with my current curve, it'll either consistently be a ratio of ~3.125 or ~6.25.
Of course, the story is totally different if my curve spans three dimensions. That ratio changes accordingly as the normal in the Y-direction deviates from 0.
I've attached an example file too. If for some reason, the embedded digital assets don't work, you can just set the “switch_normals_calc” sop to 0, and it'll use a plain old facet and a polyframe to calculate normals.
Short version: How does the Measure SOP calculate curvature of curves? What am I doing differently?
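For anyone comparing at home, here's the same calculation as the wrangle above in standalone Python (names are mine). One thing worth noting: this computes a turning angle, not curvature proper - discrete curvature is roughly this angle divided by segment length - so on a uniformly resampled curve a constant ratio against a true curvature measurement is exactly what you'd expect.

```python
import math

def turning_angles_deg(points):
    """Angle in degrees between successive segment directions at each
    interior point of a polyline; endpoints get inf, mirroring the wrangle."""
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))

    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    out = []
    for i in range(len(points)):
        if i == 0 or i == len(points) - 1:
            out.append(float("inf"))
            continue
        prev = norm(sub(points[i], points[i - 1]))
        nxt = norm(sub(points[i + 1], points[i]))
        # clamp guards against acos domain errors from rounding
        out.append(math.degrees(math.acos(max(-1.0, min(1.0, dot(prev, nxt))))))
    return out

# straight run, then a right-angle turn
turning_angles_deg([(0, 0, 0), (1, 0, 0), (2, 0, 0), (2, 1, 0)])
# -> [inf, 0.0, 90.0, inf]
```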
Technical Discussion » Extending Splines and Calculating Curvature
- zachlewis
- 45 posts
- Offline
Hey guys,
So, I've been working on trying to implement parts of Cinema4D's “MoSpline” into Houdini - if you're not familiar with it, it's a nifty tool that, among other things, provides a very artist-friendly way of creating all sorts of lovely spliney spirally shapes. I'm not a Cinema4D guy at all, so it's hard for me to determine what's going on “under the hood,” but… it's been fun trying to figure out how to recreate a lot of those features in Houdini. And I'll have a bunch more questions about other parts of that in the not-too-distant future.
One of the very neat things about MoSpline is its ability to extend a spline past its endpoints, in either direction; and there are some controls, as I understand them, for the amount of “curve” (how much of the curvature to inherit in the extension), “spiral” (how much of additional spirally behavior should be inherited from the curvature), and “scale” (width-wise) that govern the behavior of the extended portion of the spline.
So, I'm trying to break this apart, and figure out how to best create this kind of control in Houdini. Would love any suggestions or pointers. Specifically, I'm having trouble figuring out how to stably calculate the curvature of the last segment of the spline.
Provided I have a curvy spline with a bunch (say, 100) of resampled segments, with each point oriented properly and consistently (parallel frame transport), I'm able to offset, orient, and then use the stitch or join tools to connect another line with G2 continuity. That's cool. And then I can recursively “march” the points on the line through a loop that rotates each point relative to the previous point by a certain angle, as per… one of the examples that ships with Houdini, can't remember which, but it's a good one. Also neat. I get spirally spirals that continue for as many line segments as I have.
(I feel like there must be a way to use CHOPs to achieve a similar effect, possibly without having to use a second curve - I'd love to hear what anyone has to say in this regard, because I really need to work on my CHOPs chops)
Ideally, I'd like to be able to use the curvature of the last bit of my initial spline to dictate the angle for the extended bit. If I'm understanding, say, b-splines correctly, isn't the curvature of the endpoints intrinsically calculated? Is there a way to access that?
Otherwise, what I've been doing is normalizing the difference between the positions of the next and current points, and the current and previous points, and taking the acos of the dot product of those (normalized) vectors. This works fine, for the most part, UNLESS the last two points are in different cartesian coordinate… err, quadrants, I guess you would call them - eg, if the sign of one of the position vector components is different between the two normalized vectors, it throws my whole acos(dot(v1,v2)) thing out of whack. So, is there a better way I could be calculating endpoint curvature?
*edit: i can't seem to recreate my own problem. Anyway, see below.
Thanks!
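One standard fix for acos(dot()) flakiness like the above: compute the angle as atan2(|a×b|, a·b) instead. It never steps outside its domain, needs no normalization or clamping, and stays well-conditioned for nearly parallel vectors where acos loses precision. A Python sketch (the same expression works in VEX with its cross(), dot(), and atan2() functions):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def angle_between(a, b):
    # atan2(|a x b|, a . b): robust angle in radians, [0, pi],
    # works on unnormalized inputs
    c = cross(a, b)
    return math.atan2(math.sqrt(dot(c, c)), dot(a, b))

angle_between((1, 0, 0), (0, 1, 0))   # pi/2
angle_between((1, 0, 0), (-1, 0, 0))  # pi
```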
Edited by - Sept. 21, 2013 23:14:36
Houdini Learning Materials » How to get more smoke with Pyro FX 2
- zachlewis
- 45 posts
- Offline
Off the top of my head, I know there's an FXPHD Destruction course - HOU205, I think - that does an excellent job with this kind of thing. I think it'll run you a hundred bucks or so, but I'd say it's worth it.
One way you could add those little debris rockets is to fracture some geometry in SOPs, art-direct their initial velocities (more on that in a sec), bring them in as fractured rbd objects (inheriting velocity from point attributes!), and then in a secondary sim (or the same sim, whatever you want), you can use the deforming debris geometry with the billowy smoke shelf tool… should work like a charm.
To art-direct those debris velocities, one way is to take your pre-fractured geometry, add an IsoOffset / Scatter SOP to get a nice random distribution of points throughout the volume, and plug the Scatter into the first input of a Point Wrangle SOP, and then additionally branch off an Edit SOP from your scatter SOP, plugging the Edit SOP output into the second input of that same Point Wrangle SOP.
The idea is to use the Edit to transform your scattered points in the direction you want your debris to go - eg, up and out a little - relative to where the initial static points are, in order to create an initial velocity for those points; and then we attribtransfer the point velocities over to the fractured geometry.
Because you have the same number of points going into both inputs of the Point Wrangle, you can subtract the position of @OpInput1's points from @OpInput2 (the edited points), and bind the output to the velocity point attribute (if you're using an OpenGL, non-H11 viewport, there's a “Display Point Velocities” button on the right side of the viewport; otherwise, you'll have to press “D” on the viewport and set up a custom “v” attribute).
Anyway, your code for the Point Wrangle could look something like this:
@v = (point(@OpInput2, "P", @ptnum) - point(@OpInput1, "P", @ptnum));
@v *= 10;
Broken down:
- @ptnum refers to the current point number being iterated over, exactly like the $PT variable used in hscript expressions - the idea being, the Point Wrangle, just like VOPs in general, applies this same operation to all points in the input stream (unless otherwise limited to a specified group).
- Using the point() VEX function, you're looking up the “P” attribute of point number @ptnum from @OpInput2, the second input, and you're subtracting from that the “P” attribute of point number @ptnum from the first input.
- Since both point functions return a vector, it's a simple matter of vector subtraction - and you can bind this directly to the first input stream as the vector attribute “v.”
- Normally, if you want to create a new attribute, you need to cast the attribute as a certain type - vector, float, matrix, etc - while you're defining it. So, you could write “v@v = …” to create a vector attribute called “v” - but “v” happens to be a special-case pre-defined vector attribute (just like @ptnum is a pre-defined integer attribute), since SideFX predicted that these would be commonly-used attributes. See the Point Wrangle documentation for more information. Bottom line, we can use @v = … because it's intrinsically defined for us, but if we wanted to bind the output to an attribute named “foo,” we'd have to cast it as v@foo = … (because vector subtraction yields a vector output).
- I'm then scaling the entire operation by 10, which you'll have to season to taste, once you see how things behave in DOPs. The sexier thing to do would be to set up a spare float parameter on the Point Wrangle called, say, “scale,” and reference that in the code with ch("scale") - in a VEX snippet you call ch() directly, no backticks needed - and you'll get a nice slider to scale your velocities. Just set the default value to something other than zero. Or don't. It's your debris.
Right, so, coming out of this, you should have velocities assigned to the points you scattered over your pre-fractured geometry - and you can interactively direct these velocities with the Edit SOP (click on it in the node graph, press Enter in the viewport, and you'll have transform handles; and if you right-click on the selection tool in the viewport - the arrow - you can select “lasso selection”, which might be easier to use than the default “box selection”). This will become vital later on, when you're running your RBD sim and you want to fine-tune the direction each piece of debris launches off toward. That's the kind of control Houdini boasts over other packages.
But you're not quite done yet. Before you go into DOPs, you want to transfer the velocity point attribute “v” over to the original fractured geometry with an AttributeTransfer SOP. And *then* you should be good to go with the fractured RBD shelf tool. Again, make sure you select “Inherit Velocity from Point Velocity” in the RBD Fracture object.
In case you're wondering why we're going through the trouble of scattering over the pre-fractured geometry to set our point velocities, and then transferring the velocity attribute over to the fractured geometry, the reason is two-fold:
1) While we could have used placed the actual fractured geometry in primitive group selection mode to direct our initial velocities, even if we rotated the pieces, we still wouldn't get a whole lot of angular velocity, making for a duller simulation. Same goes for if we had just used the scatter node that serves to dictate the centroids of the voronoi fracture node - you'd only be getting a singular direction. By using a secondary scatter based on the original geometry, the scattered points naturally will *not* align perfectly to the fractured geo's topology, which means there's more room for variability for each point on each piece of fractured geo.
2) By divorcing the points used for setting up the velocity from the topology of the fractured geometry, you're free to change the number of fractured pieces generated by your voronoi fracture without being forced to go back and re-setup your velocities every time you make a change to how your fractures are set up; you're free to experiment with other fracturing techniques, or to do a lower-res sim for testing, or even reuse your directed velocities setup for large-piece and small-piece debris setups in the same sim… however you'd like.
I've included a little example. There are many things you could do to push it further - clustering, for instance, would help with the detail (you'll notice I disabled the “resize bounding box” entirely - playing with that would help too) - but this should get you started.
One way you could add those little debris rockets is to fracture some geometry in SOPs, art-direct their initial velocities (more on that in a sec), bring them in as fractured rbd objects (inheriting velocity from point attributes!), and then in a secondary sim (or the same sim, whatever you want), you can use the deforming debris geometry with the billowy smoke shelf tool… should work like a charm.
To art-direct those debris velocities, one approach is to take your pre-fractured geometry, add an IsoOffset SOP followed by a Scatter SOP to get a nice random distribution of points throughout the volume, and plug the Scatter into the first input of a Point Wrangle SOP; then branch an Edit SOP off the Scatter and plug its output into the second input of the same Point Wrangle.
The idea is to use the Edit SOP to transform your scattered points in the direction you want your debris to go - e.g., up and out a little - relative to where the initial static points are, in order to create an initial velocity for those points; then we AttributeTransfer the point velocities over to the fractured geometry.
Because you have the same number of points going into both inputs of the Point Wrangle, you can subtract the position of @OpInput1's points from @OpInput2's (the edited points), and bind the output to the velocity point attribute. (If you're using the OpenGL, non-H11 viewport, there's a “Display Point Velocities” button on the right side of the viewport; otherwise, you'll have to press “D” over the viewport and set up a custom “v” attribute.)
Anyway, your code for the Point Wrangle could look something like this:
@v = (point(@OpInput2, "P", @ptnum) - point(@OpInput1, "P", @ptnum));
@v *= 10;
Broken down:
- @ptnum refers to the current point number being iterated over, exactly like the $PT variable used in hscript expressions - the idea being, the Point Wrangle, just like VOPs in general, applies this same operation to all points in the input stream (unless otherwise limited to a specified group).
- Using the point() VEX function, you're looking up the “P” attribute of point number @ptnum from @OpInput2 (the second input), and subtracting from that the “P” attribute of the same point number from the first input.
- Since both point functions return a vector, it's a simple matter of vector subtraction - and you can bind this directly to the first input stream as the vector attribute “v.”
- Normally, if you want to create a new attribute, you need to cast the attribute as a certain type - vector, float, matrix, etc - while you're defining it. So, you could write “v@v = …” to create a vector attribute called “v” - but “v” happens to be a special-case pre-defined vector attribute (just like @ptnum is a pre-defined integer attribute), since SideFX predicted that these would be commonly-used attributes. See the Point Wrangle documentation for more information. Bottom line, we can use @v = … because it's intrinsically defined for us, but if we wanted to bind the output to an attribute named “foo,” we'd have to cast it as v@foo = … (because vector subtraction yields a vector output).
- I'm then scaling the entire operation by 10, which you'll have to season to taste once you see how things behave in DOPs. The sexier thing to do would be to set up a spare float parameter on the Point Wrangle called, say, “scale,” and reference that in the code with `ch("scale")` - mind the backticks, which instruct the compiler to evaluate the expression first - and you'll get a nice slider to scale your velocities. Just set the default value to something other than zero. Or don't. It's your debris.
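Stripped of the Houdini specifics, the per-point operation in the bullets above is just vector subtraction followed by a uniform scale. Here is a plain-Python sketch of that same math - the names (compute_velocities, rest_positions, edited_positions) are illustrative, not Houdini API:

```python
# Plain-Python sketch of what the Point Wrangle computes per point:
#   v = (edited position - rest position) * scale
# The two position lists stand in for the wrangle's first and second
# inputs; this is just the math, not Houdini's actual API.

def compute_velocities(rest_positions, edited_positions, scale=10.0):
    velocities = []
    for rest, edited in zip(rest_positions, edited_positions):
        # component-wise subtraction, then uniform scale
        v = tuple((e - r) * scale for r, e in zip(rest, edited))
        velocities.append(v)
    return velocities

rest = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
edited = [(0.0, 0.5, 0.0), (1.5, 0.5, 0.0)]
print(compute_velocities(rest, edited))
# → [(0.0, 5.0, 0.0), (5.0, 5.0, 0.0)]
```

A point nudged straight up by 0.5 units with the Edit SOP ends up with a velocity of (0, 5, 0) at the default scale, which is exactly the knob the `ch()` slider gives you.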
Right, so, coming out of this, you should have velocities assigned to the points you scattered over your pre-fractured geometry - and you can interactively direct these velocities with the Edit SOP (click on it in the node graph, press Enter in the viewport, and you'll have transform handles; and if you right-click on the selection tool in the viewport - the arrow - you can select “lasso selection”, which might be easier to use than the default “box selection”). This will become vital later on, when you're running your RBD sim and you want to fine-tune the direction each piece of debris launches off toward. That's the kind of control Houdini boasts over other packages.
But you're not quite done yet. Before you go into DOPs, you want to transfer the velocity point attribute “v” over to the original fractured geometry with an AttributeTransfer SOP. And *then* you should be good to go with the fractured RBD shelf tool. Again, make sure you select “Inherit Velocity from Point Velocity” in the RBD Fracture object.
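Under the hood, that AttributeTransfer step amounts to a nearest-point lookup: each point of the fractured geometry inherits “v” from the closest scattered point. A rough plain-Python sketch of the core idea (the real SOP also supports distance thresholds and blending; transfer_v is an illustrative name):

```python
# Rough sketch of nearest-point attribute transfer: every target point
# copies "v" from the closest source point. Houdini's AttributeTransfer
# adds distance thresholds and blending on top of this core idea.
import math

def transfer_v(source_points, source_v, target_points):
    transferred = []
    for tp in target_points:
        # brute-force nearest neighbor - fine for a sketch
        nearest = min(range(len(source_points)),
                      key=lambda i: math.dist(source_points[i], tp))
        transferred.append(source_v[nearest])
    return transferred

src = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
v = [(0.0, 5.0, 0.0), (0.0, -5.0, 0.0)]
tgt = [(1.0, 0.0, 0.0), (9.0, 1.0, 0.0)]
print(transfer_v(src, v, tgt))
# → [(0.0, 5.0, 0.0), (0.0, -5.0, 0.0)]
```

This is also why the scattered points don't need to match the fractured topology at all - every fractured point just grabs whichever velocity happens to be nearby.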
In case you're wondering why we're going through the trouble of scattering over the pre-fractured geometry to set our point velocities, and then transferring the velocity attribute over to the fractured geometry, the reason is two-fold:
1) While we could have used the actual fractured geometry (in primitive group selection mode) to direct our initial velocities, even if we rotated the pieces, we still wouldn't get much angular velocity, making for a duller simulation. The same goes if we had just used the scatter node that dictates the centroids of the Voronoi Fracture node - you'd only get a single direction per piece. By using a secondary scatter over the original geometry, the scattered points naturally will *not* align perfectly with the fractured geo's topology, which leaves more room for variability from point to point on each fractured piece.
2) By divorcing the points used to set up the velocity from the topology of the fractured geometry, you're free to change the number of pieces generated by your Voronoi Fracture without having to go back and re-set up your velocities every time you change how your fractures are generated; you're free to experiment with other fracturing techniques, to run a lower-res sim for testing, or even to reuse your directed-velocities setup for large-piece and small-piece debris setups in the same sim… however you'd like.
I've included a little example. There are many things you could do to push it further - clustering, for instance, would help with the detail (you'll notice I disabled the “resize bounding box” entirely - playing with that would help too) - but this should get you started.
Houdini Learning Materials » SOP Solver problems
- zachlewis
- 45 posts
- Offline
Houdini Learning Materials » How to get more smoke with Pyro FX 2
- zachlewis
- 45 posts
- Offline
Looks nice.
One thing you could do is add some exploding rigid body geo - larger chunks of debris - that themselves are emitting smoke / dust. You'll end up with little “rockets” fanning out from the center (or however you direct their velocities), which, I think, is exactly what you're going for. Does that make sense?
Work in Progress » VDB Recursive Fracture
- zachlewis
- 45 posts
- Offline
Wow… John, this is really great. This inspired a whole new understanding of how the VDB Fracture node can be used! Damn clever.
I'm curious about the workflow… I imagine you can use a secondary volume to determine where to start fracturing, iterate for a bit, position the intersection volume again, fracture some more around that particular region, and so forth; or maybe it could even be used in conjunction with the scatter or voronoi fracture points sop to determine the “density” of iterations in certain regions.
What method are you using to select adjacent regions of the volume to take a bite out of next?
I'm gonna go mess around with some vdb fracturing right now.
Houdini Learning Materials » Snippets versus Inline Code
- zachlewis
- 45 posts
- Offline
We also ported most all related hscript expressions over to VEX as well. I use ch() vex functions all the time to put quick interfaces on top of wrangle operators.
Yes! When I saw that you could do this in Ari's Wrangle workshop video, it was really illuminating. One of my favorite things about Wrangles is that they fold really nicely into presets - they're so wonderfully self-contained. Because both the parameter interface and the values (eg, code) are saved into the gallery, they can almost serve as a poor-man's digital asset. I've found myself slowly building a diverse wrangle library almost by accident, just as a function of wanting to save my code for future reference, when I'm in a bind. Pun totally intended.
(my only gripe about the ch() workflow is that pasting relative references into the multi-line snippet field replaces all your code, like it does with other parameter types; but that's hardly a showstopper)
We just added a first-class method of accessing various inputs for processing volume data all inside the VOP network easily. No more op: for importing and processing volumes.
That's great news – the op: business is always a tad bit more of a mental exercise than I'd like it to be, when all I wanna do is make my volumes all noisy and pyroclasticy and stuff.
…
/obj/dopnet1 -> opadd popforce
…
It is for this very node that the Snippet VOP and wrangle workflow was created…
Oh boy. This is going to be a lot of fun to mess with. I guess the first step is to figure out what the hell kind of data it's looking for, and move on from there! It looks like you guys have a lot of hidden pop-dop-dealies… real curious to hear what you guys have up your sleeves…
Anyway, thanks again for your clarifications and for steering me toward all sorts of stuff to play with!
Houdini Learning Materials » Snippets versus Inline Code
- zachlewis
- 45 posts
- Offline
Very cool. As always, I appreciate your insight, Jeff - that clarified things for me. I was more curious about the differences in the design choices behind, and practical applications of, Snippets vs Inline Code - but your summary and gentle reminder to just look at the generated code answered a lot of questions.
It seems that Snippets are designed to provide a convenient means for inlining light-weight VEX directly in higher-level contexts, while providing @OpInput* bindings that point directly to the various inputs of the wrangle node.
(And by light-weight, I mean, because the Code Snippet parameter is injected into the VFL code as its own function, you can't use a Wrangle directly to define other functions or include libraries (inline), like one would be able to with the Inline Code vop; but you CAN define your functions in an external library, and specify the headers in the Include Files parameter in the underlying Snippet vop, which causes the appropriate #include lines to be dynamically generated right above the Snippet's ad-hoc function definition in the injected VFL code. That's very slick.)
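A useful mental model for that Include Files behavior is simple code generation: the snippet text gets pasted into the body of a generated function, with the #include lines emitted just above it. Here's a toy Python sketch of that kind of wrapping - the function signature and exact layout of the real generated VFL differ, so inspect the generated code yourself, as Jeff suggested:

```python
# Toy sketch of how a Snippet-style node might wrap user code:
# "#include" lines from an Include Files parameter go above an
# ad-hoc function containing the snippet body. The signature here
# is invented for illustration; real generated VFL looks different.

def wrap_snippet(snippet_body, include_files):
    lines = ['#include <%s>' % f for f in include_files]
    lines.append('void snippet_func(export vector v; const vector P)')
    lines.append('{')
    lines += ['    ' + l for l in snippet_body.splitlines()]
    lines.append('}')
    return '\n'.join(lines)

print(wrap_snippet('v = P * 2;', ['mylib.h']))
```

The point being: since your snippet lands *inside* a function body, function and struct definitions can't live in the snippet itself, but headers pulled in above it can supply them.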
Another difference (unless I'm mistaken) is, Wrangles / Snippets are only concerned with returning attributes bound to the @OpInput1 stream, while the Inline Code node is more suited for directing its output to other VOPs. Is that fair to say?
So, I guess to answer my original question, the Inline Code vop is more suited for heavy lifting - I could define functions or structs and so forth here, should I be so bold; but variables have to be managed much more carefully. On the other hand, Snippets and Wrangles are quick-n-dirty methods for inlining code as more self-contained, one-off tricks. I guess you could say the difference is akin to sleight of hand versus setting up an entire gimmick.
I still don't totally understand how or where the OpInput* variables, or any of the other pre-defined attributes are declared. OpInput1, for instance, appears to be implicitly bound to “opinput:0”, according to the print function - not that I'd ever want to override that or anything. Are these just global variables that “come for free”, as defined internally by VOP context types themselves?
And - last question - POP Wrangles. I couldn't help but notice that they don't exist. It's easy enough to create a digital asset, promoting the multi-line string parameter from a Snippet, but I'm wondering if this isn't provided because it might not work as it does with other contexts? Are there particular limitations to using Snippets in VOP POPs?
Thanks for your time!
Houdini Learning Materials » Snippets versus Inline Code
- zachlewis
- 45 posts
- Offline
Hello!
What are the main differences between the Snippet and Inline Code VEX nodes, and when we should choose one over the other? Is it merely a matter of convenience, or are there functional differences?
(Apart from the whole expression expansion business - eg, using “$” for temp variables)
Also, sidebar question, is there any possible way to access and iterate over edge groups in VFL?