Found 61 posts.
Technical Discussion » Does the fur procedural really handle long styled guides?
- frankvw
- 61 posts
- Offline
It's a generic observation about mantra rendering of the fursop, so there is no hip. If you can't put down 3-4 nodes from the description, you are never going to be able to analyse or answer the issue in the first place. Even just opening the furball helpcards and putting a mountain vexsop on the guides gives you a ready-made example. Where those examples all have a copysop making the guides from the skin, try making those guides curly with a mountain vexsop applied to everything except the roots group. There is not one example with curly guides. Set it up and the viewport handles it fine, just not mantra.
Technical Discussion » Does the fur procedural really handle long styled guides?
- frankvw
- 61 posts
- Offline
Hi,
But you can't provide a simple sphere-and-curves example either?
I have something working so far, but it involves pretty much gutting the fursop, and I am still getting some weird interpolations in certain circumstances, requiring tweaking from character to character, so it's not the simple procedural call with ifd or abc crowd character archives I was hoping for.
The fursop does nice skin interpolation with straight lines copied to points in a single node. Fabio hair can handle styling well, but the interpolation and scattering need a bit more work for crowd stuff. I just hoped I could save a month or two of work, but hey ho. On the plus side, PBR on hair in mantra is looking fantastic in individual tests.
Technical Discussion » Does the fur procedural really handle long styled guides?
- frankvw
- 61 posts
- Offline
Hi,
No, it isn't at all. It's about mantra just providing a translation of the fursop preview as a render at a basic level. I.e., forget the grooming, that's already done; for any set of properly formatted curves and a skin, just produce the same result. The explanation was really clear, not sure what you mean to be honest. If you can't show a sphere with some shaped long curves rendering successfully, you can't answer the question.
I have in fact started to rework the node and I have got the render working, sort of, but it's a bit hit and miss at the moment, and like the Fabio hair system it seems to require an awful lot of formatting, reformatting and rest attributes for no obvious reason. A major rework and formatting of attributes that shouldn't be necessary. Even a direct render of the preview curves with the density way up still doesn't interpolate. Funnily enough, putting a width attribute on the same three-node setup will render in 3delight and Arnold.
I'll figure it out eventually; I was just hoping someone had the answer at their fingertips, but evidently not.
Technical Discussion » Does the fur procedural really handle long styled guides?
- frankvw
- 61 posts
- Offline
Hi,
Yes, I've seen the Fabio system. It looks good and is a well made otl. It is just as I said though: quite a large rework and an additional tool layer on top of the fursop. Also, it looks like it requires a fair bit of tweaking on a per-character basis. That is not going to be manageable on dozens or hundreds of characters.
I need a simple, clean solution for picking out crowd agents close to camera and generating hair in a single node that is predictable and procedural. The single fursop with supplied curves does this in the viewport/opengl but doesn't translate to mantra. The basis of any simple hair solution must be a skin surface and a set of uniformly and cleanly formatted poly or nurbs curves, at the basic level. Even more so if you are exchanging or rendering alembic files or archives.
It really should not be any more involved in this instance than a furball example with some styled long curves at the root that renders like hair in the docs, but this doesn't exist, which leads me to think I will need to make a hairsop from scratch, or I could waste a lot of time chasing a lost cause. That is all my original question asks: can you put three nodes together, a sphere and some long styled nurbs or poly curves feeding into the skin and guides ports of the fursop, and produce a mantra render from it that looks like hair?
Technical Discussion » Does the fur procedural really handle long styled guides?
- frankvw
- 61 posts
- Offline
Hi,
I have my own guide grooming tools I usually plumb into vray or renderman procedurals, I've never used the mantra procedural for hair.
I have a project where I need to generate crowds and I want to make it self contained with cloth and hair all mantra rendered. I have reached an impasse now where my next step seems to be either rework the fur procedural or start something from scratch.
This is what I get. The guides in the screenshot are from the fur procedural when my hair groom tool is fed into the guides port with the skin in port one. I can slide density up and down and it all seems to work in the viewport/opengl. When I render in mantra I get the second image: no styling, just short straight fur. Increasing length gives me long straight fur.
What attribute or setting am I missing between viewport previewing and mantra rendering?
The guides are standard nurbs curves, 0-1 range; I also tried them as poly curves with tangent normals, same result. The guides are all rooted exactly at the skin vertices, curve prim numbers matching skin point numbers; the fur procedural has no warnings and looks fine in the viewport.
I've checked all the wire and fur help examples, basic and cvex type, but always with the same result. So now I am prepared for a pretty lengthy rework of the fursop, but I just wanted to check with anyone who uses it a lot for hair to see if there is an obvious reason for this behavior. One thing I did notice is that none of the various examples or forum discussions I have seen ever really show the fursop with detailed, styled, long hair guides on a head, more just long fur on a sphere. All the long hair samples I have seen have been the result of custom developed tools. So, before I bite the bullet: is this the case? Basic fur will not do long styled hair?
Thanks
Technical Discussion » No longer the impulse $NPTS birthing available in dop pop?
- frankvw
- 61 posts
- Offline
Hi,
I was trying to update an old pop project to the dop pop context, but the pop source in dops only seems to give you random birthing, not one point per particle impulse birthing. So it's either keep it in an old pop network or rebuild an rbd point object with pop forces. A pain in the **** for reversioning dozens of old shots on a new job this way.
Am I missing something?
Thx
Technical Discussion » How to save a node network as py script?
- frankvw
- 61 posts
- Offline
pezetko
http://www.sidefx.com/docs/houdini14.0/hom/hou/Node#asCode [sidefx.com]
EDIT***
Actually, that is helpful. Thanks. I was confusing .node and .Node classes.
Hi,
Cheers. It kinda helps combined with the connections switch, but it still requires a whole host of other overhead to set up something workable. Pretty much what I have been doing so far.
A network-wide, “HDA” style option that works in a more global manner on a whole network would be what I am ideally seeking. Exporting python or c++ style scripts from unity and maya is a straightforward, defined routine. I was thinking it would be the same in the hou interface, but maybe not.
Thx
Technical Discussion » How to save a node network as py script?
- frankvw
- 61 posts
- Offline
Hi,
I have large networks that I want to automatically record as python code, so I can run it in a python script later on and recreate the network, varying parm values procedurally. I don't want to make otls or hdas; I want to do it like you would in a cmd file, from ready-made networks, so I don't have to echo and recreate networks by hand again to record the building of the network.
I can loop through and grab parm values and node names and kinda do it in a top-down way, but it's time consuming and needs editing for every different network. Is there a ready-made hou method that will do just that: grab a selected node network and make a recreatable copy as python code/commands in a py module?
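For later readers: the asCode method linked further up this thread will do most of this. A minimal sketch, assuming a running Houdini session and a network at the placeholder path /obj/geo1 (treat this as a starting point and check the hou.Node.asCode help for the full parameter list):

```python
# Sketch only: needs a running Houdini session for the hou module.
import hou

# Placeholder path; point this at the network you want to record.
net = hou.node("/obj/geo1")

# asCode() returns Python source that recreates the node; with
# recurse=True it walks the children, and save_creation_commands=True
# includes the createNode() calls themselves.
script = net.asCode(recurse=True, save_creation_commands=True)

with open("rebuild_geo1.py", "w") as f:
    f.write(script)

# In a fresh session, rebuild the network by executing the module:
#   exec(open("rebuild_geo1.py").read())
```

Parm values can then be varied procedurally by editing the generated script, or by running it and setting parms on the rebuilt nodes afterwards.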
Thx
Houdini Lounge » how to calculate rotation vector values - self-learning
- frankvw
- 61 posts
- Offline
hi!
Thanks for that, I see how that reorient helps out for this. I've been looking into quaternions and matrices in VOPS as well, and been reading a book on vector math, something I thought I would never do again after high school! I've actually been using houdini for years, but my background has been entirely fine art and I've always been able to achieve most things I ever need just with basic nodes - a ray of hope to any artist out there who thinks houdini is just for math gurus! After a while you just kinda get more and more curious about what happens under the bonnet, and houdini makes it easy to dip a toe deeper if you want. I bought the Chris Maynard vex things as well; they are really helpful too.
cheers!!!!
Houdini Lounge » how to calculate rotation vector values - self-learning
- frankvw
- 61 posts
- Offline
hi again,
Hummm … getting closer. I looked into the vangle expression function, plugged it into an attribute node, then put it into a transform's rotate parms. I thought it returned radians, but the values look nearly right as degrees; pretty close, except it wobbles a bit, like a 'gimbal' thing, here and there,
making a vangle between the lineSOP rotation vector and the vector3(1,0,0), vector3(0,1,0), vector3(0,0,1) xyz axes, each in turn.
The exhelp says:
vangle(a, b) will return the same result as acos ( dot (normalize(a),
normalize(b)) ). It will not produce a negative result because the dot
product is symmetric, and does not take the order of a and b into
consideration.
You can define a turning order with the left hand rule or something
similar.
Try the following expression to get a signed result:
sign(dot(cross(cross(a,b),b),a)) * vangle(a,b)
Maybe that explains the little difference. I'll have to look into defining the left hand rule thingy there. The "sign(dot(cross(cross(a,b),b),a)) * vangle(a,b)" expression didn't work out as well as the plain vangle here. Of course, origin or vrorigin in a pointSOP works perfectly, but that is doing all the vector math for you, so not much of a learning experience that way.
Looks like Chris Maynard at CMIVFX made some DVDs covering vector math in houdini some time ago; I'll have to get the credit card out I guess and see if that illuminates anything.
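Written out in plain Python (not Houdini code, just the same math as the quoted expressions, so the numbers can be sanity-checked):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def vangle(a, b):
    # Unsigned angle in degrees, like the hscript vangle() expression:
    # acos(dot(normalize(a), normalize(b))), clamped for float safety.
    c = max(-1.0, min(1.0, dot(normalize(a), normalize(b))))
    return math.degrees(math.acos(c))

def signed_vangle(a, b):
    # The signed variant from the exhelp quote:
    # sign(dot(cross(cross(a,b),b),a)) * vangle(a,b)
    s = dot(cross(cross(a, b), b), a)
    sign = -1.0 if s < 0 else 1.0
    return sign * vangle(a, b)
```

One thing worth noting: because the unsigned vangle throws the direction away, angles fold back past the flip point, which can read as exactly the kind of wobble described above.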
Houdini Lounge » how to calculate rotation vector values - self-learning
- frankvw
- 61 posts
- Offline
So here is an hipnc that has a DOP animated box, plus the original static box with a rotation vector made with a lineSOP from the box centroid to one of the edges. You can see the vector line staying in step perfectly with the edge to show the rotation; the copySOP is just used for illustration purposes.
What I am trying to better understand is how I can take that rotation vector and apply it to the static box so the DOP animation and extracted transforms onto the static box will perfectly match.
I'd like to try to work out a way to extract the transform matrices in VOPs (the lookat VOP seems to do this, but I couldn't get it to work out) and also via some attribute vex trig functions/expressions that I could then plug into a transform's parameters.
Maybe a sidefx developer could cast an eye? I think its a useful example exercise for artists trying to read up on vector math and to get a better feeling for practical application of vector math in houdini.
thanks again!!!
Houdini Lounge » how to calculate rotation vector values - self-learning
- frankvw
- 61 posts
- Offline
hi,
This is probably a really easy question for some folk, but I would really appreciate any illumination in my attempts to get more math savvy.
I want to work out how to calculate rotation values for an object via points, ideally in VOPS, just out of interest.
I drop a box in DOPS onto a groundplane and it falls, rotates around a bit, then settles. I want to go back to SOP level, make a point at the original object centroid, then add attributes to that new point that describe xyz position and rotation animation, so that when I copy a box to the point, it exactly matches the DOP animation.
Of course, DOPs gives you all that information, or I could parent it, but I want to extract the transforms from the cached animation just as an exercise to get a grip on things like vector calculations, polyframe, matrices, etc.
So, I have my point at the centroid. I can reference any point on the cube corners to produce a vector (two points) that I can see visually rotating in the viewer with an addSop, but how do you go about calculating a rotation angle from that which could be applied to any object to match the cached animation? Would a lookat node in VOPs between the two work out? Or maybe a bit of trig in an attribute field, referencing one point from the other?
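One way to sketch the underlying math outside Houdini: track two corner points of the box relative to the centroid, build an orthonormal frame from them, and read angles off that frame (the lookat/polyframe nodes do essentially this). This is plain Python, not VOPs, and the Euler extraction below is just one common angle convention, so take it as a sketch of the idea rather than a drop-in answer:

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def frame_from_points(centroid, corner_x, corner_y):
    # Build an orthonormal frame (rows = rotated x, y, z axes) from the
    # centroid and two tracked corner points of the box.
    x = normalize(sub(corner_x, centroid))
    up = sub(corner_y, centroid)
    z = normalize(cross(x, up))   # perpendicular to both
    y = cross(z, x)               # re-orthogonalized "up"
    return (x, y, z)

def euler_degrees(m):
    # Extract Euler angles (one common convention) from the frame above;
    # fine away from gimbal lock, which is where the wobble lives.
    (xx, xy, xz), (yx, yy, yz), (zx, zy, zz) = m
    ry = math.degrees(math.asin(max(-1.0, min(1.0, -xz))))
    rx = math.degrees(math.atan2(yz, zz))
    rz = math.degrees(math.atan2(xy, xx))
    return (rx, ry, rz)
```

Per frame of the cached sim, feeding the centroid and the same two corners in gives a frame whose angles can be plugged into a transform's rotate parms.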
I know this is really easy for some folk, please give my ‘journey’ in vector math a boost
thank you!!!!!!!
Technical Discussion » ForEach by number and range - it's not working out for me
- frankvw
- 61 posts
- Offline
hi,
I have a problem and I wonder if a ForEach guru can nudge me in the right direction.
I am hoping to procedurally rig some eels with a wire deformer, so I need to generate a curve down the center of some low poly tube like models.
To get the concept, I've made a quick skinned tube in my .hip from profiles copied to a curve. The plan is to drop into a ForEach loop using the by-number method, take the points of each profile, add a centroid point, delete the rest, loop through to the next 50 points, and so on, until I am left with a set of points I can turn back into a curve running down the center of the skinned tube.
This is the first time I have tried the 'by number' method and it doesn't work out for me. I can add a color ramp with UVs, partition groups by color, then ForEach by group, and it will work for the contents of the ForEach. (Related question: anyone know a good method of creating groups by proximity?) But I really want to get my head around iterating by a number and range of points.
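The by-number logic is easy to sanity-check outside Houdini. A plain Python sketch of the same idea, assuming the points are ordered profile by profile with a fixed count per profile:

```python
def profile_centroids(points, points_per_profile=50):
    # points: flat list of (x, y, z) tuples, ordered profile by profile,
    # exactly as the skinned tube's point numbering runs.
    centroids = []
    for start in range(0, len(points), points_per_profile):
        chunk = points[start:start + points_per_profile]
        n = len(chunk)
        cx = sum(p[0] for p in chunk) / n
        cy = sum(p[1] for p in chunk) / n
        cz = sum(p[2] for p in chunk) / n
        centroids.append((cx, cy, cz))
    return centroids
```

Inside the actual ForEach, the equivalent is driving the range start with the iteration stamp (e.g. 50 * stamp value) so each pass isolates one profile's worth of points.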
In the .hip I left the first delete value hard wired with values and I get a point where I want. Same if I move it to 50-99 range or 100-149 range, and so on. (There are 1299 points, 50 per profile, 26 profiles). When I try the ForEach stamp values in the other node, they don't work as I thought.
So where do I pick up my iterations in the number method? (The group, attribute, point or prim methods all seem to pick up the stamp value automatically, no fuss.) The stamp is not isolating 50 points, then moving on, as I would expect.
Also, in the range value at sop level for the node, the range end doesn't seem to want to accept incoming attribute values or point expressions for $NPT.
Sorry if I am missing something really obvious but much appreciated if you can point me the right way.
thankyou!
Technical Discussion » shaping noise in Pyro Fire flames
- frankvw
- 61 posts
- Offline
Hi,
I use pyro fire a lot, but I sometimes get a brief to match, say, an element that has been done elsewhere in fume, or a real element. The only workable solution I find is to use a basic shelf flames setup, advect some parts with the pyro fluid, then apply some vop based noise on top of the parts, with metaballs, to create a nice tendril/noise pattern, then subtract from the original fluid volume as an SDF.
The pyro solver just seems really unresponsive to shaping tendril and noise patterns via emission or noise in the solver / microsolvers. It always tries to converge to a solid mass without really convoluted approaches like the above.
Then I saw this test, apparently done in pyro:
http://www.youtube.com/watch?v=wlkslb3lqrA [youtube.com]
So, I have fresh optimism about tackling detailing again within the solver network. Just wondering if any particular area is worth focusing on in my research. Like I say, I have never found much in the way of useful results myself within the noise/emission areas of the solver, or even jumping into the solver to add some noise functionality at the base. It would be really useful if the docs provided a more useful fire example like this; all the existing fire solver examples, docs and shelf setups have a 'candle flame' feel about them.
cheers!
Technical Discussion » PointsPolygons spec in Renderman archiveSOP
- frankvw
- 61 posts
- Offline
Hi,
I am writing out archives for use in a feathers post-processing routine. I need to retain planar polys with normals in a simple archive file according to the RiPointsPolygons spec. However, no matter how I write out the archive files, I always get RiPointsGeneralPolygons, even though the poly sets are definitely of the planar poly variety in houdini/bgeo land. It's a bit frustrating and creates a fair degree of preprocessing overhead. Am I missing something here in the ArchiveSOP? Any help much appreciated.
Thx
Technical Discussion » looking for way to make blendshapes with variable points
- frankvw
- 61 posts
- Offline
Hi,
Is there a way to encapsulate a bgeo sequence (or maybe exported point clouds?) into a 'consolidated' or 'baked' entity? Normally I would go for splitting out shift chops and applying an expression onto my point instances, like you would for crowd character offsetting, but this time I want to shift the timing on hundreds of distant impact hits of pre-developed particle-surface bgeos, so the point counts will vary.
Now, it's ok to offset my timings precisely with the bgeos, but then I face either some very long expressions to manipulate the bgeo frame numbering (which isn't good for testing, interaction, etc.), or on-the-fly file_input node creation for each instance or copy, which gives me good interaction but huge node networks, and I have to precalculate/recook the whole thing every time I make changes.
If the surface didn't have varying point counts and I piped it into a single blendshape, this would work perfectly. So, has anyone ever wrestled with a similar idea? Y'know, getting a sequence of varying bgeos into chops, to break the link to reading $F frames off of disk? A kinda non-linear time way, working off a 'cooked' node, but not like the $F based timeoffset node.
Cheers
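For what it's worth, the "very long expressions" route boils down to one per-instance frame remap. A minimal sketch of that remap, assuming each impact carries a start frame (e.g. derived from a hit attribute) and the File SOP path then reads the remapped frame; the function and its clamping behaviour are illustrative, not a Houdini API:

```python
# Map the scene frame to a frame inside a pre-simmed bgeo sequence that
# starts playing at start_frame. Before the hit we hold the first frame;
# after the sequence runs out we optionally hold the last one.
def local_frame(global_frame, start_frame, seq_length, hold_last=True):
    f = global_frame - start_frame + 1
    if f < 1:
        return 1                               # impact hasn't happened yet
    if f > seq_length:
        return seq_length if hold_last else 1  # sequence finished
    return f

print(local_frame(120, 100, 50))  # 21 frames into a sequence hit at frame 100
```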
Technical Discussion » $HITTIME, stamp() and Copying
- frankvw
- 61 posts
- Offline
OK, so I can get a solution by adding a ForEach after my copySOP and referencing the attribute in there. It does seem like I am missing something about the copySOP here: within the stamp section you only have access to the built-in attribute variables, but you can't mix a stamp() function for the incoming ‘copy’ geometry with the ‘template point attributes’ of the incoming points, even though they are there when you click on ‘template point attributes’. And the stamp() function placed upstream doesn't seem to like a point() expression to reference the variable either. Hmmm…
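Conceptually, why the ForEach works: a stamp variable is just a per-copy number, and the template point attribute has to be fetched with that number, which is what a per-copy point() lookup on the template input does. A tiny sketch of that indirection (the attribute values are made up):

```python
# Hypothetical per-point attribute on the copy template.
template_hittime = [0.0, 1.25, 2.5]

def stamped_value(copy_number):
    """Per-copy lookup of a template point attribute: the stamp variable
    carries only the copy number; the attribute fetch is a second step."""
    return template_hittime[copy_number]

print([stamped_value(i) for i in range(3)])
```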
Technical Discussion » $HITTIME, stamp() and Copying
- frankvw
- 61 posts
- Offline
hi,
Thanks! Well… almost. The $TRUE and $FALSE values error out for me, but if I substitute:
if($HITTIME >= $T, $T, ((24*$HITTIME)+1))
I get either the hittime frame value stored, or a negative value if not, so that works out fine.
Now the issue: how do I pick up such an attribute on the copy stamp in order to stamp() it? It's definitely there in the spreadsheet when I turn on template point attributes, but it doesn't seem to be available as a stamp variable.
cheers
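Just to sanity-check the two branches of that expression at 24 fps, here it is translated into plain Python (a literal transcription of the if() above, nothing more):

```python
# if($HITTIME >= $T, $T, 24*$HITTIME + 1): a particle whose hittime is
# still at or ahead of the current time returns the current time, a hit
# one returns its hit frame (seconds * fps + 1).
FPS = 24

def hit_frame(hittime, t):
    return t if hittime >= t else FPS * hittime + 1

print(hit_frame(0.5, 2.0))  # hit at 0.5s -> frame 13.0
print(hit_frame(3.0, 2.0))  # not hit yet -> current time 2.0
```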
Technical Discussion » $HITTIME, stamp() and Copying
- frankvw
- 61 posts
- Offline
Hi,
I have hit a brick wall; wondering if anyone has an expression hinty hint.
What I want to automate is the copying of some geo sequences with the copySOP via collision events, so I am adding the $HITTIME and $HITPOS attributes from the collisionPOP to the points. I can see my hittime attribute on the copySOP, but it is a true-or-false value that lasts just for the collision frame. I tried setting the collision event to =$F, but again, I get the collision frame number and then it still disappears.
So how can you get the hittime event (via an expression?) to remain static as the collision frame number/attribute value? And then, how would you pick up the attribute per $ID in the copySOP as a variable value for use upstream in a stamp function?
Any help much appreciated,
thank you.
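What's being asked for here is a latch: once a particle reports a hit, remember the frame and keep returning it on later frames. A minimal sketch of that logic in plain Python (the state dict stands in for whatever persistent storage you build inside Houdini):

```python
# Latch: id -> first hit frame, persisting across frames instead of
# firing for a single frame like the raw collision flag does.
latched = {}

def record_hits(frame, hit_ids):
    """hit_ids: particle ids whose collision flag fired on this frame."""
    for pid in hit_ids:
        latched.setdefault(pid, frame)  # keep only the first hit frame

record_hits(10, [3])
record_hits(11, [3, 7])  # id 3 collides again: its first frame is kept
print(latched)           # {3: 10, 7: 11}
```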
Technical Discussion » openEXR in COPS
- frankvw
- 61 posts
- Offline
Hi,
Cheers, thanks for that. I thought he meant more ‘load’ type settings. Again, please excuse my ignorance; I never really deal with much other than linear images I render myself. Is there anything for exr images like you have with cineon files, where you need the correct white point, black point, film gamma, LUT files, and all that sort of thing for the images to correctly match the film scanning/recording equipment they came from? I know if you don't have those settings with cineon and DPX files it can all go really wrong, but I can't see those kinds of settings for scanned exr files. They are just like tiff or targa files, right? Do I need to specify that the files are 16-bit or floating point or whatever? Or does that just get picked up in the COP, so I don't need to worry about all the 2d film gamma stuff?
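The short version: EXR pixels are linear floating point, so unlike Cineon/DPX there is no log decode or white/black point to undo on load; at most you apply a display transform for viewing. A sketch of the standard sRGB display curve, just to show it is pure math on linear values rather than file metadata:

```python
# Standard sRGB transfer curve: the only transform a linear EXR pixel
# typically needs, and only for display, not on load.
def linear_to_srgb(v):
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

print(round(linear_to_srgb(0.18), 2))  # linear mid-grey displays around 0.46
```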