UV to world position
- mzigaib
- Member
- 948 posts
- Joined: April 2008
- Offline
- Mario Marengo
- Member
- 941 posts
- Joined: July 2005
- Offline
- mzigaib
- Member
- 948 posts
- Joined: April 2008
- Offline
Thank you for replying.
I am trying to stick points from my rest object to my deforming object using the UVs, so I can transfer the world position from the UVs on my deform object to my rest points.
I don't understand why this conversion isn't easy; I can see the data in the details view, and all that's needed is to convert that object's UVs into world-position space.
Anyway, I couldn't find a way to do that conversion, but for anyone interested, I found a way to stick points to deformed objects on another forum. The conversion issue stays open, though:
http://forums.odforce.net/index.php?/topic/11577-world-position-from-uv-texture-coordinates/page__gopid__73927&#entry73927 [forums.odforce.net]
Anyway, thanks for the feedback.
- Mario Marengo
- Member
- 941 posts
- Joined: July 2005
- Offline
Ah. I see.
Texture UV's are probably not the best choice for parameterizing 3D geometry because they're frequently not unique. All of the uv projections, with the exception of unwrap (and sometimes pelt), can lead to non-unique uv coordinates. And to be able to do a reverse mapping from uv's to position, the first requirement would be that the mapping be unique – any given uv maps to one (and only one) P.
With that in mind, a hypothetical texture-uv-to-pos operator would first need to enforce a unique uv projection per element (point/vertex), likely discarding your pre-assigned uv's and redefining the topology of the deformer geo (to allow it to work with points instead of vertices for example) in the process. Once the rest-geo and point cloud are both in this well-defined uv space, it can finally assign some form of interpolating function to each pc-point to actually carry out the uv-space-to-some-other-unique-attribute-space mapping (like uv to P).
At this point, the interpolation will likely come from one of the following groups: 1) A simple kernel-based reconstruction filter, likely isotropic, such as metaballs, or 2) A more sophisticated (and topology-aware) refinement of #1, like Generalized Barycentric Coordinates or anisotropic filters. Lots of choices in both groups.
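To make the type1 idea concrete, here's a minimal Python sketch of an isotropic kernel reconstruction: each rest sample carries a (uv, P) pair, and P at a query uv is the falloff-weighted blend of all samples whose kernels cover it. The function names and the kernel shape are illustrative only, not any Houdini API:

```python
import math

def kernel(d, radius):
    # Smooth metaball-like falloff: 1 at d=0, fading to 0 at d>=radius.
    if d >= radius:
        return 0.0
    t = d / radius
    return (1.0 - t * t) ** 2

def reconstruct_P(query_uv, samples, radius=0.5):
    """samples: list of ((u, v), (x, y, z)) pairs from the rest geometry.
    Returns the kernel-weighted average position, or None if no kernel
    covers the query uv."""
    wsum = 0.0
    acc = [0.0, 0.0, 0.0]
    for (u, v), P in samples:
        d = math.hypot(u - query_uv[0], v - query_uv[1])
        w = kernel(d, radius)
        wsum += w
        for i in range(3):
            acc[i] += w * P[i]
    if wsum == 0.0:
        return None  # query fell outside every kernel's support
    return tuple(c / wsum for c in acc)
```

This is exactly why the kernel radius matters in the lattice approach below: too small and queries fall outside every kernel's support; too large and the reconstruction over-smooths.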
By this point you start thinking “Why bother with this crippled 2D texture uv space at all?”. If the intent is just to track arbitrary points on a deforming geometry, then why not just stick with the “rest” space where these things were well defined to begin with? The only real issue here, once the smoke clears, is finding that interpolation function, but that has to be solved for any chosen space – and the only dubious distinction of texture UV's is that they have one less dimension than the space you want to map to… i.e., not helpful.
So… from the available tools in Houdini:
1) The LatticeSOP (in point mode) can do the type1 (metaball-kernel-based) interpolation.
2) Point clouds, and their associated pcfilter() function, do a type1 interpolation as well.
3) As of H11, there's a shading-context-only VEX function called sample_geometry(), which can do a much more accurate mapping because it stores both the micropoly ID and the parametric UV where each pc-point came from, so a “primuv()”-type mapping can be used directly. It's not available in SOPs yet (though I would imagine the ScatterSOP could maybe be enhanced to produce primID+parametricUV attributes…SESI?), but it's good to know about if the final use of these points of yours is only related to shading (see $HH/vex/include/physicalsss.h for sample usage).
4) Simon started a few threads in odForce relating to type2 interpolation (specifically, generalized barycentric coords), along with some implementations, that you may want to search for.
5) …there are likely more options that I'm forgetting about… CHOPs?
Attached is a solution using the LatticeSOP approach – press play to see the deformation tracking. This could be done with a single constant kernel radius and still probably work, but I added some per-point radius calculations to make it a tiny bit more robust – though these are isotropic kernels, so very elongated faces would cause problems regardless…
Cheers.
- Anonymous
- Member
- 678 posts
- Joined: July 2005
- Offline
- Pagefan
- Member
- 519 posts
- Joined:
- Offline
- tjeeds
- Member
- 339 posts
- Joined: Aug. 2007
- Offline
Not available for SOPs yet (though I would imagine the ScatterSOP could maybe be enhanced to produce primID+parametricUV attributes…SESI?)
YESSSSSS!
This functionality is available through a VOPSOP. If your scatter geo has normals, the scattered points will pick them up, and you can use them to bias and ray-cast the scattered points back onto the original geo; this will give you prim id and parametric uv's. This is easily wrapped up into an asset, BUT it would be extremely awesome if the Scatter SOP had a toggle to just get these automatically.
This stuff is so handy, especially when using the Creep POP.
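The ray-cast-back idea can be sketched outside Houdini too. Below is a hypothetical plain-Python version using the standard Möller–Trumbore ray/triangle test; the nearest hit yields exactly the prim id plus parametric (u, v) pair described above (all names are illustrative, and this is not the Intersect VOP itself):

```python
# Cast a ray against a triangle soup and return (prim_id, u, v, t) for the
# nearest hit, using the Moller-Trumbore intersection test.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_triangles(orig, dirn, tris):
    """tris: list of (p0, p1, p2) triangles. Returns (prim_id, u, v, t)
    for the nearest hit along the ray, or None if nothing is hit."""
    best = None
    for pid, (p0, p1, p2) in enumerate(tris):
        e1, e2 = sub(p1, p0), sub(p2, p0)
        pvec = cross(dirn, e2)
        det = dot(e1, pvec)
        if abs(det) < 1e-12:
            continue  # ray is parallel to the triangle's plane
        inv = 1.0 / det
        tvec = sub(orig, p0)
        u = dot(tvec, pvec) * inv
        if u < 0.0 or u > 1.0:
            continue
        qvec = cross(tvec, e1)
        v = dot(dirn, qvec) * inv
        if v < 0.0 or u + v > 1.0:
            continue
        t = dot(e2, qvec) * inv
        if t > 1e-12 and (best is None or t < best[3]):
            best = (pid, u, v, t)  # keep the closest hit so far
    return best
```

The normals mentioned above only serve to pick the ray direction; once the hit is found, (prim_id, u, v) is all the "sticking" data a point needs.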
Jesse Erickson
Fx Animator
WDAS
- Mario Marengo
- Member
- 941 posts
- Joined: July 2005
- Offline
tjeeds
This functionality is available through a VOPSOP. If your scatter geo has normals the scattered points will pick them up and you can use them to bias and ray cast the scattered points back onto the original geo, this will give you prim id and parametric uv's.
I thought that, in SOP-land at least, parametric uv's were only defined for parametric surfaces (mesh, NURB, Bezier); and that for polys (faces, curves) you could only get u (no v)…
Are you able to get meaningful uv's (that is, the kind you can remap using primuv() for example) out of polys?
At shading time, the story's a little different… so I'm curious how far you've been able to take it in SOPs.
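For parametric surfaces, the forward (“primuv()-style”) half of the mapping is just interpolation in the patch's own (u, v). As a hedged illustration, here's what that evaluation looks like for a single quad, using plain bilinear interpolation of the corner positions (quad_primuv is a made-up name, not a Houdini call):

```python
# Bilinear "primuv()-style" evaluation on one quad: interpolate along the
# bottom and top edges at u, then between those two points at v.
def quad_primuv(corners, u, v):
    """corners: (p0, p1, p2, p3) positions in vertex order around the quad."""
    p0, p1, p2, p3 = corners

    def lerp(a, b, t):
        return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

    bottom = lerp(p0, p1, u)  # edge p0 -> p1
    top = lerp(p3, p2, u)     # edge p3 -> p2
    return lerp(bottom, top, v)
```

The hard part, as discussed, is the reverse direction: going from a position back to a well-defined (u, v) in the first place.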
- tjeeds
- Member
- 339 posts
- Joined: Aug. 2007
- Offline
- Mario Marengo
- Member
- 941 posts
- Joined: July 2005
- Offline
tjeeds
Yep (unless I am misusing terminology), ray casting onto a polygon gives you the uv position on each poly. Works great for sticking things to deforming objects or prepping points for creeping in POPs.
Huh. OK; I guess I'm not sure what you mean by “ray casting” then (ray sop?). If you could post a simple example, it would clear things up.
…but you got me curious about the kind of “parametric UVs” produced by xyzdist() (hou.Geometry.nearestPrim()) and whether they can be fed directly (or not) to the primuv() (hou.Prim.positionAt()) function. At first glance, the xyzdist()/primuv() combo seems to be made for this sort of thing, and yet…
First of all, quadratic surfaces (primitive sphere, tube, etc) are not supported, so those are out, but that's ok for the typical application of this stuff.
For mesh, nurb, and bezier surfaces, the mapping works as you'd expect and you can do the 2-way mapping pretty accurately (except there seems to be an offset in V for meshes for some reason).
Now Polys… not so straight forward.
I noticed that xyzdist() always returns a uv pair, even for polys, but the primuv() function doesn't know how to interpret them – it just uses the U and discards the V (even if the poly is closed), and the Python versions won't even accept a V (for the hou.primType.Polygon/Face type).
Then I noticed that these polygon UV pairs (returned by xyzdist()) seemed to make sense only when you interpret the 2 edges coming out of point #0 as though they were the basis vectors of a 2D frame… at least for triangles and quads, but not for n-gons.
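That “two edges out of point #0 as basis vectors” interpretation can be checked in a few lines of Python: solving P - p0 = u*e1 + v*e2 via the normal equations recovers (u, v) for a position on a planar triangle. This is a sketch under that interpretation, not SESI's actual code:

```python
# Recover (u, v) such that P = p0 + u*(p1 - p0) + v*(p2 - p0), treating the
# two edges out of point #0 as the basis vectors of a 2D frame.
def uv_from_position(p0, p1, p2, P):
    e1 = tuple(p1[i] - p0[i] for i in range(3))
    e2 = tuple(p2[i] - p0[i] for i in range(3))
    w = tuple(P[i] - p0[i] for i in range(3))

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Normal equations for the least-squares solve in the triangle's plane.
    d00, d01, d11 = dot(e1, e1), dot(e1, e2), dot(e2, e2)
    d20, d21 = dot(w, e1), dot(w, e2)
    denom = d00 * d11 - d01 * d01
    u = (d11 * d20 - d01 * d21) / denom
    v = (d00 * d21 - d01 * d20) / denom
    return u, v
```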
After a little more digging, I noticed that in the case of polys with sides>4 (what I'm calling n-gons), the parameterization becomes radial, with U along the perimeter (in vertex order) and V toward the centroid.
Anyway. I put all these observations together into a couple of PythonSOP assets that attempt to do in SOPs, what the sample_geometry() function does in the shading context, though it doesn't support primitives, only poly, poly-mesh, mesh, nurbs, and bezier surfaces.
1. The ScatterCapture SOP stashes the primID and parametric UVs (as returned by the Python equiv of xyzdist()) into a point attribute called “puv”, and
2. The ScatterDeform SOP reads this attribute and performs the reverse mapping, taking into account the different UV interpretations mentioned.
These could probably be optimized quite a bit by pre-calculating some of the stuff at the capture stage, but this is just a proof of concept at the moment.
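The capture/deform split described above can be sketched for triangles in a few lines of hypothetical Python: capture stores (prim id, parametric uv) against the rest geometry, and deform re-evaluates those stored coordinates on the moved geometry, so the point travels with its primitive. The names and attribute layout here are invented for illustration, not the actual asset internals:

```python
def capture(P, prim_id, rest_tris):
    """Stash which primitive a point came from plus its parametric (u, v),
    like a "puv" point attribute."""
    p0, p1, p2 = rest_tris[prim_id]
    e1 = [p1[i] - p0[i] for i in range(3)]
    e2 = [p2[i] - p0[i] for i in range(3)]
    w = [P[i] - p0[i] for i in range(3)]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    d00, d01, d11 = dot(e1, e1), dot(e1, e2), dot(e2, e2)
    denom = d00 * d11 - d01 * d01
    u = (d11 * dot(w, e1) - d01 * dot(w, e2)) / denom
    v = (d00 * dot(w, e2) - d01 * dot(w, e1)) / denom
    return {"prim": prim_id, "uv": (u, v)}

def deform(attr, tris):
    """Re-evaluate the stored (prim, uv) on the deformed geometry."""
    p0, p1, p2 = tris[attr["prim"]]
    u, v = attr["uv"]
    return tuple(p0[i] + u * (p1[i] - p0[i]) + v * (p2[i] - p0[i])
                 for i in range(3))
```

Since the uv capture happens once against the rest pose, the per-frame deform step is just one primitive lookup and an interpolation, which is why this approach has no slippage on the captured primitive.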
Cheers.
- tjeeds
- Member
- 339 posts
- Joined: Aug. 2007
- Offline
Wow, okay, that's a lot of info. I'll have to look through it tonight, but I quickly put together a file illustrating what I'm talking about. It does what I used to use xyzdist() for, but it's much, much faster.
As for ray casting, I have been using the Intersect vop in conjunction with the Primitive Attribute vop to reproduce anything the Ray sop does. This is super handy when you want to ray cast in POPs, for instance.
I honestly haven't tried this with anything other than quads and tris so you may be correct about that, but it appears to work beautifully for them.
Jesse Erickson
Fx Animator
WDAS
- edward
- Member
- 7715 posts
- Joined: July 2005
- Offline
- Mario Marengo
- Member
- 941 posts
- Joined: July 2005
- Offline
tjeeds
As for ray casting, I have been using the Intersect vop
VEX intersect()! <slaps forehead>
I have never used this function in VEX, so here I was, wondering how you were getting parametric UVs out of the RaySOP.
Definitely the way to go with polys – and thanks to VEX, it's much, much faster than the Python approach. Thanks for sharing this!
The only weaknesses are due mostly to intersect() missing hits on non-tri-poly surfaces (I'm guessing it's the same code as the RaySOP which suffers from the same problems). But for polys… oh yeah, it's a beauty.
I will submit the following in a bug report later, but I'm seeing the same re-mapping problems in the VEX version that I saw in the Python one, for parametric surfaces:
* For mesh, nurb, and bezier: either the forward (xyzdist() equiv) or backward (primuv() equiv) portion of the mapping is broken (mesh doesn't cover the entire V domain, etc).
And I'll also add my voice to tjeeds' RFE: Let's get this point cloud capture/deform stuff (a version of sample_geometry) working natively in SOPs, please.
@Edward: Thanks, that's a nice solution too – kind'a like the lattice method above. Unfortunately they both have some slippage going on (the wiredeform more so than the lattice for some weird reason – I would have expected the opposite).
Cheers.
- seaparticle
- Member
- 1 posts
- Joined: May 2011
- Offline
- Alejandro Echeverry
- Member
- 691 posts
- Joined: June 2006
- Offline
Feel The Knowledge, Kiss The Goat!!!
http://www.linkedin.com/in/alejandroecheverry [linkedin.com]
http://vimeo.com/lordpazuzu/videos [vimeo.com]
- sl0throp
- Member
- 258 posts
- Joined:
- Offline