Full Version: 3 point align of 2 similar meshes (like in Maya)
olivierth
Hi!

I've been working with photoscanned data a lot and I'm using Houdini to try and process them (clean/align/delight/etc.). I often scan small props twice. I scan the top, flip it and scan the bottom. The result is 2 meshes of different scale/rotation/translation/topology.

My tool automatically moves, rotates and scales the meshes to align them precisely. All the user has to do is place/snap 3 points on the first mesh and do the same thing for the second mesh. The order in which you place the points on the second mesh doesn't matter at all.
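The core of a 3-point align can be sketched outside Houdini as well. Here's a minimal plain-Python version (the function names are my own, not taken from the HIP): build an orthonormal frame on each triangle of picked points, take the edge-length ratio as the uniform scale, and map one frame onto the other.

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0]]
def norm(a): return math.sqrt(sum(x*x for x in a))
def unit(a):
    n = norm(a)
    return [x / n for x in a]

def frame(p0, p1, p2):
    # Orthonormal frame from a triangle: first edge, normal, and their cross.
    e1 = unit(sub(p1, p0))
    e3 = unit(cross(e1, sub(p2, p0)))
    e2 = cross(e3, e1)
    return [e1, e2, e3]  # rows are the basis vectors

def three_point_align(src, dst):
    """Return a function mapping space so the src triangle lands on dst."""
    # Uniform scale from the first edge's length ratio.
    s = norm(sub(dst[1], dst[0])) / norm(sub(src[1], src[0]))
    Fs = frame(*src)
    Fd = frame(*dst)
    def apply(p):
        d = sub(p, src[0])
        # coordinates of p expressed in the source frame
        c = [sum(Fs[i][k] * d[k] for k in range(3)) for i in range(3)]
        # rebuild in the destination frame, scaled, anchored at dst[0]
        return [dst[0][i] + s * sum(c[j] * Fd[j][i] for j in range(3))
                for i in range(3)]
    return apply
```

Order-independence on the second mesh could then be had by trying all six orderings of the destination points and keeping the one whose third point ends up closest after the mapping.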
kahuna031
Nice Olivierth. Can't try at the moment but have you looked at driving this with the ‘python states’ in H17?

I'm also into photogrammetry and looking for the best solutions to these problems.
malbrecht
Hi,

haven't looked at the HIP (but will do ASAP), but:

> The result is 2 meshes of different scale/rotation/translation/topology.

… if you are using a tool like RealityCapture, this should not happen if your workflow is set up correctly. Ground points are used to make sure that your (arbitrary) local space gets mapped to a common global space correctly, so that (small) objects align pixel-perfect.
With large scale objects (landscapes of >400m with fine detail) the nature of photogrammetry may still introduce issues (lens calibration errors, floating point limitations etc), but everything that fits inside your room should be fine.

Marc
olivierth
Thanks kahuna031! The only code I know is VEX so I'll stick to that for now. Wow, you're working at DICE? You guys do insane photogrammetry work!



malbrecht: I'm using Photoscan and I often:

1. scan one side of a small rock
2. physically flip the rock upside down
3. scan again

Are you saying my software could potentially process both scans and match them automatically by figuring out the floor plane? Whether it does or not, I was really curious to see if I could pull it off on my own (with the help of this community, of course!)

-Olivier
malbrecht
Hi, Olivier,

it's been ages since I used Photoscan (not because it's bad, but simply because of dev requirements) - so I cannot tell for sure if Photoscan has a world-lock function like RealityCapture.

The idea is this: When you do photogrammetry, the points in space you get have no “origin”. Even if you have a metric cube in your scanned scene, you first have to tell your pipeline which points (on the cube) belong to the reference system. In RC this is what ground points are for: by defining points (2d on images, relating to 3d in the point cloud) to sit at given coordinates in your world space, the model you get “simply” gets rotated/scaled/transformed to that universe. Actually, it doesn't get transformed; it's just that the point coordinates are matched up.
In your case - with a rotating object - you'd need to adjust the reference (world space) accordingly, meaning you'd need to rotate the rock in a “known way”, so that your ground points rotate along. I'm certain that Photoscan has something like this “hidden somewhere”.
Since this *should* be part of any photogrammetry pipeline that works with “interlocking” meshes (aligning from one scan to another or, bob beware, even combining Lidar with Ph*metry), aligning meshes - in theory - “is part of the process”.
(I say “in theory”, because in reality ph*metry comes with its own pitfalls in terms of large-model-precision (floating point issues, from my perspective, being the most likely reason).)

Marc
olivierth
Thanks for the info. I'll have a look. …It might also be that my “Standard Edition” Photoscan has that option locked.