Marc Albrecht


About Me



My Tutorials

Quicktip: Tearing Cloth Setup (Quick Tips)
Using the Treeview for Rigging (Quick Tips)
Walkthrough: Cloth Sim for a Movie Shot (Beginner)
Getting Weightmaps into H16 (Quick Tips)
Wind for Cloth Sims in H16 (Quick Tips)
Adding a Flipbook Framecounter (Quick Tips)

Recent Forum Posts

Cloth basics on animated character with vellum Jan. 2, 2019, 3 a.m.


Have a look at this great video from Jeff: []
He shows how to blend from a T-pose into a walk cycle; I guess that covers everything you need.


Not working geo2_deform Dec. 9, 2018, 4:31 a.m.


I just found this discussion - could have spared me some time if I had seen it before :-)

Yes, the problem is that the guide-groom node in your setup is “locking” the grooming to one single frame. Houdini's documentation even mentions that the guide-groom node expects static geometry. If you want the grooming to follow your animation, you need to groom on the rest geometry and then have a deform node make the output follow the deformation of the skin (the animated geometry).
I find that more than clumsy to set up, so I didn't dive deeper into your scene - the solution you show in the video is roughly the same one I came up with (bypassing the lock-down groom node).
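The groom-on-rest-then-deform idea can be sketched outside of Houdini in a few lines of plain Python (this is not the hou API; all function names here are made up for illustration): in a capture phase, bind each guide point to its nearest point on the rest skin and store the offset; in a deform phase, re-apply that offset to the animated skin point.

```python
def nearest_index(p, points):
    # Brute-force closest point; a real groom tool would use an
    # acceleration structure instead of scanning every skin point.
    def dist2(a, b):
        return sum((a[k] - b[k]) ** 2 for k in range(3))
    return min(range(len(points)), key=lambda i: dist2(p, points[i]))

def bind_guides(guides, rest_skin):
    # Capture phase (on the static rest geometry): remember, per guide
    # point, which rest-skin point it follows and the offset from it.
    binds = []
    for g in guides:
        i = nearest_index(g, rest_skin)
        offset = tuple(g[k] - rest_skin[i][k] for k in range(3))
        binds.append((i, offset))
    return binds

def deform_guides(binds, anim_skin):
    # Deform phase (per animated frame): re-apply each stored offset
    # to the current position of the bound skin point.
    return [tuple(anim_skin[i][k] + off[k] for k in range(3))
            for i, off in binds]
```

So the groom itself only ever sees the rest pose; the animation is carried entirely by the deform step, which is exactly why the groom node can get away with expecting static geometry.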


3 point align of 2 similar meshes (like in Maya) Oct. 23, 2018, 3:03 p.m.

Hi, Olivier,

it's been ages since I last used Photoscan (not because it's bad, but simply because of dev requirements), so I can't say for sure whether Photoscan has a world-lock function like RealityCapture's.

The idea is this: when you do photogrammetry, the points in space you get have no “origin”. Even if you have a metric cube in your scanned scene, you first have to tell your pipeline which points (on the cube) belong to the reference system. In RC this is what ground points are for: by defining points (2D on images, relating to 3D in the point cloud) to sit at given coordinates in your world space, the model you get “simply” gets rotated/scaled/translated into that universe. Actually, it doesn't get transformed at all; it's just that the point coordinates are matched up.
In your case - with a rotating object - you'd need to adjust the reference (world space) accordingly, meaning you'd need to rotate the rock in a “known way” so that your ground points rotate along with it. I'm certain that Photoscan has something like this “hidden somewhere”.
Since this *should* be part of any photogrammetry pipeline that works with “interlocking” meshes (aligning one scan to another or, bob beware, even combining Lidar with Ph*metry), aligning meshes - in theory - “is part of the process”.
(I say “in theory” because in reality ph*metry comes with its own pitfalls in terms of large-model precision, floating point issues being, from my perspective, the most likely reason.)
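The “matching up” of point coordinates described above is, at its core, the three-point alignment from the thread title. Here is a minimal plain-Python sketch of that math (not a Photoscan or RealityCapture API; rotation and translation only, with scale left out for brevity): build an orthonormal frame from three non-collinear points in each space, then express any point in the source frame and rebuild it in the destination frame.

```python
import math

def sub(a, b): return tuple(a[i] - b[i] for i in range(3))
def add(a, b): return tuple(a[i] + b[i] for i in range(3))
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
def norm(a):
    l = math.sqrt(sum(c * c for c in a))
    return tuple(c / l for c in a)

def frame(p0, p1, p2):
    # Orthonormal basis from three non-collinear points: x along p0->p1,
    # z perpendicular to the triangle, y completing a right-handed frame.
    x = norm(sub(p1, p0))
    z = norm(cross(x, sub(p2, p0)))
    y = cross(z, x)
    return (x, y, z)

def align_3pt(src, dst):
    # src, dst: lists of three corresponding points. Returns a function
    # mapping any point from src's space into dst's space.
    fs, fd = frame(*src), frame(*dst)
    def to_dst(p):
        d = sub(p, src[0])
        # Coordinates of p in the source frame...
        coords = tuple(sum(d[i] * fs[a][i] for i in range(3))
                       for a in range(3))
        # ...rebuilt in the destination frame, offset from dst[0].
        out = dst[0]
        for a in range(3):
            out = add(out, tuple(coords[a] * fd[a][i] for i in range(3)))
        return out
    return to_dst
```

With ground points, the three (or more) correspondences come from marked image features at known world coordinates; the same construction then carries every reconstructed point into that world space.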