The objective of this step is to create a dense geometric surface representation of the scene. The output of this node is a high-poly mesh, which can be used for any further processing in Houdini.
You can provide a bounding box around the output point cloud of the AV Depth Map node to isolate regions of the reconstructed scene to refine. Feed a box (no other shape) into the second input.
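As a rough illustration of how a bounding box restricts the region of interest, the sketch below filters a point cloud against axis-aligned bounds. The function `points_in_box` is hypothetical and not part of the node or AliceVision; it only mirrors the idea of the second input.

```python
# Hypothetical sketch: keep only points inside an axis-aligned bounding box,
# analogous to feeding a box into the node's second input to limit refinement.
def points_in_box(points, box_min, box_max):
    """Return the points whose coordinates lie within [box_min, box_max]."""
    return [
        p for p in points
        if all(lo <= c <= hi for c, lo, hi in zip(p, box_min, box_max))
    ]

cloud = [(0.5, 0.5, 0.5), (2.0, 0.1, 0.3), (-1.0, 0.0, 0.0)]
inside = points_in_box(cloud, (0, 0, 0), (1, 1, 1))  # only the first point
```

In the node itself the box may be transformed; this sketch assumes an axis-aligned box for brevity.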
Start the cooking process for this step.
This toggle controls whether the node automatically recooks when any of its dependencies change.
This toggle controls whether the status of the current node is printed to the console. This is useful for getting a quick overview of the progress.
Max Input Points
Max input points loaded from depth map images.
Max Points
Max points at the end of the depth maps fusion.
Max Points per Voxel
Max points per voxel.
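To give an intuition for the per-voxel cap, the sketch below buckets points into a uniform grid and keeps at most a fixed number per cell. The function `cap_points_per_voxel` is an illustrative assumption, not AliceVision's actual fusion code.

```python
from collections import defaultdict

# Hypothetical sketch: limit point density by capping the number of points
# stored in each cell of a uniform voxel grid.
def cap_points_per_voxel(points, voxel_size, max_per_voxel):
    """Bucket points into voxels and keep at most max_per_voxel per cell."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel_size) for c in p)
        if len(buckets[key]) < max_per_voxel:
            buckets[key].append(p)
    return [p for cell in buckets.values() for p in cell]

# Three points share the voxel at the origin; with max_per_voxel=2, one is dropped.
dense = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (0.3, 0.3, 0.3), (1.5, 0.0, 0.0)]
capped = cap_points_per_voxel(dense, 1.0, 2)
```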
The sampling step used to load depth values from the depth maps is computed from Max Input Points. This parameter defines the minimum value for that step, so that on small datasets the node does not spend excessive time at the start loading every depth value.
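One plausible way such a step could be derived is shown below: choose a 2D pixel stride so roughly Max Input Points values are loaded, floored at the minimum step. The formula and the name `compute_load_step` are assumptions for illustration, not AliceVision's exact implementation.

```python
import math

# Hypothetical sketch: derive a sampling stride from the point budget.
# A stride of s keeps roughly num_depth_values / s**2 pixels, since every
# s-th pixel is sampled in both image dimensions.
def compute_load_step(num_depth_values, max_input_pts, min_step=1):
    """Stride so that about max_input_pts depth values are loaded, >= min_step."""
    step = math.ceil(math.sqrt(num_depth_values / max_input_pts))
    return max(min_step, step)

compute_load_step(1_000_000, 10_000)          # stride of 10
compute_load_step(100, 10_000, min_step=2)    # small dataset: floor applies
```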
Whether to colorize the output dense point cloud and mesh.
Refine the depth map fusion with a new pixel size defined by angle and similarity scores.
Estimate Space From SfM
Estimate the 3D space from the SfM result.
Min Observations For SfM Space Estimation
Minimum number of observations for SfM space estimation.
Min Observations Angle For SfM Space Estimation
Minimum angle between two observations for SfM space estimation.
Add Landmarks To The Dense Point Cloud
Add SfM Landmarks to the dense point cloud.
Filter Large Triangles Factor
Remove all large triangles. A triangle is considered large if one of its edges is longer than N times the average edge length. Set to zero to disable.
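The criterion above can be sketched directly: compute the average edge length over the mesh, then drop any triangle with an edge longer than the factor times that average. The function `filter_large_triangles` is a hypothetical illustration, not the node's actual filter.

```python
import math

# Hypothetical sketch of the "Filter Large Triangles Factor" rule:
# drop triangles with any edge longer than factor * average edge length.
def filter_large_triangles(vertices, triangles, factor):
    """Return triangles whose edges are all <= factor * average edge length.

    A factor of 0 disables the filter, matching the parameter description.
    """
    if factor == 0:
        return list(triangles)

    def edge_len(a, b):
        return math.dist(vertices[a], vertices[b])

    lengths = [edge_len(t[i], t[(i + 1) % 3]) for t in triangles for i in range(3)]
    avg = sum(lengths) / len(lengths)
    return [
        t for t in triangles
        if all(edge_len(t[i], t[(i + 1) % 3]) <= factor * avg for i in range(3))
    ]

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (10, 0, 0)]
tris = [(0, 1, 2), (0, 1, 3)]
kept = filter_large_triangles(verts, tris, 2)  # the long sliver triangle is removed
```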
Keep Only the Largest Mesh
Keep only the largest group of connected triangles.
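Conceptually this is a connected-components pass over the mesh, keeping the biggest component. The union-find sketch below (the name `largest_component` is hypothetical) shows the idea on triangles that share vertices.

```python
from collections import Counter

# Hypothetical sketch: group triangles into components connected by shared
# vertices (union-find) and keep only the largest component.
def largest_component(triangles):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b, c in triangles:
        union(a, b)
        union(b, c)

    counts = Counter(find(t[0]) for t in triangles)
    best = counts.most_common(1)[0][0]
    return [t for t in triangles if find(t[0]) == best]

# Two islands: five connected vertices vs. a lone triangle.
mesh = [(0, 1, 2), (2, 3, 4), (10, 11, 12)]
main = largest_component(mesh)  # the lone triangle is discarded
```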
Num of Iterations
Number of smoothing iterations.
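Each smoothing iteration typically nudges every vertex toward its neighbors; more iterations give a smoother but flatter surface. The Laplacian-style sketch below (`laplacian_smooth` is an illustrative stand-in, not the node's actual smoother) shows what one such iteration does.

```python
# Hypothetical sketch of iterative Laplacian smoothing: each pass replaces
# every vertex with the average position of its neighboring vertices.
def laplacian_smooth(vertices, neighbors, iterations):
    """Smooth vertices for a number of iterations.

    neighbors maps a vertex index to the indices of its connected vertices.
    """
    verts = [tuple(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            nbrs = neighbors[i]
            if not nbrs:
                new.append(v)  # isolated vertices stay put
                continue
            avg = tuple(sum(verts[n][k] for n in nbrs) / len(nbrs) for k in range(3))
            new.append(avg)
        verts = new
    return verts

# Two mutually connected vertices swap positions after one pass.
smoothed = laplacian_smooth([(0, 0, 0), (2, 0, 0)], {0: [1], 1: [0]}, 1)
```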
The environment used for launching the AliceVision command-line utilities. Note that this is a Python expression and should be modified only through “Edit Expression”.
AV Depth Map
This input receives the output of the AV Depth Map node.
This input can optionally receive a bounding box to restrict the region to be reconstructed. Only a (possibly transformed) box is allowed, nothing else.
AV Texturing
This output holds the generated mesh and plugs into the AV Texturing node.
This is the pointcloud generated by the meshing step.