Houdini 21.0 Nodes Geometry nodes

Neural Point Surface 1.0 geometry node

Turns a point cloud into a VDB surface using a pretrained convolutional neural network.

Since 21.0

Overview

The Neural Point Surface takes a point cloud as input and reconstructs a VDB surface from it. It contains multiple specialized pretrained models to yield specific looks depending on the material being surfaced. In general, this node should allow users to reconstruct smooth surfaces while preserving the sharp, high frequency details described by the point cloud.

Tip

By default, the node will try to execute on the GPU using CUDA, so if you have an NVIDIA card in your machine, make sure you have both CUDA 12.8 and cuDNN 9.x installed to get the best performance. If CUDA is unavailable, the node will fall back to DirectML on Windows or CoreML on macOS. If no GPU engine is found, the CPU will be used to perform inference. Keep in mind that inference on the CPU is extremely slow compared to the GPU, and the performance cost of running this node on the CPU can quickly outweigh its advantages. As a point of comparison, when using an NVIDIA 4090 the performance should be similar to other surfacing methods in Houdini.

For more information, see the Neural point surfacing page in the MPM chapter.

Inputs

Points to Surface

The points to derive the surface from.

Parameters

Overwrite Voxel Size

By default, the node estimates an appropriate voxel size based on the point cloud distribution. Enable this if you want to overwrite the voxel size manually.

Voxel Size

Defines the voxel size of the generated surface grid manually.

Voxel Scale

A multiplier on the estimated voxel size. This allows the user to bias the estimated voxel size without completely overwriting it.
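The interaction between the three voxel parameters above can be sketched as follows. This is an illustrative sketch only, not the node's actual implementation; the estimation heuristic is internal to the node, so `estimate_voxel_size` below is a hypothetical stand-in.

```python
def estimate_voxel_size(avg_point_spacing):
    # Hypothetical stand-in for the node's internal heuristic, which
    # derives a voxel size from the point cloud distribution.
    return avg_point_spacing

def effective_voxel_size(avg_point_spacing, overwrite, voxel_size, voxel_scale):
    if overwrite:
        # "Overwrite Voxel Size" enabled: the manual "Voxel Size" is used
        # directly, and the estimate is ignored.
        return voxel_size
    # Otherwise, "Voxel Scale" biases the estimated size without
    # replacing it completely.
    return estimate_voxel_size(avg_point_spacing) * voxel_scale
```

For example, with an average point spacing of 0.1 and a Voxel Scale of 2.0, the effective voxel size would be 0.2 unless Overwrite Voxel Size is enabled.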

Neural Model

Defines which pretrained model to use to perform inference. Each model produces a different look.

Balanced

This is the default. It should perform well in most scenarios and is likely to give the best results when sharp features of solid models must be preserved.

Smooth

Optimized for smooth surface reconstruction. This model will lack high frequency details compared to the Balanced model, but can get rid of more bumps across smooth surfaces.

Liquid

Optimized to reconstruct liquid simulations. This model will preserve smooth curved surfaces, as well as sharp transitions of waves crashing and rolling. It is also good at reconstructing thin sheets of water by connecting nearby water droplets in a temporally stable manner.

Granular

Optimized to properly capture rough and granular surfaces. This model will try to ignore the loose points near dense surfaces to reduce fuzziness and get more defined shapes while preserving loose points that are fully detached from the surfaces. This is especially useful with Chunky MPM materials.

Custom

Use a custom model that does not ship with Houdini.

ONNX Model Path

The file path of the custom ONNX model.

Force Closed SDF

Makes sure the surface is well formed and does not contain holes from missing zero crossings.

Advanced

Enable Partitioning

Enable partitioning so that large point clouds can be processed on OpenCL devices with a limited amount of memory.

Note

This option should only be turned on if your OpenCL device (GPU) is running out of memory while cooking the node. When activated, the pre-processing and post-processing tasks before and after inference are divided into smaller chunks to reduce peak memory consumption. However, this may significantly degrade performance and should remain turned off unless necessary.

Partition Max Size

Recursively subdivides the input point cloud until no single partition exceeds this number of points.
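The recursive subdivision described above can be sketched as a spatial split that halves the point cloud along its longest bounding-box axis until every partition fits under the size limit. This is an illustrative sketch under that assumption, not the node's internal algorithm.

```python
def partition(points, max_size):
    # Base case: this chunk already fits under "Partition Max Size".
    if len(points) <= max_size:
        return [points]
    # Measure the bounding-box extent along each axis and split on the longest.
    dims = len(points[0])
    extents = [max(p[d] for p in points) - min(p[d] for p in points)
               for d in range(dims)]
    axis = extents.index(max(extents))
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    # Recurse into both halves until every partition is small enough.
    return partition(pts[:mid], max_size) + partition(pts[mid:], max_size)
```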

Display Guides

Visualize the point cloud partitioning.

Execution Provider

Determines which ONNX execution provider to use for inference. By default, the node will attempt to pick the best available provider and will prefer to use GPU acceleration if possible.

Automatic

Chooses the best provider for the current system. This option prioritizes CUDA if it is installed, falls back to DirectML or CoreML depending on the platform, and uses CPU inference if no GPU provider is available.

CPU

Performs inference on the CPU.

CUDA

Performs inference using CUDA/cuDNN. CUDA and cuDNN must be installed using the packages provided by NVIDIA.

DirectML

Only available on Windows. Performs inference using the Windows Direct Machine Learning library.

CoreML

Only available on macOS. Performs inference using Apple’s Core ML library.
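The Automatic fallback order described above can be sketched as follows. The availability checks are hypothetical stand-ins for the node's internal detection; this is an illustration of the selection order, not the node's actual code.

```python
def choose_provider(cuda_available, platform):
    # CUDA is preferred on any platform where it is installed.
    if cuda_available:
        return "CUDA"
    # Otherwise fall back to the platform's native GPU provider, if any.
    if platform == "windows":
        return "DirectML"
    if platform == "macos":
        return "CoreML"
    # No GPU provider found: fall back to CPU inference.
    return "CPU"
```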

See also

Geometry nodes