
ML Volume Tile Inference geometry node

Runs an ONNX model on volume tiles

Since 21.0

Overview

Volume Tile Inference runs an ONNX model on individual volume tiles, in series or in parallel, while letting you control the inference device (GPU or CPU) and the execution provider. The SOP streamlines the tile creation and reassembly process and works in tandem with the ML Volume Tile Component SOP.

The node takes a VDB or native Houdini volume as an input.
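The overall tile workflow the node automates can be sketched in plain Python. This is an illustration only, not Houdini API: the volume is a 1D list, `model` is a stand-in callable, and the real node works on 3D voxel grids with padding and device control.

```python
def run_tiled(volume, tile_size, model):
    # Crop the volume into fixed-size tiles, run the model on each
    # tile, and reassemble the results in order. The last tile may be
    # smaller than tile_size when the volume does not divide evenly.
    out = []
    for start in range(0, len(volume), tile_size):
        tile = volume[start:start + tile_size]
        out.extend(model(tile))
    return out

volume = list(range(10))
# A placeholder "model" that doubles every voxel value.
doubled = run_tiled(volume, tile_size=4, model=lambda t: [2 * v for v in t])
```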

Parameters

Model File

The model file that is used for inference.

Reload Model

Force a reload of the .onnx file.

Batch Mode

Choose between Multiple Packed and Single (Compiled CPU) mode. Multiple Packed with a GPU provider is generally the preferred mode, but Single (Compiled CPU) gives acceptable inference speed on the CPU when no GPU is available.

Execution Provider

Determines which ONNX execution provider to use for inference. By default, the node attempts to pick the best available provider and prefers to use GPU acceleration.

Automatic

Chooses the best provider for the current system. This option prioritizes CUDA if it is installed, falls back to DirectML (Windows) or CoreML (macOS) depending on the platform, and uses CPU inference if no GPU provider is available.
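The fallback order described above can be sketched as a small selection function. The provider names follow ONNX Runtime's conventions; the node's actual selection logic is internal to Houdini and may differ.

```python
def pick_provider(available, platform):
    # Preference order described for the Automatic setting:
    # CUDA first, then the platform GPU provider, then CPU.
    if "CUDAExecutionProvider" in available:
        return "CUDAExecutionProvider"
    if platform.startswith("win") and "DmlExecutionProvider" in available:
        return "DmlExecutionProvider"
    if platform == "darwin" and "CoreMLExecutionProvider" in available:
        return "CoreMLExecutionProvider"
    return "CPUExecutionProvider"
```

For example, on Linux with CUDA installed this returns `CUDAExecutionProvider`; on a Windows machine without CUDA but with DirectML it returns `DmlExecutionProvider`.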

CPU

Perform inference on the CPU.

CUDA

Perform inference using CUDA/cuDNN. CUDA and cuDNN must be installed using the packages provided by NVIDIA.

DirectML

Only available on Windows. Performs inference using the Windows Direct Machine Learning library.

CoreML

Only available on macOS. Performs inference using Apple’s Core ML library.

Volume

The volume primitives to run inference on.

Tile Settings

Input Fields

The number of fields that are tiled and given to the inference node.

Tile Size

The size of the tiles the node crops from the input volume.

Input Padding

The number of voxels of padding added to each side of the tile before inference.

Output Padding

The number of voxels of padding removed from each side of the tile after inference.
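The padding arithmetic implied by these two parameters can be written out explicitly. This is an illustration of the per-axis sizes, not Houdini API; the function names are hypothetical.

```python
def inference_tile_size(tile_size, input_padding):
    # The cropped tile gains input_padding voxels on each side before
    # it is handed to the model.
    return tile_size + 2 * input_padding

def written_tile_size(model_output_size, output_padding):
    # output_padding voxels are trimmed from each side of the model's
    # output before the tile is written back into the volume.
    return model_output_size - 2 * output_padding
```

With a tile size of 64 and a padding of 8 on both ends, the model sees an 80-voxel span per axis, and trimming 8 voxels of output padding per side restores the original 64.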

Post Processing

Prune Voxels

Enables pruning of voxels in the output volume.

Prune Tolerance

The threshold used to decide which voxels are pruned.
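A minimal sketch of threshold-based pruning, assuming voxels whose magnitude falls at or below the tolerance are reset to a background value of 0.0 so sparse storage can discard them. The exact comparison and background value the node uses are not documented here.

```python
def prune_voxels(voxels, tolerance):
    # Assumed behavior: values at or below the tolerance in magnitude
    # are treated as background (0.0); everything else is kept.
    return [0.0 if abs(v) <= tolerance else v for v in voxels]
```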

Inputs

Input Volume

One or more volume fields to run inference on, along with any extra fields.

Outputs

Output Volume

A volume with the inference results applied and any extra fields.

See also

Geometry nodes