Execution Provider

Determines which ONNX execution provider to use for inference. By default, the node attempts to pick the best available provider and prefers to use GPU acceleration.

Automatic

Chooses the best provider for the current system. This option prioritizes CUDA if it is installed, falls back to DirectML (Windows) or CoreML (macOS) depending on the platform, and uses CPU inference if no GPU provider is available.
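The exact selection logic is internal to the node, but a minimal sketch of this kind of priority-based fallback, assuming the onnxruntime Python package and its standard provider names (the function name and model path below are illustrative), could look like this:

    import onnxruntime as ort

    def pick_providers():
        # Preferred GPU providers, in priority order.
        preferred = [
            "CUDAExecutionProvider",    # NVIDIA GPUs via CUDA/cuDNN
            "DmlExecutionProvider",     # DirectML on Windows
            "CoreMLExecutionProvider",  # Core ML on macOS
        ]
        available = ort.get_available_providers()
        chosen = [p for p in preferred if p in available]
        chosen.append("CPUExecutionProvider")  # always-available fallback
        return chosen

    # onnxruntime tries the providers in order and falls back down the
    # list if one fails to initialize.
    session = ort.InferenceSession("model.onnx", providers=pick_providers())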

CPU

Performs inference on the CPU.

CUDA

Performs inference using CUDA/cuDNN. CUDA and cuDNN must be installed using the packages provided by NVIDIA.
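As a quick check, assuming the onnxruntime Python package is the inference backend, you can confirm whether the CUDA provider is present in the installed build before selecting this option:

    import onnxruntime as ort

    # Lists the providers compiled into the installed onnxruntime build;
    # the CUDA provider only appears with the GPU-enabled package.
    print(ort.get_available_providers())
    # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']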

DirectML

Only available on Windows. Performs inference using the Windows Direct Machine Learning library.

CoreML

Only available on macOS. Performs inference using Apple’s Core ML library.
