| Enumerator | Description |
|---|---|
| AUTOMATIC | Automatically determine the best provider based on the platform and hardware available. |
| CPU | CPU inference provider; works on all platforms. |
| CUDA | CUDA/cuDNN inference provider; works on Windows and Linux, assuming the GPU driver is new enough and cuDNN is installed on the system. |
| DIRECTML | Uses DirectML; only supported on Windows. |
| COREML | Uses CoreML; only supported on macOS. |
| COUNT | The total number of providers. |
Definition at line 17 of file ML_Types.h.