OrtCUDAProviderOptions Struct Reference

CUDA Provider Options.
#include <onnxruntime_c_api.h>
Public Attributes

int device_id
  CUDA device ID. Defaults to 0.

OrtCudnnConvAlgoSearch cudnn_conv_algo_search
  CUDA convolution algorithm search configuration. See enum OrtCudnnConvAlgoSearch for details. Defaults to OrtCudnnConvAlgoSearchExhaustive.

size_t gpu_mem_limit
  CUDA memory limit in bytes (pass the maximum size_t to use all available memory). Defaults to SIZE_MAX.

int arena_extend_strategy
  Strategy used to grow the memory arena: 0 = kNextPowerOfTwo, 1 = kSameAsRequested. Defaults to 0.

int do_copy_in_default_stream
  Flag indicating whether copying takes place on the same stream as compute in the CUDA EP: 0 = use separate streams for copying and compute, 1 = use the same stream for both. Defaults to 1. WARNING: setting this to 0 may result in data races for some models; see issue #4829 for details.

int has_user_compute_stream
  Flag indicating whether a user-provided compute stream is supplied. Defaults to 0.

void * user_compute_stream
  User-provided compute stream. If provided, also set has_user_compute_stream to 1.

OrtArenaCfg * default_memory_arena_cfg
  CUDA memory arena configuration parameters.

int tunable_op_enabled
  Enable TunableOp: set to 1 to enable; otherwise it is disabled by default. This option can be superseded by the environment variable ORT_CUDA_TUNABLE_OP_ENABLED.
CUDA Provider Options.
Definition at line 370 of file onnxruntime_c_api.h.
int OrtCUDAProviderOptions::arena_extend_strategy

Strategy used to grow the memory arena: 0 = kNextPowerOfTwo, 1 = kSameAsRequested. Defaults to 0.
Definition at line 407 of file onnxruntime_c_api.h.
OrtCudnnConvAlgoSearch OrtCUDAProviderOptions::cudnn_conv_algo_search

CUDA convolution algorithm search configuration. See enum OrtCudnnConvAlgoSearch for details. Defaults to OrtCudnnConvAlgoSearchExhaustive.
Definition at line 393 of file onnxruntime_c_api.h.
OrtArenaCfg* OrtCUDAProviderOptions::default_memory_arena_cfg
CUDA memory arena configuration parameters.
Definition at line 430 of file onnxruntime_c_api.h.
int OrtCUDAProviderOptions::device_id

CUDA device ID. Defaults to 0.
Definition at line 387 of file onnxruntime_c_api.h.
int OrtCUDAProviderOptions::do_copy_in_default_stream

Flag indicating whether copying takes place on the same stream as compute in the CUDA EP: 0 = use separate streams for copying and compute, 1 = use the same stream for both. Defaults to 1. WARNING: setting this to 0 may result in data races for some models; see issue #4829 for details.
Definition at line 416 of file onnxruntime_c_api.h.
size_t OrtCUDAProviderOptions::gpu_mem_limit

CUDA memory limit in bytes (pass the maximum size_t to use all available memory). Defaults to SIZE_MAX.
Definition at line 399 of file onnxruntime_c_api.h.
int OrtCUDAProviderOptions::has_user_compute_stream

Flag indicating whether a user-provided compute stream is supplied. Defaults to 0.
Definition at line 421 of file onnxruntime_c_api.h.
int OrtCUDAProviderOptions::tunable_op_enabled

Enable TunableOp: set to 1 to enable; otherwise it is disabled by default. This option can be superseded by the environment variable ORT_CUDA_TUNABLE_OP_ENABLED.
Definition at line 436 of file onnxruntime_c_api.h.
void* OrtCUDAProviderOptions::user_compute_stream

User-provided compute stream. If provided, also set has_user_compute_stream to 1.
Definition at line 426 of file onnxruntime_c_api.h.