onnxruntime::Tensor Class Reference (final)

#include <tensor.h>

Public Member Functions

 Tensor ()=default
 
 Tensor (MLDataType elt_type, const TensorShape &shape, void *p_data, const OrtMemoryInfo &location, ptrdiff_t offset=0, gsl::span< const int64_t > strides={})
 
 Tensor (MLDataType elt_type, const TensorShape &shape, void *p_data, std::shared_ptr< IAllocator > deleter, ptrdiff_t offset=0, gsl::span< const int64_t > strides={})
 
 Tensor (MLDataType elt_type, const TensorShape &shape, std::shared_ptr< IAllocator > allocator)
 Create a Tensor that allocates and owns the buffer required for the specified shape. More...
 
 ~Tensor ()
 
 ORT_DISALLOW_COPY_AND_ASSIGNMENT (Tensor)
 
 Tensor (Tensor &&other) noexcept
 
Tensor & operator= (Tensor &&other) noexcept
 
MLDataType DataType () const
 
int32_t GetElementType () const
 
bool IsDataTypeString () const
 
template<class T >
bool IsDataType () const
 
const TensorShape & Shape () const noexcept
 
const OrtMemoryInfo & Location () const
 
template<typename T >
T * MutableData ()
 
template<typename T >
gsl::span< T > MutableDataAsSpan ()
 
template<typename T >
const T * Data () const
 
template<typename T >
gsl::span< const T > DataAsSpan () const
 
void * MutableDataRaw (MLDataType type)
 
const void * DataRaw (MLDataType type) const
 
void * MutableDataRaw () noexcept
 
const void * DataRaw () const noexcept
 
bool OwnsBuffer () const noexcept
 
void Reshape (const TensorShape &new_shape)
 
ptrdiff_t ByteOffset () const
 
void SetByteOffset (ptrdiff_t byte_offset)
 
int64_t NumStorageElements () const
 The number of Tensor "storage" elements. A single storage element may contain multiple sub-elements for sub-byte data types (e.g., int4). More...
 
size_t SizeInBytes () const
 

Static Public Member Functions

static void InitOrtValue (MLDataType elt_type, const TensorShape &shape, void *p_data, const OrtMemoryInfo &location, OrtValue &ort_value, ptrdiff_t offset=0, gsl::span< const int64_t > strides={})
 Creates an instance of Tensor on the heap and initializes OrtValue with it. More...
 
static void InitOrtValue (MLDataType elt_type, const TensorShape &shape, void *p_data, std::shared_ptr< IAllocator > allocator, OrtValue &ort_value, ptrdiff_t offset=0, gsl::span< const int64_t > strides={})
 Creates an instance of Tensor on the heap which will take over ownership of the pre-allocated buffer. More...
 
static void InitOrtValue (MLDataType elt_type, const TensorShape &shape, std::shared_ptr< IAllocator > allocator, OrtValue &ort_value)
 Creates an instance of Tensor on the heap and initializes OrtValue with it. The Tensor instance will allocate and own the data required for shape. More...
 
static void InitOrtValue (Tensor &&tensor, OrtValue &ort_value)
 Initializes OrtValue with an existing Tensor. More...
 
static size_t CalculateTensorStorageSize (MLDataType elt_type, const TensorShape &shape)
 Calculate the required storage for the tensor. More...
 
static Status CalculateTensorStorageSize (MLDataType elt_type, const TensorShape &shape, size_t alignment, size_t &storage_size)
 Calculate the required storage for the tensor. More...
 

Detailed Description

Definition at line 39 of file tensor.h.

Constructor & Destructor Documentation

onnxruntime::Tensor::Tensor ( )
default
onnxruntime::Tensor::Tensor ( MLDataType  elt_type,
const TensorShape &  shape,
void *  p_data,
const OrtMemoryInfo &  location,
ptrdiff_t  offset = 0,
gsl::span< const int64_t >  strides = {} 
)

Create a tensor with the given type, shape, pre-allocated memory and allocator info. This function does not check whether the pre-allocated buffer (p_data) has enough room for the shape.

Parameters
    elt_type    Data type of the tensor elements.
    shape       Shape of the tensor.
    p_data      A pre-allocated buffer. Can be NULL if the shape is empty. The Tensor does not own the data and will not delete it.
    location    Memory info for the location of p_data.
    offset      Offset in bytes to the start of the Tensor within p_data.
    strides     Strides span. Can be empty if the tensor is contiguous.
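For illustration, here is a minimal sketch of wrapping a caller-owned buffer with this constructor. It assumes a build inside the onnxruntime source tree; the header paths, CPUAllocator, IAllocator::Info() and DataTypeImpl::GetType<T>() are assumptions and do not come from this page.

    #include <cstdint>
    #include <memory>
    #include <vector>
    #include "core/framework/allocator.h"   // assumed header for CPUAllocator / IAllocator
    #include "core/framework/tensor.h"      // assumed header for onnxruntime::Tensor

    void WrapExistingBuffer() {
      using namespace onnxruntime;
      std::vector<float> buffer(6, 0.0f);                     // caller-owned storage for 2x3 floats
      std::vector<int64_t> dims{2, 3};
      TensorShape shape(dims);
      auto cpu_allocator = std::make_shared<CPUAllocator>();  // used only to obtain an OrtMemoryInfo
      // The Tensor merely references `buffer` and will not free it, so `buffer` must outlive `t`.
      Tensor t(DataTypeImpl::GetType<float>(), shape, buffer.data(), cpu_allocator->Info());
    }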
onnxruntime::Tensor::Tensor ( MLDataType  elt_type,
const TensorShape &  shape,
void *  p_data,
std::shared_ptr< IAllocator >  deleter,
ptrdiff_t  offset = 0,
gsl::span< const int64_t >  strides = {} 
)

Create a tensor with the given type, shape, pre-allocated memory and an allocator that will be used to free the pre-allocated memory. The Tensor takes over ownership of p_data. This function does not check whether the pre-allocated buffer (p_data) has enough room for the shape.

Parameters
    elt_type    Data type of the tensor elements.
    shape       Shape of the tensor.
    p_data      A pre-allocated buffer. Can be NULL if the shape is empty. The Tensor owns the memory and will delete it when the tensor instance is destructed.
    deleter     Allocator used to free the pre-allocated memory.
    offset      Offset in bytes to the start of the Tensor within p_data.
    strides     Strides span. Can be empty if the tensor is contiguous.
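A hedged sketch of handing a pre-allocated buffer to the Tensor together with the allocator that created it (same assumed includes as the sketch above; CPUAllocator and IAllocator::Alloc are assumptions):

    void TakeOwnership() {
      using namespace onnxruntime;
      auto allocator = std::make_shared<CPUAllocator>();        // assumed CPU allocator
      std::vector<int64_t> dims{4};
      TensorShape shape(dims);
      void* p = allocator->Alloc(shape.Size() * sizeof(float)); // pre-allocate enough room for the shape
      // The Tensor takes ownership of `p` and frees it through `allocator` when destructed.
      Tensor t(DataTypeImpl::GetType<float>(), shape, p, allocator);
    }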
onnxruntime::Tensor::Tensor ( MLDataType  elt_type,
const TensorShape &  shape,
std::shared_ptr< IAllocator >  allocator 
)

Create a Tensor that allocates and owns the buffer required for the specified shape.

Parameters
    elt_type     Data type of the tensor elements.
    shape        Tensor shape.
    allocator    Allocator to use to create and free the buffer.
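A minimal sketch of the allocate-and-own constructor, under the same assumptions as the sketches above:

    void AllocateAndOwn() {
      using namespace onnxruntime;
      std::vector<int64_t> dims{2, 3};
      Tensor t(DataTypeImpl::GetType<float>(), TensorShape(dims),
               std::make_shared<CPUAllocator>());       // the Tensor allocates and frees the buffer
      float* data = t.MutableData<float>();             // 6 writable floats
      for (int64_t i = 0; i < t.Shape().Size(); ++i) data[i] = 0.0f;
    }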
onnxruntime::Tensor::~Tensor ( )
onnxruntime::Tensor::Tensor ( Tensor &&  other)
noexcept

Member Function Documentation

ptrdiff_t onnxruntime::Tensor::ByteOffset ( ) const
inline

Get the byte offset with respect to p_data.

Warning
This is a temporary solution for reusing a buffer that is bigger than needed.
Use with caution: make sure you do a boundary check before calling this method (see view.cc).

Definition at line 273 of file tensor.h.

static size_t onnxruntime::Tensor::CalculateTensorStorageSize ( MLDataType  elt_type,
const TensorShape &  shape 
)
static

Calculate the required storage for the tensor.

Parameters
    elt_type    Data type of the tensor elements.
    shape       Tensor shape.
Returns
    Bytes required.
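For example (a sketch under the same assumptions as the constructor sketches above), the storage required for a small float tensor:

    std::vector<int64_t> dims{2, 3};
    size_t bytes = onnxruntime::Tensor::CalculateTensorStorageSize(
        onnxruntime::DataTypeImpl::GetType<float>(), onnxruntime::TensorShape(dims));
    // 6 float elements are expected to require 24 bytes.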
static Status onnxruntime::Tensor::CalculateTensorStorageSize ( MLDataType  elt_type,
const TensorShape &  shape,
size_t  alignment,
size_t &  storage_size 
)
static

Calculate the required storage for the tensor.

Parameters
    elt_type        Data type of the tensor elements.
    shape           Tensor shape.
    alignment       Power-of-2 alignment to include in the calculation. Bumps the result up to the nearest multiple of alignment. Set to 0 to ignore.
    storage_size    The resulting storage size.
Returns
    Status indicating success or failure.
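A sketch of the Status-returning overload with an alignment request, following the parameter descriptions above:

    std::vector<int64_t> dims{10};
    size_t storage_size = 0;
    const auto status = onnxruntime::Tensor::CalculateTensorStorageSize(
        onnxruntime::DataTypeImpl::GetType<uint8_t>(), onnxruntime::TensorShape(dims),
        /*alignment*/ 64, storage_size);
    if (status.IsOK()) {
      // 10 bytes bumped up to the nearest multiple of 64, i.e. storage_size == 64.
    }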
template<typename T >
const T* onnxruntime::Tensor::Data ( ) const
inline

Definition at line 218 of file tensor.h.

template<typename T >
gsl::span<const T> onnxruntime::Tensor::DataAsSpan ( ) const
inline

Definition at line 226 of file tensor.h.
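A sketch of typed, read-only access through Data<T>() and DataAsSpan<T>(); the tensor passed in is assumed to hold floats:

    float Sum(const onnxruntime::Tensor& t) {
      const float* p = t.Data<float>();                        // typed read-only pointer
      gsl::span<const float> values = t.DataAsSpan<float>();   // same buffer as a bounds-aware span
      float sum = 0.0f;
      for (float v : values) sum += v;
      (void)p;
      return sum;
    }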

const void* onnxruntime::Tensor::DataRaw ( MLDataType  type) const
inline

Definition at line 239 of file tensor.h.

const void* onnxruntime::Tensor::DataRaw ( ) const
inline noexcept

Definition at line 248 of file tensor.h.

MLDataType onnxruntime::Tensor::DataType ( ) const
inline

Returns the data type.

Definition at line 162 of file tensor.h.

int32_t onnxruntime::Tensor::GetElementType ( ) const
inline

Returns the data type enum constant.

Remarks
Use utils::ToTensorProtoElementType<T> for comparison.

Definition at line 168 of file tensor.h.
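Following the remark above, a hedged sketch of comparing the element type constant; the utils::ToTensorProtoElementType<T> helper comes from the remark, while its header path is an assumption:

    #include "core/framework/data_types_internal.h"   // assumed header for utils::ToTensorProtoElementType

    bool IsFloatTensor(const onnxruntime::Tensor& t) {
      return t.GetElementType() == onnxruntime::utils::ToTensorProtoElementType<float>();
    }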

static void onnxruntime::Tensor::InitOrtValue ( MLDataType  elt_type,
const TensorShape &  shape,
void *  p_data,
const OrtMemoryInfo &  location,
OrtValue &  ort_value,
ptrdiff_t  offset = 0,
gsl::span< const int64_t >  strides = {} 
)
static

Creates an instance of Tensor on the heap and initializes OrtValue with it.

Parameters
    elt_type     Data type of the tensor elements.
    shape        Tensor shape.
    p_data       Tensor data.
    location     Memory info for the location of p_data.
    ort_value    OrtValue to populate with the Tensor.
    offset       Optional offset if the Tensor refers to a subset of p_data.
    strides      Optional strides if the Tensor refers to a subset of p_data.
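A minimal sketch of wrapping caller-owned data in an OrtValue via this overload; OrtValue::Get<Tensor>() and the header paths are assumptions:

    #include "core/framework/ort_value.h"   // assumed header for OrtValue

    void WrapInOrtValue(std::vector<float>& buffer) {   // buffer holds at least 4 floats, caller-owned
      using namespace onnxruntime;
      auto cpu_allocator = std::make_shared<CPUAllocator>();
      std::vector<int64_t> dims{4};
      OrtValue value;
      Tensor::InitOrtValue(DataTypeImpl::GetType<float>(), TensorShape(dims),
                           buffer.data(), cpu_allocator->Info(), value);
      const Tensor& t = value.Get<Tensor>();   // the OrtValue holds a non-owning Tensor view
      (void)t;                                 // `buffer` must outlive `value`
    }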
static void onnxruntime::Tensor::InitOrtValue ( MLDataType  elt_type,
const TensorShape &  shape,
void *  p_data,
std::shared_ptr< IAllocator >  allocator,
OrtValue &  ort_value,
ptrdiff_t  offset = 0,
gsl::span< const int64_t >  strides = {} 
)
static

Creates an instance of Tensor on the heap which will take over ownership of the pre-allocated buffer.

Parameters
    elt_type     Data type of the tensor elements.
    shape        Tensor shape.
    p_data       Tensor data.
    allocator    Allocator that was used to create p_data and will be used to free it.
    ort_value    OrtValue to populate with the Tensor.
    offset       Optional offset if the Tensor refers to a subset of p_data.
    strides      Optional strides if the Tensor refers to a subset of p_data.
static void onnxruntime::Tensor::InitOrtValue ( MLDataType  elt_type,
const TensorShape &  shape,
std::shared_ptr< IAllocator >  allocator,
OrtValue &  ort_value 
)
static

Creates an instance of Tensor on the heap and initializes OrtValue with it. The Tensor instance will allocate and own the data required for shape.

Parameters
    elt_type     Data type of the tensor elements.
    shape        Tensor shape.
    allocator    Allocator used to allocate and free the Tensor's buffer.
    ort_value    OrtValue to populate with the Tensor.
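A sketch of the allocating overload followed by writing through the resulting OrtValue; OrtValue::GetMutable<Tensor>() is an assumption:

    void MakeOwnedOrtValue() {
      using namespace onnxruntime;
      std::vector<int64_t> dims{3};
      OrtValue value;
      Tensor::InitOrtValue(DataTypeImpl::GetType<int64_t>(), TensorShape(dims),
                           std::make_shared<CPUAllocator>(), value);
      Tensor* t = value.GetMutable<Tensor>();
      t->MutableData<int64_t>()[0] = 42;       // the buffer is owned by the Tensor inside `value`
    }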
static void onnxruntime::Tensor::InitOrtValue ( Tensor &&  tensor,
OrtValue &  ort_value 
)
static

Initializes OrtValue with an existing Tensor.

Parameters
    tensor       Tensor.
    ort_value    OrtValue to populate with the Tensor.
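A sketch of moving an already constructed Tensor into an OrtValue, under the same assumptions as above:

    void MoveIntoOrtValue() {
      using namespace onnxruntime;
      std::vector<int64_t> dims{2, 2};
      Tensor t(DataTypeImpl::GetType<float>(), TensorShape(dims),
               std::make_shared<CPUAllocator>());
      OrtValue value;
      Tensor::InitOrtValue(std::move(t), value);   // `t` is moved-from; use `value` from here on
    }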
template<class T >
bool onnxruntime::Tensor::IsDataType ( ) const
inline

Definition at line 180 of file tensor.h.

bool onnxruntime::Tensor::IsDataTypeString ( ) const
inline

Definition at line 174 of file tensor.h.

const OrtMemoryInfo& onnxruntime::Tensor::Location ( ) const
inline

Returns the location of the tensor's memory.

Definition at line 192 of file tensor.h.

template<typename T >
T* onnxruntime::Tensor::MutableData ( )
inline

May return nullptr if tensor size is zero

Definition at line 198 of file tensor.h.

template<typename T >
gsl::span<T> onnxruntime::Tensor::MutableDataAsSpan ( )
inline

May return nullptr if tensor size is zero

Definition at line 209 of file tensor.h.

void* onnxruntime::Tensor::MutableDataRaw ( MLDataType  type)
inline

Definition at line 234 of file tensor.h.

void* onnxruntime::Tensor::MutableDataRaw ( )
inline noexcept

Definition at line 244 of file tensor.h.

int64_t onnxruntime::Tensor::NumStorageElements ( ) const

The number of Tensor "storage" elements. A single storage element may contain multiple sub-elements for sub-byte data types (e.g., int4).

For element types smaller than 1 byte (e.g., int4), a single storage element stores multiple sub-byte elements. Example: Tensor<int4> of shape (4,) has 2 storage elements.

For element types >= 1 byte, this function returns the product of the shape. Example: Tensor<int8> of shape (4,) has 4 storage elements.

Returns
Number of tensor storage elements
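To illustrate the difference, a sketch under the earlier assumptions; the int4 case appears only in a comment because constructing packed sub-byte tensors is outside the scope of this page:

    #include <cassert>

    void StorageElementCounts() {
      using namespace onnxruntime;
      std::vector<int64_t> dims{4};
      Tensor t8(DataTypeImpl::GetType<int8_t>(), TensorShape(dims),
                std::make_shared<CPUAllocator>());
      // For element types of one byte or more, the two counts agree:
      assert(t8.NumStorageElements() == t8.Shape().Size());   // both are 4
      // For an int4 tensor of shape (4,), NumStorageElements() would be 2,
      // since two 4-bit values are packed into each storage element.
    }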
Tensor& onnxruntime::Tensor::operator= ( Tensor &&  other)
noexcept
onnxruntime::Tensor::ORT_DISALLOW_COPY_AND_ASSIGNMENT ( Tensor  )
bool onnxruntime::Tensor::OwnsBuffer ( ) const
inline noexcept

Definition at line 252 of file tensor.h.

void onnxruntime::Tensor::Reshape ( const TensorShape new_shape)
inline

Resizes the tensor without touching the underlying storage. This requires the total size of the tensor to remain constant.

Warning
This function is NOT thread-safe.

Definition at line 261 of file tensor.h.
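A sketch of an in-place reshape that preserves the element count, under the same assumptions as the earlier sketches:

    void ReshapeInPlace() {
      using namespace onnxruntime;
      std::vector<int64_t> dims{2, 3};
      Tensor t(DataTypeImpl::GetType<float>(), TensorShape(dims),
               std::make_shared<CPUAllocator>());
      std::vector<int64_t> new_dims{3, 2};
      t.Reshape(TensorShape(new_dims));   // ok: 6 elements before and after, storage untouched
      // Reshaping to {4, 2} would be invalid: 8 != 6 elements.
    }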

void onnxruntime::Tensor::SetByteOffset ( ptrdiff_t  byte_offset)
inline

Set the byte offset with respect to p_data.

Warning
This is a temporary solution for reusing a buffer that is bigger than needed.

Definition at line 281 of file tensor.h.

const TensorShape& onnxruntime::Tensor::Shape ( ) const
inline noexcept

Returns the shape of the tensor.

Definition at line 187 of file tensor.h.

size_t onnxruntime::Tensor::SizeInBytes ( ) const

The number of bytes of data.


The documentation for this class was generated from the following file: tensor.h