
Module data_structure

Types

CpuStorage[T] = object
  Fdata*: seq[T]
Opaque data storage for Tensors. Currently implemented as a seq with reference semantics (shallow copy on assignment). It may change in the future to a custom memory-managed and 64-bit-aligned solution.
Warning ⚠:
Do not use Fdata directly; direct access will be removed in 0.4.0.
Tensor[T] = object
  shape*: MetadataArray
  strides*: MetadataArray
  offset*: int
  storage*: CpuStorage[T]
Tensor data structure stored on Cpu
  • shape: Dimensions of the tensor
  • strides: Numbers of items to skip to get the next item along a dimension.
  • offset: Offset to get the first item of the tensor. Note: offset can be negative, in particular for slices.
  • storage: An opaque data storage for the tensor

Fields are public so that external libraries can easily construct a Tensor. You can use .data to access the opaque data storage.

Warning ⚠:
Assignment `var a = b` does not copy the data. Data modification on one tensor will be reflected on the other. However, modification of metadata (shape, strides or offset) will not affect the other tensor. Explicit copies can be made with clone: `var a = b.clone`
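For illustration, a minimal sketch of the shallow-copy behaviour described above (assuming the usual `toTensor` constructor and `[]=` mutable indexing from Arraymancer):

```nim
import arraymancer

var a = [[1, 2], [3, 4]].toTensor()
var b = a          # no data copy: both tensors share the same storage
b[0, 0] = 100      # the write is visible through `a` as well
echo a[0, 0]       # 100

var c = a.clone()  # explicit deep copy with independent storage
c[0, 0] = 7
echo a[0, 0]       # still 100, unaffected by the write to `c`
```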
CudaStorage[T] = object
  Flen*: int
  Fdata*: ptr UncheckedArray[T]
  Fref_tracking*: ref [ptr UncheckedArray[T]]

Opaque seq-like structure for storage on the Cuda backend.

The Nim garbage collector will automatically ask CUDA to release the GPU memory once the data becomes unused.

CudaTensor[T] = object
  shape*: MetadataArray
  strides*: MetadataArray
  offset*: int
  storage*: CudaStorage[T]
Tensor data structure stored on Nvidia GPU (Cuda)
  • shape: Dimensions of the CudaTensor
  • strides: Numbers of items to skip to get the next item along a dimension.
  • offset: Offset to get the first item of the CudaTensor. Note: offset can be negative, in particular for slices.
  • storage: An opaque data storage for the CudaTensor
Warning ⚠:
Assignment `var a = b` does not copy the data. Data modification on one CudaTensor will be reflected on the other. However, modification of metadata (shape, strides or offset) will not affect the other tensor. Explicit copies can be made with clone: `var a = b.clone`
ClStorage[T] = object
  Flen*: int
  Fdata*: ptr UncheckedArray[T]
  Fref_tracking*: ref [ptr UncheckedArray[T]]
Opaque seq-like structure for storage on the OpenCL backend.
ClTensor[T] = object
  shape*: MetadataArray
  strides*: MetadataArray
  offset*: int
  storage*: ClStorage[T]
Tensor data structure stored on OpenCL devices (CPU, GPU, FPGAs or other accelerators)
  • shape: Dimensions of the ClTensor
  • strides: Numbers of items to skip to get the next item along a dimension.
  • offset: Offset to get the first item of the ClTensor. Note: offset can be negative, in particular for slices.
  • storage: An opaque data storage for the ClTensor
Warning ⚠:
Assignment `var a = b` does not copy the data. Data modification on one ClTensor will be reflected on the other. However, modification of metadata (shape, strides or offset) will not affect the other tensor. Explicit copies can be made with clone: `var a = b.clone`
AnyTensor[T] = Tensor[T] or CudaTensor[T] or ClTensor[T]
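AnyTensor makes it possible to write backend-agnostic generic procedures. A minimal sketch (the `describe` helper below is hypothetical, shown only for illustration):

```nim
import arraymancer

# A generic helper accepting Tensor, CudaTensor or ClTensor alike
proc describe[T](t: AnyTensor[T]): string =
  "rank " & $t.rank & ", " & $t.size & " elements"

echo describe([[1, 2], [3, 4]].toTensor())   # rank 2, 4 elements
```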

Procs

proc data[T](t: Tensor[T]): seq[T] {.
inline, noSideEffect, noInit
.}
proc data[T](t: var Tensor[T]): var seq[T] {.
inline, noSideEffect, noInit
.}
proc data=[T](t: var Tensor[T]; s: seq[T]) {.
inline, noSideEffect
.}
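A minimal usage sketch of the `data` accessor (assuming `toTensor` from Arraymancer). Note that it exposes the whole underlying buffer, so for sliced or non-contiguous tensors the returned seq is not limited to the elements visible through the view:

```nim
import arraymancer

let t = [1, 2, 3, 4].toTensor()
# Read the underlying storage as a seq
echo t.data    # @[1, 2, 3, 4]
```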
proc rank(t: AnyTensor): int {.
noSideEffect, inline
.}
Input:
  • A tensor
Returns:
  • Its rank
  • 0 for a scalar (unfortunately, rank-0 tensors cannot be stored)
  • 1 for vector
  • 2 for matrices
  • N for N-dimension array
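A quick illustration (assuming `toTensor` from Arraymancer):

```nim
import arraymancer

let v = [1, 2, 3].toTensor()           # vector
let m = [[1, 2], [3, 4]].toTensor()    # matrix
echo v.rank   # 1
echo m.rank   # 2
```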
proc size(t: AnyTensor): int {.
noSideEffect, inline
.}
Input:
  • A tensor
Returns:
  • The total number of elements it contains
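For example (assuming `toTensor` from Arraymancer):

```nim
import arraymancer

let m = [[1, 2, 3], [4, 5, 6]].toTensor()   # shape [2, 3]
echo m.size   # 6, the total number of elements
```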
proc shape_to_strides(shape: MetadataArray; layout: OrderType = rowMajor;
                     result: var MetadataArray) {.
noSideEffect, raises: [], tags: []
.}
Input:
  • A shape (MetadataArray), for example [3,5] for a 3x5 matrix
  • Optionally rowMajor (C layout - default) or colMajor (Fortran)
Returns:
  • The strides in C or Fortran order corresponding to this shape and layout

 Arraymancer defaults to rowMajor. Temporarily, CudaTensors are colMajor by default.
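To illustrate the stride computation, a small sketch that inspects the strides of a freshly allocated tensor rather than calling shape_to_strides directly (assuming the `zeros` constructor from Arraymancer):

```nim
import arraymancer

# Row-major (C) layout: the innermost dimension is contiguous.
# For a 3x5 shape the strides are 5 (rows) and 1 (columns), counted in elements.
let a = zeros[float](3, 5)
echo a.shape
echo a.strides
```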

proc is_C_contiguous(t: AnyTensor): bool {.
noSideEffect, inline
.}
Check if the tensor follows C convention / is row major
proc is_F_contiguous(t: AnyTensor): bool {.
noSideEffect, inline
.}
Check if the tensor follows Fortran convention / is column major
proc isContiguous(t: AnyTensor): bool {.
noSideEffect, inline
.}
Check if the tensor is contiguous
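A brief sketch of how these checks behave (assuming `zeros` and `transpose` from Arraymancer):

```nim
import arraymancer

let a = zeros[float](3, 5)     # freshly allocated, row-major
echo a.is_C_contiguous         # true
echo a.is_F_contiguous         # false

let b = a.transpose            # transposing only swaps shape and strides
echo b.is_C_contiguous         # false
echo b.is_F_contiguous         # true
echo b.isContiguous            # true: contiguous in either C or Fortran order
```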
proc get_data_ptr[T](t: AnyTensor[T]): ptr T {.
noSideEffect, inline
.}
Input:
  • A tensor
Returns:
  • A pointer to the real start of its data (no offset)
proc get_offset_ptr[T](t: AnyTensor[T]): ptr T {.
noSideEffect, inline
.}
Input:
  • A tensor
Returns:
  • A pointer to the offset start of its data
proc dataArray[T](t: Tensor[T]): ptr UncheckedArray[T] {.
noSideEffect, inline
.}
Input:
  • A tensor
Returns:
  • A pointer to the offset start of the data. Return value supports array indexing.
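A minimal interop-oriented sketch of these low-level accessors, e.g. for handing data to a C or BLAS routine (assuming `toTensor` from Arraymancer):

```nim
import arraymancer

let t = [[1.0, 2.0], [3.0, 4.0]].toTensor()

let p = t.get_data_ptr     # pointer to the real start of the buffer (no offset)
let q = t.get_offset_ptr   # pointer to the first element of this view (with offset)
echo p[], " ", q[]         # both print 1.0 for this freshly created, zero-offset tensor

let arr = t.dataArray      # same start as get_offset_ptr, but supports array indexing
echo arr[1]                # 2.0
```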