
Module shapeshifting_cuda

Procs

proc transpose(t: CudaTensor): CudaTensor {.noSideEffect.}

Transpose a CudaTensor.

For an N-d Tensor with axes numbered (0, 1, 2, ..., n-1), the resulting tensor has its axes reversed: (n-1, ..., 2, 1, 0).

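A usage sketch (not from the module itself; it assumes a CUDA-enabled build of Arraymancer and the cuda/cpu host-device transfer procs from its CUDA backend):

  import arraymancer

  let t = [[1.0, 2.0, 3.0],
           [4.0, 5.0, 6.0]].toTensor.cuda  # shape [2, 3], on the GPU
  let tt = t.transpose                     # shape [3, 2], no data copied
  echo tt.cpu                              # copy back to the host to print
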
proc asContiguous[T: SomeReal](t: CudaTensor[T]; layout: OrderType = colMajor; force: bool = false): CudaTensor[T] {.noSideEffect.}

Transform a tensor with general striding into a Tensor with a contiguous layout.

By default a CudaTensor is colMajor (contrary to a CPU Tensor, which is rowMajor).

By default, nothing is done if the tensor is already contiguous (C major or F major); the "force" parameter can force re-ordering into the requested layout.

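For example, forcing a row-major copy of a (by default column-major) CudaTensor; a sketch under the same assumptions as above:

  import arraymancer

  let t = [[1.0, 2.0],
           [3.0, 4.0]].toTensor.cuda               # colMajor by default
  # t is already contiguous, so force = true is needed to change its layout:
  let r = t.asContiguous(rowMajor, force = true)   # now rowMajor
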
proc reshape(t: CudaTensor; new_shape: varargs[int]): CudaTensor

Reshape a CudaTensor without copy.

⚠ Reshaping without a copy is only possible on contiguous Tensors.

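For instance (illustrative sketch; a freshly transferred 1-d tensor is contiguous, so no copy is needed):

  import arraymancer

  let v = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0].toTensor.cuda  # shape [6]
  let m = v.reshape(2, 3)                    # shape [2, 3], shares v's data
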
proc broadcast(t: CudaTensor; shape: varargs[int]): CudaTensor {.noSideEffect.}

Explicitly broadcast a CudaTensor to the specified shape. The returned broadcasted CudaTensor shares the underlying data with the input.

Dimension(s) of size 1 can be expanded to arbitrary size by replicating values along that dimension.

Warning ⚠:
This is a no-copy operation; data is shared with the input. This proc does not guarantee that a let value is immutable. A broadcasted tensor should not be modified and should only be used for computation.
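A sketch of expanding a single row (same assumptions as above):

  import arraymancer

  let row = [[1.0, 2.0, 3.0]].toTensor.cuda  # shape [1, 3]
  let bc  = row.broadcast(4, 3)              # view of shape [4, 3], no copy
  # Do not mutate bc: all four "rows" alias the same memory.
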
proc broadcast(t: CudaTensor; shape: MetadataArray): CudaTensor {.noSideEffect.}

Explicitly broadcast a CudaTensor to the specified shape. The returned broadcasted CudaTensor shares the underlying data with the input.

Dimension(s) of size 1 can be expanded to arbitrary size by replicating values along that dimension.

Warning ⚠:
This is a no-copy operation; data is shared with the input. This proc does not guarantee that a let value is immutable. A broadcasted tensor should not be modified and should only be used for computation.
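This overload is handy for broadcasting to another tensor's shape, since shape is a MetadataArray. A sketch (zeros is Arraymancer's CPU constructor, transferred with cuda):

  import arraymancer

  let a = [[1.0], [2.0]].toTensor.cuda  # shape [2, 1]
  let b = zeros[float64](2, 5).cuda     # shape [2, 5]
  let abc = a.broadcast(b.shape)        # shape [2, 5], shares a's data
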
proc broadcast2[T](a, b: CudaTensor[T]): tuple[a, b: CudaTensor[T]] {.noSideEffect.}

Broadcast two tensors so they have compatible shapes for element-wise computations.

Tensors in the tuple can be accessed with output.a and output.b

The returned broadcasted Tensors share the underlying data with the input.

Dimension(s) of size 1 can be expanded to arbitrary size by replicating values along that dimension.

Warning ⚠:
This is a no-copy operation; data is shared with the input. This proc does not guarantee that a let value is immutable. A broadcasted tensor should not be modified and should only be used for computation.
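A sketch of aligning two tensors before an element-wise operation (it assumes the CUDA backend's element-wise + is available for the broadcasted operands):

  import arraymancer

  let u = [[1.0], [2.0]].toTensor.cuda        # shape [2, 1]
  let w = [[10.0, 20.0, 30.0]].toTensor.cuda  # shape [1, 3]
  let bc = broadcast2(u, w)                   # bc.a and bc.b are both [2, 3]
  let s = bc.a + bc.b                         # element-wise sum, shape [2, 3]
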
proc squeeze(t: CudaTensor; axis: int): CudaTensor {.noSideEffect.}
Collapse the given axis. If its dimension is not 1, it does nothing.
Input:
  • a CudaTensor
  • an axis (dimension)
Returns:
  • a CudaTensor with the given singleton dimension collapsed
Warning ⚠:
This is a no-copy operation; data is shared with the input. This proc does not guarantee that a let value is immutable.
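For example (a sketch, same assumptions as above):

  import arraymancer

  let s = [[1.0, 2.0, 3.0]].toTensor.cuda  # shape [1, 3]
  let a = s.squeeze(0)                     # shape [3]: axis 0 had dimension 1
  let b = s.squeeze(1)                     # shape [1, 3]: axis 1 is not 1, no-op
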
proc unsqueeze(t: CudaTensor; axis: int): CudaTensor {.noSideEffect.}
Insert a new axis just before the given axis, increasing the CudaTensor
dimension (rank) by 1
Returns:
  • a CudaTensor with that new axis
Warning ⚠:
This is a no-copy operation; data is shared with the input. This proc does not guarantee that a let value is immutable.
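Again a sketch, under the same assumptions:

  import arraymancer

  let v = [1.0, 2.0, 3.0].toTensor.cuda  # shape [3]
  let col = v.unsqueeze(1)               # shape [3, 1], a column vector
  let row = v.unsqueeze(0)               # shape [1, 3], a row vector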