Arraymancer Technical reference

shapeshifting

Procs

proc transpose(t: Tensor): Tensor {...}{.noInit, noSideEffect, inline.}

Transpose a Tensor.

For an N-d tensor with axes (0, 1, 2, ..., n-1), the resulting tensor has its axes reversed: (n-1, ..., 2, 1, 0).

Data is not copied or modified, only metadata is modified.
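A minimal sketch of typical usage (assuming `import arraymancer`):

```nim
import arraymancer

let t = [[1, 2, 3],
         [4, 5, 6]].toTensor     # shape [2, 3]
let tt = t.transpose             # shape [3, 2]; only metadata changed
doAssert tt.shape == [3, 2]
doAssert tt[0, 1] == 4           # tt[i, j] == t[j, i]
```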

proc asContiguous[T](t: Tensor[T]; layout: OrderType = rowMajor; force: bool = false): Tensor[
    T] {...}{.noInit.}

Transform a tensor with general striding to a Tensor with contiguous layout.

By default the resulting tensor is rowMajor.

If the tensor is already contiguous (C-major or F-major), its layout is kept. The "force" parameter can force re-ordering to a specific layout.

Result is always a fully packed tensor even if the input is a contiguous slice.
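As a brief sketch (assuming `import arraymancer`), a transposed view can be repacked into a fresh row-major buffer:

```nim
import arraymancer

let t = [[1, 2], [3, 4]].toTensor
let v = t.transpose                           # a column-major view, no copy
let packed = v.asContiguous(rowMajor, force = true)
# `packed` holds the same values in a freshly packed row-major buffer
doAssert packed.shape == [2, 2]
doAssert packed[0, 1] == 3                    # value from the transposed view
```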

proc reshape(t: Tensor; new_shape: varargs[int]): Tensor {...}{.noInit.}
Reshape a tensor. If possible, no data copy is done and the returned tensor shares data with the input. If the input is not contiguous, sharing is not possible and a copy is made.
Input:
  • a tensor
  • a new shape. Number of elements must be the same
Returns:
  • a tensor with the same data but reshaped.
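A short illustration (assuming `import arraymancer`):

```nim
import arraymancer
import std/sequtils

let t = toSeq(1..6).toTensor         # shape [6]
let m = t.reshape(2, 3)              # shape [2, 3]; shares data since input is contiguous
doAssert m.shape == [2, 3]
doAssert m[1, 0] == 4                # row-major order: second row starts at the 4th element
```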
proc reshape(t: Tensor; new_shape: MetadataArray): Tensor {...}{.noInit.}
Reshape a tensor. If possible, no data copy is done and the returned tensor shares data with the input. If the input is not contiguous, sharing is not possible and a copy is made.
Input:
  • a tensor
  • a new shape. Number of elements must be the same
Returns:
  • a tensor with the same data but reshaped.
proc broadcast[T](t: Tensor[T]; shape: varargs[int]): Tensor[T] {...}{.noInit, noSideEffect.}

Explicitly broadcast a tensor to the specified shape.

Dimension(s) of size 1 can be expanded to arbitrary size by replicating values along that dimension.

Warning ⚠:
A broadcasted tensor should not be modified and only used for computation.
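A minimal sketch (assuming `import arraymancer`) of expanding a size-1 dimension:

```nim
import arraymancer

let row = [[1, 2, 3]].toTensor       # shape [1, 3]
let b = row.broadcast(4, 3)          # shape [4, 3]; rows replicated via 0-strides
doAssert b.shape == [4, 3]
doAssert b[3, 2] == 3                # every row repeats the original row
```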
proc broadcast[T](t: Tensor[T]; shape: MetadataArray): Tensor[T] {...}{.noInit, noSideEffect.}

Explicitly broadcast a tensor to the specified shape.

Dimension(s) of size 1 can be expanded to arbitrary size by replicating values along that dimension.

Warning ⚠:
A broadcasted tensor should not be modified and only used for computation.
proc broadcast[T: SomeNumber](val: T; shape: varargs[int]): Tensor[T] {...}{.noInit,
    noSideEffect.}
Broadcast a number
Input:
  • a number to be broadcasted
  • a tensor shape that will be broadcasted to
Returns:
  • a tensor of the broadcasted shape where every element holds the broadcasted value

The broadcasting is made using tensor data of size 1 and 0 strides, i.e. the operation is memory efficient.

Warning ⚠:
A broadcasted tensor should not be modified and only used for computation. Modifying any value from this broadcasted tensor will change all its values.
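A short sketch (assuming `import arraymancer`) of broadcasting a scalar to a shape:

```nim
import arraymancer

let twos = 2.broadcast(2, 3)     # a 2x3 tensor backed by a single element (0-strides)
doAssert twos.shape == [2, 3]
doAssert twos[1, 2] == 2
```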
proc broadcast[T: SomeNumber](val: T; shape: MetadataArray): Tensor[T] {...}{.noInit,
    noSideEffect.}
Broadcast a number
Input:
  • a number to be broadcasted
  • a tensor shape that will be broadcasted to
Returns:
  • a tensor of the broadcasted shape where every element holds the broadcasted value

The broadcasting is made using tensor data of size 1 and 0 strides, i.e. the operation is memory efficient.

Warning ⚠:
A broadcasted tensor should not be modified and only used for computation. Modifying any value from this broadcasted tensor will change all its values.
proc broadcast2[T](a, b: Tensor[T]): tuple[a, b: Tensor[T]] {...}{.noSideEffect, noInit.}

Broadcast 2 tensors so they have compatible shapes for element-wise computations.

Tensors in the tuple can be accessed with output.a and output.b

The returned broadcasted Tensors share the underlying data with the input.

Dimension(s) of size 1 can be expanded to arbitrary size by replicating values along that dimension.

Warning ⚠:
This is a no-copy operation, data is shared with the input. This proc does not guarantee that a let value is immutable. A broadcasted tensor should not be modified and only used for computation.
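An illustrative sketch (assuming `import arraymancer`) of making two tensors shape-compatible:

```nim
import arraymancer

let a = [[1, 2, 3]].toTensor          # shape [1, 3]
let b = [[10], [20]].toTensor         # shape [2, 1]
let bc2 = broadcast2(a, b)            # both broadcast to shape [2, 3]
doAssert bc2.a.shape == [2, 3]
doAssert bc2.b.shape == [2, 3]
doAssert (bc2.a + bc2.b)[1, 2] == 23  # 3 + 20
```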
proc permute(t: Tensor; dims: varargs[int]): Tensor {...}{.noInit, noSideEffect.}
Permute the dimensions of a tensor
Input:
  • a tensor
  • the new dimension order
Returns:
  • a tensor with re-ordered dimensions
Usage:
a.permute(0,2,1) # dim 0 stays at 0, dim 1 becomes dim 2 and dim 2 becomes dim 1
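The usage line above can be sketched concretely (assuming `import arraymancer`):

```nim
import arraymancer

let t = newTensor[int](2, 3, 4)
let p = t.permute(0, 2, 1)       # swap the last two axes
doAssert p.shape == [2, 4, 3]
```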
proc concat[T](t_list: varargs[Tensor[T]]; axis: int): Tensor[T] {...}{.noInit.}
Concatenate tensors
Input:
  • Tensors
  • An axis (dimension)
Returns:
  • a tensor
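A brief sketch (assuming `import arraymancer`) of concatenating along an existing axis:

```nim
import arraymancer

let a = [[1, 2], [3, 4]].toTensor    # shape [2, 2]
let b = [[5, 6]].toTensor            # shape [1, 2]
let c = concat(a, b, axis = 0)       # shape [3, 2]
doAssert c.shape == [3, 2]
doAssert c[2, 1] == 6
```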
proc stack[T](tensors: varargs[Tensor[T]]; axis: Natural = 0): Tensor[T] {...}{.noInit.}
Join a sequence of tensors along a new axis into a new tensor.
Input:
  • tensors
  • an axis (dimension)
Returns:
  • a new stacked tensor along the new axis
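A short sketch (assuming `import arraymancer`); unlike concat, stack introduces a new axis:

```nim
import arraymancer

let a = [1, 2, 3].toTensor
let b = [4, 5, 6].toTensor
doAssert stack(a, b).shape == [2, 3]            # new axis 0
doAssert stack(a, b, axis = 1).shape == [3, 2]  # new axis 1
```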

Funcs

func squeeze(t: AnyTensor): AnyTensor {...}{.noInit.}
Squeeze a tensor. For example a tensor of shape [4,1,3] becomes [4,3]
Input:
  • a tensor
Returns:
  • a tensor with singleton dimensions collapsed
func squeeze(t: Tensor; axis: Natural): Tensor {...}{.noInit.}
Collapse the given axis. If its dimension is not 1, this does nothing.
Input:
  • a tensor
  • an axis (dimension)
Returns:
  • a tensor with that axis collapsed, if it was a singleton dimension
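A minimal sketch (assuming `import arraymancer`) covering both squeeze overloads:

```nim
import arraymancer

let t = newTensor[float](4, 1, 3)
doAssert t.squeeze.shape == [4, 3]        # all singleton axes removed
doAssert t.squeeze(1).shape == [4, 3]     # only axis 1 collapsed
doAssert t.squeeze(0).shape == [4, 1, 3]  # axis 0 is not singleton: no-op
```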
func unsqueeze(t: Tensor; axis: Natural): Tensor {...}{.noInit.}
Insert a new axis just before the given axis, increasing the tensor dimension (rank) by 1
Input:
  • a tensor
  • an axis (dimension)
Returns:
  • a tensor with that new axis
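A brief sketch (assuming `import arraymancer`):

```nim
import arraymancer

let t = [1, 2, 3].toTensor               # shape [3]
doAssert t.unsqueeze(0).shape == [1, 3]  # new leading axis
doAssert t.unsqueeze(1).shape == [3, 1]  # new trailing axis
```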
func split[T](t: Tensor[T]; chunk_size: Positive; axis: Natural): seq[Tensor[T]] {...}{.
    noInit.}
Split the tensor into chunks of size chunk_size along the specified axis. If the length of that axis is not divisible by chunk_size, the last chunk holds the remainder.
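A short sketch (assuming `import arraymancer`) of splitting with a remainder:

```nim
import arraymancer
import std/sequtils

let t = toSeq(1..7).toTensor          # shape [7]
let parts = t.split(3, axis = 0)      # chunks of size 3, 3 and 1 (the remainder)
doAssert parts.len == 3
doAssert parts[2].shape == [1]
```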
func chunk[T](t: Tensor[T]; nb_chunks: Positive; axis: Natural): seq[Tensor[T]] {...}{.noInit.}

Splits a Tensor into n chunks along the specified axis.

If the tensor cannot be split evenly, with la == length of the axis and n == nb_chunks, it returns la mod n subtensors of size (la div n) + 1 and the rest of size la div n.

This is consistent with NumPy's array_split.
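A sketch of the uneven case described above (assuming `import arraymancer`):

```nim
import arraymancer
import std/sequtils

let t = toSeq(1..7).toTensor          # la = 7
let parts = t.chunk(3, axis = 0)      # n = 3: 7 mod 3 = 1 chunk of size 3, then size 2
doAssert parts.len == 3
doAssert parts[0].shape == [3]
doAssert parts[1].shape == [2]
doAssert parts[2].shape == [2]
```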

func index_select[T; Idx: byte or char or SomeNumber](t: Tensor[T]; axis: int;
    indices: Tensor[Idx]): Tensor[T]
Take elements from a tensor along an axis using the indices tensor. This is equivalent to NumPy's take. The result does not share storage with the input; data is copied. The indices tensor may hold integers, bytes or chars.
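A minimal sketch (assuming `import arraymancer`) of selecting rows by index:

```nim
import arraymancer

let t = [[1, 2], [3, 4], [5, 6]].toTensor
let idx = [2, 0].toTensor
let sel = t.index_select(0, idx)      # rows 2 and 0, copied into a new tensor
doAssert sel.shape == [2, 2]
doAssert sel[0, 0] == 5
doAssert sel[1, 0] == 1
```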

Templates

template bc(t: (Tensor | SomeNumber); shape: varargs[int]): untyped
Alias for broadcast
template bc(t: (Tensor | SomeNumber); shape: MetadataArray): untyped
Alias for broadcast