Procs
proc append[T](t: Tensor[T]; values: Tensor[T]): Tensor[T] {.noinit.}
-
Create a copy of a rank-1 input tensor with values appended to its end.
Inputs:
- Rank-1 tensor
- Rank-1 tensor of extra values to append
Returns:
- A copy of the input tensor t with the extra values appended at the end.
Notes: Append does not occur in-place (a new tensor is allocated and filled). To concatenate more than two tensors you must use concat. Compared to numpy's append, this proc requires that you explicitly flatten the inputs if they are not rank-1 tensors. It also does not support the axis parameter; to append values along a specific axis, use concat instead.
Examples:
.. code:: nim
  echo append([1, 2, 3].toTensor, [4, 5, 6, 7].toTensor)
  # Tensor[system.int] of shape "[7]" on backend "Cpu"
  # 1 2 3 4 5 6 7

  echo append([1, 2, 3].toTensor, [[4, 5, 6], [7, 8, 9]].toTensor)
  # Error: unhandled exception: values.rank == 1 append only works
  # on rank-1 tensors but extra values tensor has rank 2 [AssertionDefect]

  echo append([1, 2, 3].toTensor, [[4, 5, 6], [7, 8, 9]].toTensor.flatten)
  # Tensor[system.int] of shape "[9]" on backend "Cpu"
  # 1 2 3 4 5 6 7 8 9
proc append[T](t: Tensor[T]; values: varargs[T]): Tensor[T] {.noinit.}
-
Create a copy of a rank-1 input tensor with one or more values appended to its end.
Inputs:
- Rank-1 tensor of type T
- An open array or a list of values of type T
Returns:
- A copy of the input tensor t with the extra values appended at the end.
Notes: Append does not occur in-place (a new tensor is allocated and filled). Compared to numpy's append, this proc requires that you explicitly flatten the input tensor if its rank is greater than 1. It also does not support the axis parameter. If you want to append values along a specific axis, you should use concat instead. Examples:
.. code:: nim
  # Append a single value
  echo append([1, 2, 3].toTensor, 4)
  # Tensor[system.int] of shape "[4]" on backend "Cpu"
  # 1 2 3 4

  # Append multiple values
  echo append([1, 2, 3].toTensor, 4, 5, 6, 7)
  # Tensor[system.int] of shape "[7]" on backend "Cpu"
  # 1 2 3 4 5 6 7

  # Append an openArray of values
  echo append([1, 2, 3].toTensor, [4, 5, 6, 7])
  # Tensor[system.int] of shape "[7]" on backend "Cpu"
  # 1 2 3 4 5 6 7

  # Only rank-1 tensors are supported
  echo append([[1, 2, 3], [4, 5, 6]].toTensor, 7, 8, 9)
  # Error: unhandled exception: t.rank == 1 append only works
  # on rank-1 tensors but first input tensor has rank 2 [AssertionDefect]
proc asContiguous[T](t: Tensor[T]; layout: OrderType = rowMajor; force: bool = false): Tensor[T] {.noinit.}
-
Transform a tensor with general striding to a Tensor with contiguous layout.
By default the resulting tensor will be rowMajor.
The layout is kept if the tensor is already contiguous (C major or F major). The "force" parameter can force re-ordering to a specific layout.
The result is always a fully packed tensor, even if the input is a contiguous slice.
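For illustration, a minimal sketch of the typical use case (assuming the standard arraymancer import; transpose and is_C_contiguous are the library's own helpers):

.. code:: nim
  import arraymancer

  # transpose swaps the strides, so the result is no longer C-contiguous
  let t = [[1, 2, 3], [4, 5, 6]].toTensor.transpose
  # force = true repacks the data into a fresh row-major buffer
  let packed = t.asContiguous(rowMajor, force = true)
  echo packed.is_C_contiguous  # true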
proc broadcast[T: SomeNumber](val: T; shape: Metadata): Tensor[T] {.noinit, noSideEffect.}
-
Broadcast a number
Input:
- the number to broadcast
- the shape of the tensor to broadcast it to
Returns:
- a tensor of the given shape in which every element has the broadcast value
The broadcast is done using a data buffer of size 1 and strides of 0, i.e. the operation is memory efficient.
Warning ⚠: A broadcasted tensor should not be modified and should only be used for computation. Modifying any value of a broadcasted tensor will change all of its values.
proc broadcast[T: SomeNumber](val: T; shape: varargs[int]): Tensor[T] {.noinit.}
-
Broadcast a number
Input:
- the number to broadcast
- the shape of the tensor to broadcast it to
Returns:
- a tensor of the given shape in which every element has the broadcast value
The broadcast is done using a data buffer of size 1 and strides of 0, i.e. the operation is memory efficient.
Warning ⚠: A broadcasted tensor should not be modified and should only be used for computation. Modifying any value of a broadcasted tensor will change all of its values.
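As a sketch of the stride-0 mechanism described above (assuming the standard arraymancer import):

.. code:: nim
  import arraymancer

  # a 2x3 "view" backed by a single stored element and strides of 0
  let b = broadcast(7, 2, 3)
  echo b.shape   # [2, 3]
  echo b[1, 2]   # 7 -- every position reads the same backing value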
proc broadcast[T](t: Tensor[T]; shape: Metadata): Tensor[T] {.noinit, noSideEffect.}
-
Explicitly broadcast a tensor to the specified shape.
Dimension(s) of size 1 can be expanded to arbitrary size by replicating values along that dimension.
Warning ⚠: A broadcasted tensor should not be modified and only used for computation.
proc broadcast[T](t: Tensor[T]; shape: varargs[int]): Tensor[T] {.noinit, noSideEffect.}
-
Explicitly broadcast a tensor to the specified shape.
Dimension(s) of size 1 can be expanded to arbitrary size by replicating values along that dimension.
Warning ⚠: A broadcasted tensor should not be modified and only used for computation.
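A short example of expanding a size-1 dimension (assuming the standard arraymancer import):

.. code:: nim
  import arraymancer

  let row = [[1, 2, 3]].toTensor  # shape [1, 3]
  # the size-1 dimension is expanded to 2 by replicating values (no copy)
  echo row.broadcast(2, 3)
  # |1 2 3|
  # |1 2 3|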
proc broadcast2[T](a, b: Tensor[T]): tuple[a, b: Tensor[T]] {.noSideEffect, noinit.}
-
Broadcast 2 tensors so they have compatible shapes for element-wise computations.
Tensors in the tuple can be accessed with output.a and output.b
The returned broadcasted Tensors share the underlying data with the input.
Dimension(s) of size 1 can be expanded to arbitrary size by replicating values along that dimension.
Warning ⚠: This is a no-copy operation; data is shared with the input. This proc does not guarantee that a let value is immutable. A broadcasted tensor should not be modified and only used for computation.
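A sketch of the typical element-wise use case (assuming the standard arraymancer import):

.. code:: nim
  import arraymancer

  let col = [[1], [2]].toTensor       # shape [2, 1]
  let row = [[10, 20, 30]].toTensor   # shape [1, 3]
  # both outputs have the common shape [2, 3] and share data with the inputs
  let bc = broadcast2(col, row)
  echo bc.a + bc.b
  # |11 21 31|
  # |12 22 32|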
func chunk[T](t: Tensor[T]; nb_chunks: Positive; axis: Natural): seq[Tensor[T]] {.noinit.}
-
Splits a Tensor into n chunks along the specified axis.
If a tensor cannot be split evenly, then with la = the length along the axis and n = nb_chunks, the first la mod n subtensors have size (la div n) + 1 and the rest have size la div n.
This is consistent with numpy's array_split.
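A sketch of the uneven case (assuming the standard arraymancer import): with la = 7 and n = 3, the first 7 mod 3 = 1 chunk has size (7 div 3) + 1 = 3 and the rest have size 2:

.. code:: nim
  import arraymancer

  let t = arange(7)
  for c in t.chunk(3, 0):
    echo c
  # 0 1 2
  # 3 4
  # 5 6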
proc flatten(t: Tensor): Tensor {.noinit, inline.}
-
Flatten a tensor, returning a rank-1 tensor with the same data as the input.
This is the same as t.reshape([t.size.int]). Therefore, if possible no data copy is done and the returned tensor shares data with the input. If input is not contiguous, this is not possible and a copy will be made.
Input:
- a tensor
Returns:
- a rank-1 tensor with the same data as the input.
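For instance, under the same standard arraymancer import assumption:

.. code:: nim
  import arraymancer

  let t = [[1, 2, 3], [4, 5, 6]].toTensor  # shape [2, 3], contiguous
  # the input is contiguous, so no copy is made: the result is a
  # rank-1 view over the same 6 elements
  echo t.flatten  # 1 2 3 4 5 6 (shape [6])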
proc moveaxis(t: Tensor; initial: Natural; target: Natural): Tensor {.noinit.}
-
Move one of the axes of a tensor to a new position.
Input:
- a tensor
- the initial position of the axis to move
- the target position of the axis to move
Returns:
- a tensor with moved axes but sharing the same data
See also:
- permute
Usage:

.. code:: nim
  # move dim 0 to position 2, which makes
  # dim 1 become dim 0 and dim 2 become dim 1
  a.moveaxis(0, 2)

Notes: Call .clone() if you want to make a copy of the data; otherwise changes to the data of the returned tensor will affect the input tensor.
proc permute(t: Tensor; dims: varargs[int]): Tensor {.noinit, noSideEffect.}
-
Permute the dimensions of a tensor into a different order.
Input:
- a tensor
- the new dimension order
Returns:
- a tensor with re-ordered dimensions but sharing the same data
See also:
- moveaxis
Usage:

.. code:: nim
  # keep dim 0 at position 0 and swap dims 1 and 2
  a.permute(0, 2, 1)

Notes: Call .clone() if you want to make a copy of the data; otherwise changes to the data of the returned tensor will affect the input tensor.
proc reshape(t: Tensor; new_shape: Metadata): Tensor {.noinit.}
-
Reshape a tensor. If possible no data copy is done and the returned tensor shares data with the input. If input is not contiguous, this is not possible and a copy will be made.
Input:
- a tensor
- a new shape. The number of elements must be the same.
Returns:
- a tensor with the same data but reshaped.
proc reshape(t: Tensor; new_shape: varargs[int]): Tensor {.noinit.}
-
Reshape a tensor. If possible no data copy is done and the returned tensor shares data with the input. If input is not contiguous, this is not possible and a copy will be made.
Input:
- a tensor
- a new shape. The number of elements must be the same. Unlike numpy, dimensions cannot be -1 to infer their value; if that is what you need, use the reshape_infer proc instead.
Returns:
- a tensor with the same data but reshaped.
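A minimal example (assuming the standard arraymancer import):

.. code:: nim
  import arraymancer

  let t = arange(6)     # shape [6]
  # same 6 elements, now viewed as a 2x3 tensor (no copy: t is contiguous)
  echo t.reshape(2, 3)
  # |0 1 2|
  # |3 4 5|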
proc reshape_infer(t: Tensor; new_shape: varargs[int]): Tensor {.noinit.}
-
Reshape a tensor. If possible no data copy is done and the returned tensor shares data with the input. If input is not contiguous, this is not possible and a copy will be made.
Input:
- a tensor
- a new shape. The number of elements must be the same. The new shape can contain a -1 to infer the size of one (and only one) dimension.
Returns:
- a tensor with the same data but reshaped.
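A sketch of the inference behavior (assuming the standard arraymancer import):

.. code:: nim
  import arraymancer

  let t = arange(6)             # shape [6]
  # the -1 is inferred as 3, since 6 elements div 2 rows = 3 columns
  echo t.reshape_infer(2, -1)
  # |0 1 2|
  # |3 4 5|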
proc roll[T](t: Tensor[T]; shift: int): Tensor[T] {.noinit.}
-
Roll elements of tensor "globally" (i.e. across all axes).
This takes a tensor, flattens it, rolls the elements shift positions (taking the last shift elements of the flattened tensor and putting them at the beginning of the flattened tensor), and then reshapes the rolled tensor back to the original shape.
This is different from the version of this proc that accepts an axis, which rolls slices of the tensor taken along the selected axis.
Input:
- t: Input tensor.
- shift: Integer number of places by which elements are shifted.
Return:
- Output tensor, with the same shape as t.
Examples:
.. code:: nim
  let x = arange(5)
  echo x.roll(2)
  # Tensor[system.int] of shape "[5]" on backend "Cpu"
  # 3 4 0 1 2
  echo x.roll(-2)
  # Tensor[system.int] of shape "[5]" on backend "Cpu"
  # 2 3 4 0 1

  let x2 = arange(10).reshape(2, 5)
  echo x2
  # Tensor[system.int] of shape "[2, 5]" on backend "Cpu"
  # |0 1 2 3 4|
  # |5 6 7 8 9|
  echo roll(x2, 1)
  # Tensor[system.int] of shape "[2, 5]" on backend "Cpu"
  # |9 0 1 2 3|
  # |4 5 6 7 8|
  echo roll(x2, -1)
  # Tensor[system.int] of shape "[2, 5]" on backend "Cpu"
  # |1 2 3 4 5|
  # |6 7 8 9 0|
proc roll[T](t: Tensor[T]; shift: int; axis: Natural): Tensor[T] {.noinit.}
-
Roll slices of a tensor along a given axis.
Slices that roll beyond the last position are re-introduced at the first.
Note that calling this proc with a rank-1 tensor will simply check that axis == 0 and then call the (axis-less) version of this proc.
Input:
- t : Input tensor.
- shift : Integer number of places by which elements are shifted.
- axis : an axis (dimension).
Return:
- Output tensor, with the same shape as t.
Notes:
- numpy's roll also supports passing a list of shifts and axes, while this proc does not. However, you can achieve the same effect by calling roll multiple times in a row (i.e. np.roll(t, [1, 2], axis=[0, 1]) is equivalent to t.roll(1, axis=0).roll(2, axis=1), which is arguably clearer).
Examples:
.. code:: nim
  let x = arange(5)
  echo x.roll(2, axis = 0)
  # Tensor[system.int] of shape "[5]" on backend "Cpu"
  # 3 4 0 1 2
  echo x.roll(-2, axis = 0)
  # Tensor[system.int] of shape "[5]" on backend "Cpu"
  # 2 3 4 0 1

  let x2 = arange(10).reshape(2, 5)
  echo x2
  # Tensor[system.int] of shape "[2, 5]" on backend "Cpu"
  # |0 1 2 3 4|
  # |5 6 7 8 9|
  echo roll(x2, 1, axis = 0)
  # |5 6 7 8 9|
  # |0 1 2 3 4|
  echo roll(x2, -1, axis = 0)
  # |5 6 7 8 9|
  # |0 1 2 3 4|
  echo roll(x2, 1, axis = 1)
  # |4 0 1 2 3|
  # |9 5 6 7 8|
  echo roll(x2, -1, axis = 1)
  # |1 2 3 4 0|
  # |6 7 8 9 5|
  echo x2.roll(1, axis = 0).roll(2, axis = 1)
  # |8 9 5 6 7|
  # |3 4 0 1 2|