
src/arraymancer/nn/layers/linear


Types

Linear[T] = object
  weight*: Variable[Tensor[T]]
  bias*: Variable[Tensor[T]]
LinearGate[TT] {.final.} = ref object of Gate[TT]
  
TODO: use a fused AddMatMul gate: C <- alpha * A * B + beta * C

Procs

proc forward[T](self: Linear[T]; input: Variable[Tensor[T]]): Variable[Tensor[T]]
proc init[T](ctx: Context[Tensor[T]]; layerType: typedesc[Linear[T]];
             numInput, numOutput: int): Linear[T]
Initializes a linear layer with numInput input features and numOutput output features. Weights use Kaiming He initialisation, which provides decent performance in most cases; biases are set to zero.
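
A minimal usage sketch (not taken from the Arraymancer documentation): it assumes the arraymancer package is installed, that newContext, variable and zeros are available as exported by the library, and that the input is laid out as [in_features, batch_size] as implied by the linear proc below. The shapes and the layout may differ between Arraymancer versions.

import arraymancer

let ctx = newContext Tensor[float32]

# A layer mapping 784 input features to 10 output features.
let fc = ctx.init(Linear[float32], 784, 10)

echo fc.inShape    # input feature shape (presumably @[784])
echo fc.outShape   # output feature shape (presumably @[10])

# Forward pass on a dummy batch of 32 samples.
# Layout assumption: x is [in_features, batch_size]; see the linear proc below.
let x = ctx.variable(zeros[float32](784, 32))
let y = fc.forward(x)
echo y.value.shape   # expected [10, 32] under the assumed layout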
func inShape[T](self: Linear[T]): seq[int]
proc linear[TT](input, weight: Variable[TT]; bias: Variable[TT] = nil): Variable[TT]
Input:

  • input: the input Variable x
  • weight: the weight Variable
  • bias: an optional bias Variable (default nil)

Return:

  • Weight * x + bias

Future TODO: the linear layer will allow a different input layout so that x can also be of shape [batch_size, in_features]

Warning ⚠:

  • Experimental; there are no tests yet for this layer.
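
A sketch of calling linear directly on Variables, under the same assumptions as the earlier example: weight shaped [out_features, in_features], x shaped [in_features, batch_size] and bias shaped [out_features, 1], as suggested by the Return description above. These shapes are assumptions, not taken from the Arraymancer tests, and may not match every Arraymancer version.

import arraymancer

let ctx = newContext Tensor[float32]

# Assumed shapes: weight [out_features, in_features], bias [out_features, 1],
# x [in_features, batch_size]; check against your Arraymancer version.
let weight = ctx.variable(zeros[float32](10, 784), requires_grad = true)
let bias   = ctx.variable(zeros[float32](10, 1),   requires_grad = true)
let x      = ctx.variable(zeros[float32](784, 32))

# Weight * x + bias; shape [10, 32] under these assumptions.
let y = linear(x, weight, bias)
echo y.value.shape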
func outShape[T](self: Linear[T]): seq[int]