Arraymancer

conv2D


Types

Conv2DGate {...}{.final.}[TT] = ref object of Gate[TT]
  cached_input: Variable[TT]
  weight, bias: Variable[TT]
  padding, stride: Size2D

Procs

proc conv2d[TT](input, weight: Variable[TT]; bias: Variable[TT] = nil;
               padding: Size2D = (0, 0); stride: Size2D = (1, 1)): Variable[TT]
Input:
  • input: a Variable wrapping a 4D Tensor, a batch of images of shape [N, C_in, H_in, W_in]
  • weight: a Variable wrapping a 4D Tensor of convolution kernel weights of shape [C_out, C_in, kH, kW]
  • bias: a nil-able Variable wrapping a 3D Tensor bias of shape [C_out, 1, 1]
  • padding: a Size2D tuple with the height and width of the padding
  • stride: a Size2D tuple with the height and width of the stride
Returns:
  • A Variable wrapping a convolved 4D Tensor of shape [N, C_out, H_out, W_out], where
    H_out = (H_in + 2*padding.height - kH) / stride.height + 1
    W_out = (W_in + 2*padding.width - kW) / stride.width + 1
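A minimal usage sketch of the signature above, assuming the standard Arraymancer autograd API (newContext, ctx.variable, zeros); the tensor shapes and the (1,1) padding/stride values are illustrative, and the expected output shape follows from the H_out/W_out formula:

```nim
import arraymancer

# An autograd context tracks operations for backpropagation.
let ctx = newContext Tensor[float32]

# Batch of 2 images, 3 input channels, 28x28 pixels: [N, C_in, H_in, W_in]
let input  = ctx.variable zeros[float32](2, 3, 28, 28)
# 16 output channels, 3 input channels, 3x3 kernel: [C_out, C_in, kH, kW]
let weight = ctx.variable zeros[float32](16, 3, 3, 3)
# Bias of shape [C_out, 1, 1]
let bias   = ctx.variable zeros[float32](16, 1, 1)

# With padding (1,1) and stride (1,1):
#   H_out = (28 + 2*1 - 3) div 1 + 1 = 28, and likewise W_out = 28,
# so the result has shape [2, 16, 28, 28] = [N, C_out, H_out, W_out].
let output = conv2d(input, weight, bias, padding = (1, 1), stride = (1, 1))
echo output.value.shape
```

With padding (0,0) instead, H_out = (28 - 3) div 1 + 1 = 26, shrinking the output to [2, 16, 26, 26].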
Future TODO:
In the future the conv2D layer will support different input layouts.
Warning ⚠:
  • Experimental: there are no tests yet for this layer.