
src/arraymancer/nn_primitives/nnp_conv2d_cudnn


Procs

proc conv2d[T: SomeFloat](input, kernel, bias: CudaTensor[T];
                          padding: SizeHW = [0, 0];
                          strides, dilation: SizeHW = [1, 1]): CudaTensor[T] {.
    noinit.}
Computes a 2D convolution of a batch of images with the given kernel and bias, using cuDNN.

Input:
- ``input`` 4D Tensor batch of images of the size [N,C_in,H_in,W_in]
- ``kernel`` 4D Tensor convolving kernel filters of the size [C_out,C_in,kH,kW]
- ``bias`` 3D Tensor bias of the size [C_out,1,1]
- ``padding`` SizeHW tuple with height and width of the padding
- ``strides`` SizeHW tuple with height and width of the convolution strides
- ``dilation`` SizeHW tuple with the height and width spacing between kernel elements
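The output spatial dimensions follow from the input size, padding, strides and dilation via the standard convolution arithmetic used by cuDNN. A minimal sketch of that formula (the helper name `conv2d_out_size` is illustrative, not part of the Arraymancer API):

```python
def conv2d_out_size(in_size, k, padding=0, stride=1, dilation=1):
    # Standard convolution output-size formula: the dilated kernel spans
    # dilation*(k-1)+1 elements, and the result is floor-divided by the stride.
    return (in_size + 2 * padding - dilation * (k - 1) - 1) // stride + 1

# Example: a 32x32 image convolved with a 3x3 kernel and padding 1
# keeps its spatial size, so H_out = W_out = 32.
assert conv2d_out_size(32, 3, padding=1) == 32
```

Applied per dimension, this gives the [N,C_out,H_out,W_out] shape of the returned tensor.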
proc conv2d_backward[T: SomeFloat](input, kernel, bias: CudaTensor[T];
                                   padding: SizeHW = [0, 0];
                                   strides, dilation: SizeHW = [1, 1];
                                   grad_output: CudaTensor[T]; grad_input,
    grad_kernel, grad_bias: var CudaTensor[T])

Computes the gradients of a 2D convolution. Intended to be used after conv2d to compute the gradients during the backward pass.

Input:

- ``input`` 4D Tensor batch of images of the size [N,C_in,H_in,W_in]
- ``kernel`` 4D Tensor convolving kernel weights of the size [C_out,C_in,kH,kW]
- ``bias`` 3D Tensor bias of the size [C_out,1,1] or an empty tensor for no bias
- ``padding`` SizeHW tuple with height and width of the padding
- ``strides`` SizeHW tuple with height and width of the convolution strides
- ``dilation`` SizeHW tuple with the height and width spacing between kernel elements
- ``grad_output`` 4D tensor gradient of the next layer of the size [N,C_out,H_out,W_out]
- ``grad_input`` tensor where the gradient w.r.t input will be written
- ``grad_kernel`` tensor where the gradient w.r.t convolution kernel will be written
- ``grad_bias`` tensor where the gradient w.r.t bias will be written

Note: ``grad_input``, ``grad_kernel`` and ``grad_bias`` will be overwritten. They must have the same shape as the corresponding ``input``, ``kernel`` and ``bias`` tensors.
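The shape constraints above can be sketched with plain tuples. All concrete sizes below (N=8, C_in=3, C_out=16, 3x3 kernel, 32x32 input, padding 1) are hypothetical values chosen for illustration, not defaults of the library:

```python
# Hypothetical forward shapes for a batch of 8 RGB 32x32 images
input_shape  = (8, 3, 32, 32)   # [N, C_in, H_in, W_in]
kernel_shape = (16, 3, 3, 3)    # [C_out, C_in, kH, kW]
bias_shape   = (16, 1, 1)       # [C_out, 1, 1]

# grad_output comes from the next layer: [N, C_out, H_out, W_out],
# with H_out/W_out from the convolution arithmetic (padding=1, stride=1, dilation=1)
H_out = (32 + 2 * 1 - 1 * (3 - 1) - 1) // 1 + 1
grad_output_shape = (8, 16, H_out, H_out)

# Per the note above, each gradient buffer must match its forward counterpart
grad_input_shape  = input_shape
grad_kernel_shape = kernel_shape
grad_bias_shape   = bias_shape
```

Allocating the three gradient tensors with exactly these shapes before the call is the caller's responsibility, since conv2d_backward overwrites them in place.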
