
src/arraymancer/nn_primitives/nnp_activation


Procs

proc mrelu[T](t: var Tensor[T])
In-place ReLU (rectified linear unit) activation, f(x) = max(0, x).
proc msigmoid[T: SomeFloat](t: var Tensor[T])
In-place logistic sigmoid activation function, f(x) = 1 / (1 + exp(-x)). Note: the canonical sigmoid is not numerically stable for large negative values.
proc mtanh[T: SomeFloat](t: var Tensor[T])
In-place hyperbolic tangent activation, f(x) = tanh(x).
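
Example (a minimal usage sketch for the in-place variants; it is not part of the module's generated documentation and assumes arraymancer is installed, using toTensor to build a Tensor[float64] from a nested array literal):

  import arraymancer

  var x = [[1.0, -2.0],
           [0.5, -0.25]].toTensor   # Tensor[float64] of shape [2, 2]

  x.mrelu()      # negative entries are clamped to 0, in place
  x.msigmoid()   # logistic sigmoid applied in place to the current values
  x.mtanh()      # hyperbolic tangent applied in place to the current values
  echo x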
proc relu[T](t: Tensor[T]): Tensor[T] {.noinit.}
ReLU (rectified linear unit) activation function, f(x) = max(0, x).
proc relu_backward[T](gradient: Tensor[T]; cached_tensor: Tensor[T]): Tensor[T] {.noinit.}
Computes the gradient of ReLU with respect to its input from the upstream gradient and the tensor cached during the forward pass.
proc sigmoid[T: SomeFloat](t: Tensor[T]): Tensor[T] {.noinit.}
Logistic sigmoid activation function, f(x) = 1 / (1 + exp(-x)). Note: the canonical sigmoid is not numerically stable for large negative values. Please use sigmoid_cross_entropy for the final layer for better stability and performance.
proc sigmoid_backward[T](gradient: Tensor[T]; cached_tensor: Tensor[T]): Tensor[T] {.noinit.}
Computes the gradient of the sigmoid with respect to its input from the upstream gradient and the sigmoid output cached during the forward pass.
proc tanh[T: SomeFloat](t: Tensor[T]): Tensor[T] {.noinit.}
Hyperbolic tangent activation function, f(x) = tanh(x).
proc tanh_backward[T](gradient: Tensor[T]; cached_tensor: Tensor[T]): Tensor[T] {.noinit.}
Computes the gradient of tanh with respect to its input from the upstream gradient and the tanh output cached during the forward pass.
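
Example (a minimal sketch of pairing the out-of-place activations with their backward procs; it is not part of the module's generated documentation and assumes that the forward output is what should be passed as cached_tensor, which matches the usual form of these derivatives):

  import arraymancer

  # Forward pass: keep the activation outputs, they are reused
  # as cached_tensor in the backward pass.
  let x = [[1.0, -2.0],
           [0.5, -0.25]].toTensor
  let a = x.relu()
  let s = x.sigmoid()

  # Backward pass: propagate an upstream gradient through each activation.
  let upstream = ones[float64](2, 2)
  let dx_relu    = relu_backward(upstream, a)      # zero where the input was negative
  let dx_sigmoid = sigmoid_backward(upstream, s)   # upstream * s * (1 - s)
  echo dx_relu
  echo dx_sigmoid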