
Module nnp_activation

Procs

proc sigmoid[T: SomeReal](t: Tensor[T]): Tensor[T] {.noInit.}
Logistic sigmoid activation function, f(x) = 1 / (1 + exp(-x)).
Note: the canonical sigmoid is not numerically stable for large negative values. Please use sigmoid_cross_entropy for the final layer for better stability and performance.
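A minimal usage sketch (an illustration, not part of the module), assuming the library is imported as a whole and toTensor is used to build the input:

import arraymancer

let x = @[-2.0, 0.0, 3.0].toTensor()  # sample input
let y = x.sigmoid()                   # element-wise 1 / (1 + exp(-x))
echo y                                # all values lie in (0, 1)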
proc relu[T](t: Tensor[T]): Tensor[T] {.noInit.}
Rectified linear unit activation function, f(x) = max(0, x).
proc tanh[T: SomeReal](t: Tensor[T]): Tensor[T] {.noInit.}
Hyperbolic tangent activation function, f(x) = tanh(x).
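The other out-of-place activations follow the same call pattern; a short sketch under the same assumptions as above:

import arraymancer

let x = @[-1.5, 0.0, 2.0].toTensor()
echo x.relu()   # negative entries clamped to 0.0
echo x.tanh()   # values squashed into (-1, 1)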
proc msigmoid[T: SomeReal](t: var Tensor[T])
In-place logistic sigmoid activation function, f(x) = 1 / (1 + exp(-x)).
Note: the canonical sigmoid is not numerically stable for large negative values.
proc mrelu[T](t: var Tensor[T])
In-place rectified linear unit activation function.
proc mtanh[T: SomeReal](t: var Tensor[T])
In-place hyperbolic tangent activation function.
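The m-prefixed procs mutate their argument in place instead of allocating a new tensor, so the tensor must be declared with var; a sketch under the same assumptions:

import arraymancer

var x = @[-1.0, 0.5, 4.0].toTensor()
x.mrelu()      # x now holds max(0, x) element-wise
x.msigmoid()   # x now holds the sigmoid of those values
echo x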
proc sigmoid_backward[T](gradient: Tensor[T]; cached_tensor: Tensor[T]): Tensor[T] {.noInit.}
Backward pass of the sigmoid activation: computes the gradient with respect to the input from the upstream gradient and a tensor cached during the forward pass.
proc relu_backward[T](gradient: Tensor[T]; cached_tensor: Tensor[T]): Tensor[T] {.noInit.}
Backward pass of the ReLU activation, using the upstream gradient and a tensor cached during the forward pass.
proc tanh_backward[T](gradient: Tensor[T]; cached_tensor: Tensor[T]): Tensor[T] {.noInit.}
Backward pass of the tanh activation, using the upstream gradient and a tensor cached during the forward pass.
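A sketch of wiring a backward proc into a manual backward pass, assuming the cached tensor is the output saved from the forward call and using a hypothetical upstream gradient:

import arraymancer

let x = @[-2.0, 0.0, 3.0].toTensor()
let y = x.sigmoid()                         # forward output, kept for the backward pass
let upstream = @[1.0, 1.0, 1.0].toTensor()  # hypothetical gradient from the next layer
let dx = sigmoid_backward(upstream, y)      # gradient with respect to x
echo dx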