Arraymancer

Module optimizers

Types

Sgd[TT] = object
  params*: seq[Variable[TT]]
  lr*: float32
Stochastic gradient descent optimizer: params holds the variables to optimize and lr is the learning rate.
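The update rule behind this type is plain gradient descent: each parameter moves against its gradient, scaled by lr. A minimal illustrative sketch (not the library's actual implementation; the value/grad field access and zeros_like reset are assumptions about the Variable internals):

```nim
# Illustrative sketch of one SGD step over the tracked parameters.
# For each variable v: v ← v − lr · ∇v, then the gradient is cleared
# so the next backward pass starts from zero.
proc sketchUpdate[TT](o: Sgd[TT]) =
  for v in o.params:
    v.value -= o.lr * v.grad   # gradient-descent step
    v.grad = v.grad.zeros_like # reset accumulated gradient
```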

Procs

proc zeroGrads(o: Optimizer)
Reset the gradients of all parameters tracked by the optimizer.
proc newSGD[T](params: varargs[Variable[Tensor[T]]]; learning_rate: T): Sgd[Tensor[T]] {.
deprecated: "Use the optimizer macro instead"
.}
proc update(self: Sgd)
Perform one gradient-descent step on every tracked parameter.
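Although newSGD is deprecated, it shows the basic training-loop shape these procs are used in. A hedged sketch, assuming an autograd Context and a single weight Variable (the loss expression here is a stand-in; a real loop would compute a task-specific loss):

```nim
import arraymancer

# Set up an autograd context and one trainable variable.
# randomTensor and ctx.variable are standard Arraymancer calls;
# the shapes and learning rate are illustrative.
let ctx = newContext Tensor[float32]
let w = ctx.variable(randomTensor[float32](3, 3, 1'f32), requires_grad = true)

var optim = newSGD[float32](w, 0.01'f32)

for epoch in 0 ..< 10:
  let loss = w.sum()  # placeholder scalar loss; assumes sum is an autograd op
  loss.backprop()     # accumulate gradients into w.grad
  optim.update()      # apply one SGD step to w
```

The deprecation pragma points to the optimizer macro as the replacement, which derives the parameter list from a model instead of taking varargs.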

Funcs

func optimizerSGD[M](model: M; learning_rate: SomeReal): Sgd[Tensor[SomeReal]]
Create an SGD optimizer that will update the model's weights.
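optimizerSGD is the model-level entry point: it collects a model's trainable weights and returns an Sgd tracking them. A hedged sketch, assuming a model declared with Arraymancer's network macro (the network name, layer, and shapes are illustrative, and the macro syntax has varied across Arraymancer versions):

```nim
import arraymancer

let ctx = newContext Tensor[float32]

# Hypothetical one-layer model; DemoNet and fc1 are illustrative names.
network ctx, DemoNet:
  layers:
    fc1: Linear(4, 1)
  forward x:
    x.fc1

let model = ctx.init(DemoNet)

# The optimizer now tracks fc1's weight and bias.
let optim = model.optimizerSGD(learning_rate = 0.01'f32)
```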