Index
Modules:
accessors, accessors_macros_read, accessors_macros_syntax, accessors_macros_write, accuracy_score, aggregate, algebra, algorithms, align_unroller, ast_utils,
autograd, autograd_common, auxiliary_blas, auxiliary_lapack, blas_l3_gemm, blis, common_error_functions, compiler_optim_hints, complex, conv, conv2D,
cpuinfo_x86, cross_entropy_losses, cublas, cuda, cuda_global_state, cudnn, cudnn_conv_interface, data_structure, datatypes, dbscan,
decomposition, decomposition_lapack, decomposition_rand, deprecate, display, display_cuda, distances, distributions, dynamic_stack_arrays,
einsum, embedding, exporting, filling_data, flatten, foreach, foreach_common, foreach_staged, functional,
gates_basic, gates_blas, gates_hadamard, gates_reduce, gates_shapeshifting_concat_split, gates_shapeshifting_views, gcn,
gemm, gemm_packing, gemm_prepacked, gemm_tiling, gemm_ukernel_avx, gemm_ukernel_avx2, gemm_ukernel_avx512, gemm_ukernel_avx_fma,
gemm_ukernel_dispatch, gemm_ukernel_generator, gemm_ukernel_generic, gemm_ukernel_sse, gemm_ukernel_sse2, gemm_ukernel_sse4_1, gemm_utils,
global_config, gru, higher_order_applymap, higher_order_foldreduce, imdb, incl_accessors_cuda, incl_higher_order_cuda, incl_kernels_cuda,
init, init_colmajor, init_copy_cpu, init_copy_cuda, init_cpu, init_cuda, init_opencl, initialization,
io, io_csv, io_hdf5, io_image, io_npy, io_stream_readers, kde, kdtree, kmeans,
lapack, least_squares, least_squares_lapack, linear, linear_algebra, linear_systems,
math_functions, math_ops_fusion, maxpool2D, mean_square_error_loss, memory, memory_optimization_hints, ml, mnist,
naive_l2_gemv, neighbors, nested_containers, nlp, nn, nn_dsl, nn_primitives,
nnp_activation, nnp_conv2d_cudnn, nnp_convolution, nnp_embedding, nnp_gru, nnp_linear, nnp_maxpooling, nnp_numerical_gradient,
nnp_sigmoid_cross_entropy, nnp_softmax, nnp_softmax_cross_entropy, nnpack, nnpack_interface,
opencl_backend, opencl_global_state, openmp,
operators_blas_l1, operators_blas_l1_cuda, operators_blas_l1_opencl, operators_blas_l2l3, operators_blas_l2l3_cuda, operators_blas_l2l3_opencl,
operators_broadcasted, operators_broadcasted_cuda, operators_broadcasted_opencl, operators_comparison, operators_logical,
optim_ops_fusion, optimizers, overload,
p_accessors, p_accessors_macros_desugar, p_accessors_macros_read, p_accessors_macros_write, p_activation, p_checks, p_complex, p_display,
p_empty_tensors, p_init_cuda, p_init_opencl, p_kernels_interface_cuda, p_kernels_interface_opencl, p_logsumexp, p_nnp_checks, p_nnp_types,
p_operator_blas_l2l3, p_shapeshifting, pca, relu, selectors, sequninit,
shapeshifting, shapeshifting_cuda, shapeshifting_opencl, sigmoid, simd, softmax, solve_lapack, special_matrices, stats,
std_version_types, syntactic_sugar, tanh, tensor, tensor_compare_helper, tensor_cuda, tensor_opencl, tokenizers, triangular, ufunc, util.
API symbols
`!=.`:
operators_comparison: proc `!=.`[T](t: Tensor[T]; value: T): Tensor[bool]
operators_comparison: proc `!=.`[T](a, b: Tensor[T]): Tensor[bool]
operators_comparison: template `!=.`[T](value: T; t: Tensor[T]): Tensor[bool]
`$`:
display: proc `$`[T](t: Tensor[T]): string
display_cuda: proc `$`[T](t: CudaTensor[T]): string
dynamic_stack_arrays: proc `$`(a: DynamicStackArray): string
pca: proc `$`(pca: PCA_Detailed): string
`&`:
dynamic_stack_arrays: proc `&`(a, b: DynamicStackArray): DynamicStackArray
dynamic_stack_arrays: proc `&`[T](a: DynamicStackArray[T]; value: T): DynamicStackArray[T]
`*.=`:
operators_broadcasted: proc `*.=`[T: SomeNumber](t: var Tensor[Complex64]; val: T)
operators_broadcasted: proc `*.=`[T: SomeNumber | Complex[float32] | Complex[float64]](t: var Tensor[T]; val: T)
operators_broadcasted: proc `*.=`[T: SomeNumber | Complex[float32] | Complex[float64]](a: var Tensor[T]; b: Tensor[T])
operators_broadcasted_cuda: proc `*.=`[T: SomeFloat](a: var CudaTensor[T]; b: CudaTensor[T])
`*.`:
gates_hadamard: proc `*.`[TT](a, b: Variable[TT]): Variable[TT]
operators_broadcasted: proc `*.`[T: SomeNumber](val: T; t: Tensor[Complex64]): Tensor[Complex64]
operators_broadcasted: proc `*.`[T: SomeNumber | Complex[float32] | Complex[float64]](val: T; t: Tensor[T]): Tensor[T]
operators_broadcasted: proc `*.`[T: SomeNumber](t: Tensor[Complex64]; val: T): Tensor[Complex64]
operators_broadcasted: proc `*.`[T: SomeNumber | Complex[float32] | Complex[float64]](t: Tensor[T]; val: T): Tensor[T]
operators_broadcasted: proc `*.`[T: SomeNumber | Complex[float32] | Complex[float64]](a, b: Tensor[T]): Tensor[T]
operators_broadcasted_cuda: proc `*.`[T: SomeFloat](a, b: CudaTensor[T]): CudaTensor[T]
operators_broadcasted_opencl: proc `*.`[T: SomeFloat](a, b: ClTensor[T]): ClTensor[T]
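For orientation, a minimal sketch of the broadcasted `.`-suffixed operators on CPU tensors (assuming the umbrella `import arraymancer`, which brings `toTensor` and these operators into scope; values illustrative):

    import arraymancer

    let a = [[1.0, 2.0], [3.0, 4.0]].toTensor
    let b = [[10.0, 20.0], [30.0, 40.0]].toTensor
    echo a *. b    # elementwise (Hadamard) product
    echo a *. 2.0  # broadcasted tensor-scalar multiply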
`*=`:
operators_blas_l1: proc `*=`[T: SomeNumber | Complex[float32] | Complex[float64]](t: var Tensor[T]; a: T)
operators_blas_l1_cuda: proc `*=`[T: SomeFloat](t: var CudaTensor[T]; a: T)
`*`:
gates_blas: proc `*`[TT](a, b: Variable[TT]): Variable[TT]
operators_blas_l1: proc `*`[T: SomeNumber | Complex[float32] | Complex[float64]](a: T; t: Tensor[T]): Tensor[T]
operators_blas_l1: proc `*`[T: SomeNumber | Complex[float32] | Complex[float64]](t: Tensor[T]; a: T): Tensor[T]
operators_blas_l1_cuda: proc `*`[T: SomeFloat](t: CudaTensor[T]; a: T): CudaTensor[T]
operators_blas_l1_cuda: proc `*`[T: SomeFloat](a: T; t: CudaTensor[T]): CudaTensor[T]
operators_blas_l2l3: proc `*`[T: Complex[float32] or Complex[float64]](a, b: Tensor[T]): Tensor[T]
operators_blas_l2l3: proc `*`[T: SomeNumber](a, b: Tensor[T]): Tensor[T]
operators_blas_l2l3_cuda: proc `*`[T: SomeFloat](a, b: CudaTensor[T]): CudaTensor[T]
operators_blas_l2l3_opencl: proc `*`[T: SomeFloat](a, b: ClTensor[T]): ClTensor[T]
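The `*` overloads dispatch on tensor rank: matrix-matrix and matrix-vector products (BLAS L2/L3) for two tensors, scalar scaling (BLAS L1) for a tensor and a scalar. A minimal CPU sketch (assuming the umbrella `import arraymancer`; values illustrative):

    import arraymancer

    let m = [[1.0, 2.0], [3.0, 4.0]].toTensor
    let v = [1.0, 1.0].toTensor
    echo m * m    # matrix-matrix product
    echo m * v    # matrix-vector product
    echo m * 2.0  # scalar scaling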
`+.=`:
operators_broadcasted: proc `+.=`[T: SomeNumber](t: var Tensor[Complex64]; val: T)
operators_broadcasted: proc `+.=`[T: SomeNumber | Complex[float32] | Complex[float64]](t: var Tensor[T]; val: T)
operators_broadcasted: proc `+.=`[T: SomeNumber | Complex[float32] | Complex[float64]](a: var Tensor[T]; b: Tensor[T])
operators_broadcasted_cuda: proc `+.=`[T: SomeFloat](a: var CudaTensor[T]; b: CudaTensor[T])
operators_broadcasted_cuda: proc `+.=`[T: SomeFloat](t: var CudaTensor[T]; val: T)
`+.`:
operators_broadcasted: proc `+.`[T: SomeNumber](val: T; t: Tensor[Complex64]): Tensor[Complex64]
operators_broadcasted: proc `+.`[T: SomeNumber | Complex[float32] | Complex[float64]](val: T; t: Tensor[T]): Tensor[T]
operators_broadcasted: proc `+.`[T: SomeNumber](t: Tensor[Complex64]; val: T): Tensor[Complex64]
operators_broadcasted: proc `+.`[T: SomeNumber | Complex[float32] | Complex[float64]](t: Tensor[T]; val: T): Tensor[T]
operators_broadcasted: proc `+.`[T: SomeNumber | Complex[float32] | Complex[float64]](a, b: Tensor[T]): Tensor[T]
operators_broadcasted_cuda: proc `+.`[T: SomeFloat](a, b: CudaTensor[T]): CudaTensor[T]
operators_broadcasted_cuda: proc `+.`[T: SomeFloat](t: CudaTensor[T]; val: T): CudaTensor[T]
operators_broadcasted_cuda: proc `+.`[T: SomeFloat](val: T; t: CudaTensor[T]): CudaTensor[T]
operators_broadcasted_opencl: proc `+.`[T: SomeFloat](a, b: ClTensor[T]): ClTensor[T]
`+=`:
operators_blas_l1: proc `+=`[T: SomeNumber | Complex[float32] | Complex[float64]](a: var Tensor[T]; b: Tensor[T])
operators_blas_l1_cuda: proc `+=`[T: SomeFloat](a: var CudaTensor[T]; b: CudaTensor[T])
operators_blas_l1_opencl: proc `+=`(dst`gensym286: var ClTensor[float32]; src`gensym286: ClTensor[float32])
operators_blas_l1_opencl: proc `+=`(dst`gensym329: var ClTensor[float64]; src`gensym329: ClTensor[float64])
`+`:
gates_basic: proc `+`[TT](a, b: Variable[TT]): Variable[TT]
gemm_utils: proc `+`(p: ptr; offset: int): type(p)
operators_blas_l1: proc `+`[T: SomeNumber | Complex[float32] | Complex[float64]](val: T; a: Tensor[T]): Tensor[T]
operators_blas_l1: proc `+`[T: SomeNumber | Complex[float32] | Complex[float64]](a: Tensor[T]; val: T): Tensor[T]
operators_blas_l1: proc `+`[T: SomeNumber | Complex[float32] | Complex[float64]](a, b: Tensor[T]): Tensor[T]
operators_blas_l1_cuda: proc `+`[T: SomeFloat](a, b: CudaTensor[T]): CudaTensor[T]
operators_blas_l1_opencl: proc `+`(a`gensym30, b`gensym30: ClTensor[float32]): ClTensor[float32]
operators_blas_l1_opencl: proc `+`(a`gensym101, b`gensym101: ClTensor[float64]): ClTensor[float64]
`-.=`:
operators_broadcasted: proc `-.=`[T: SomeNumber](t: var Tensor[Complex64]; val: T)
operators_broadcasted: proc `-.=`[T: SomeNumber | Complex[float32] | Complex[float64]](t: var Tensor[T]; val: T)
operators_broadcasted: proc `-.=`[T: SomeNumber | Complex[float32] | Complex[float64]](a: var Tensor[T]; b: Tensor[T])
operators_broadcasted_cuda: proc `-.=`[T: SomeFloat](a: var CudaTensor[T]; b: CudaTensor[T])
operators_broadcasted_cuda: proc `-.=`[T: SomeFloat](t: var CudaTensor[T]; val: T)
`-.`:
operators_broadcasted: proc `-.`[T: SomeNumber](val: T; t: Tensor[Complex64]): Tensor[Complex64]
operators_broadcasted: proc `-.`[T: SomeNumber | Complex[float32] | Complex[float64]](val: T; t: Tensor[T]): Tensor[T]
operators_broadcasted: proc `-.`[T: SomeNumber](t: Tensor[Complex64]; val: T): Tensor[Complex64]
operators_broadcasted: proc `-.`[T: SomeNumber | Complex[float32] | Complex[float64]](t: Tensor[T]; val: T): Tensor[T]
operators_broadcasted: proc `-.`[T: SomeNumber | Complex[float32] | Complex[float64]](a, b: Tensor[T]): Tensor[T]
operators_broadcasted_cuda: proc `-.`[T: SomeFloat](a, b: CudaTensor[T]): CudaTensor[T]
operators_broadcasted_cuda: proc `-.`[T: SomeFloat](t: CudaTensor[T]; val: T): CudaTensor[T]
operators_broadcasted_cuda: proc `-.`[T: SomeFloat](val: T; t: CudaTensor[T]): CudaTensor[T]
operators_broadcasted_opencl: proc `-.`[T: SomeFloat](a, b: ClTensor[T]): ClTensor[T]
`-=`:
operators_blas_l1: proc `-=`[T: SomeNumber | Complex[float32] | Complex[float64]](a: var Tensor[T]; b: Tensor[T])
operators_blas_l1_cuda: proc `-=`[T: SomeFloat](a: var CudaTensor[T]; b: CudaTensor[T])
operators_blas_l1_opencl: proc `-=`(dst`gensym372: var ClTensor[float32]; src`gensym372: ClTensor[float32])
operators_blas_l1_opencl: proc `-=`(dst`gensym415: var ClTensor[float64]; src`gensym415: ClTensor[float64])
`-`:
gates_basic: proc `-`[TT](a, b: Variable[TT]): Variable[TT]
math_functions: proc `-`[T: SomeNumber](t: Tensor[T]): Tensor[T]
operators_blas_l1: proc `-`[T: SomeNumber | Complex[float32] | Complex[float64]](val: T; a: Tensor[T]): Tensor[T]
operators_blas_l1: proc `-`[T: SomeNumber | Complex[float32] | Complex[float64]](a: Tensor[T]; val: T): Tensor[T]
operators_blas_l1: proc `-`[T: SomeNumber | Complex[float32] | Complex[float64]](a, b: Tensor[T]): Tensor[T]
operators_blas_l1_cuda: proc `-`[T: SomeFloat](a, b: CudaTensor[T]): CudaTensor[T]
operators_blas_l1_opencl: proc `-`(a`gensym168, b`gensym168: ClTensor[float32]): ClTensor[float32]
operators_blas_l1_opencl: proc `-`(a`gensym227, b`gensym227: ClTensor[float64]): ClTensor[float64]
`.!=`:
operators_comparison: proc `.!=`[T](a, b: Tensor[T]): Tensor[bool]
`.*`:
operators_broadcasted: proc `.*`[T](val: T; t: Tensor[Complex64]): Tensor[Complex64]
operators_broadcasted: proc `.*`[T](val: T; t: Tensor[T]): Tensor[T]
operators_broadcasted: proc `.*`[T](t: Tensor[Complex64]; val: T): Tensor[Complex64]
operators_broadcasted: proc `.*`[T](t: Tensor[T]; val: T): Tensor[T]
operators_broadcasted: proc `.*`[T](a, b: Tensor[T]): Tensor[T]
operators_broadcasted_cuda: proc `.*`[T](a, b: CudaTensor[T]): CudaTensor[T]
operators_broadcasted_opencl: proc `.*`[T](a, b: ClTensor[T]): ClTensor[T]
`.+`:
operators_broadcasted: proc `.+`[T](val: T; t: Tensor[Complex64]): Tensor[Complex64]
operators_broadcasted: proc `.+`[T](val: T; t: Tensor[T]): Tensor[T]
operators_broadcasted: proc `.+`[T](t: Tensor[Complex64]; val: T): Tensor[Complex64]
operators_broadcasted: proc `.+`[T](t: Tensor[T]; val: T): Tensor[T]
operators_broadcasted: proc `.+`[T](a, b: Tensor[T]): Tensor[T]
operators_broadcasted_cuda: proc `.+`[T](a, b: CudaTensor[T]): CudaTensor[T]
operators_broadcasted_opencl: proc `.+`[T](a, b: ClTensor[T]): ClTensor[T]
`.-`:
operators_broadcasted: proc `.-`[T](val: T; t: Tensor[Complex64]): Tensor[Complex64]
operators_broadcasted: proc `.-`[T](val: T; t: Tensor[T]): Tensor[T]
operators_broadcasted: proc `.-`[T](t: Tensor[Complex64]; val: T): Tensor[Complex64]
operators_broadcasted: proc `.-`[T](t: Tensor[T]; val: T): Tensor[T]
operators_broadcasted: proc `.-`[T](a, b: Tensor[T]): Tensor[T]
operators_broadcasted_cuda: proc `.-`[T](a, b: CudaTensor[T]): CudaTensor[T]
operators_broadcasted_opencl: proc `.-`[T](a, b: ClTensor[T]): ClTensor[T]
`...`:
accessors_macros_syntax: const `...`
`..<`:
accessors_macros_syntax: proc `..<`(a: int; s: Step): SteppedSlice
`..^`:
accessors_macros_syntax: proc `..^`(a: int; s: Step): SteppedSlice
`..`:
accessors_macros_syntax: proc `..`(a: int; s: Step): SteppedSlice
`./`:
operators_broadcasted: proc `./`[T](val: T; t: Tensor[Complex64]): Tensor[Complex64]
operators_broadcasted: proc `./`[T](val: T; t: Tensor[T]): Tensor[T]
operators_broadcasted: proc `./`[T](t: Tensor[Complex64]; val: T): Tensor[Complex64]
operators_broadcasted: proc `./`[T](t: Tensor[T]; val: T): Tensor[T]
operators_broadcasted: proc `./`[T](a, b: Tensor[T]): Tensor[T]
operators_broadcasted_cuda: proc `./`[T](a, b: CudaTensor[T]): CudaTensor[T]
operators_broadcasted_opencl: proc `./`[T](a, b: ClTensor[T]): ClTensor[T]
`.<=`:
operators_comparison: proc `.<=`[T](a, b: Tensor[T]): Tensor[bool]
`.<`:
operators_comparison: proc `.<`[T](a, b: Tensor[T]): Tensor[bool]
`.=*`:
operators_broadcasted: proc `.=*`[T](t: var Tensor[Complex64]; val: T)
operators_broadcasted_cuda: proc `.=*`[T](a: var CudaTensor[T]; b: CudaTensor[T])
`.=+`:
operators_broadcasted: proc `.=+`[T](t: var Tensor[Complex64]; val: T)
operators_broadcasted_cuda: proc `.=+`[T](a: var CudaTensor[T]; b: CudaTensor[T])
`.=-`:
operators_broadcasted: proc `.=-`[T](t: var Tensor[Complex64]; val: T)
operators_broadcasted_cuda: proc `.=-`[T](a: var CudaTensor[T]; b: CudaTensor[T])
`.=/`:
operators_broadcasted: proc `.=/`[T](t: var Tensor[Complex64]; val: T)
operators_broadcasted_cuda: proc `.=/`[T](a: var CudaTensor[T]; b: CudaTensor[T])
`.==`:
operators_comparison: proc `.==`[T](a, b: Tensor[T]): Tensor[bool]
`.>=`:
operators_comparison: proc `.>=`[T](a, b: Tensor[T]): Tensor[bool]
`.>`:
operators_comparison: proc `.>`[T](a, b: Tensor[T]): Tensor[bool]
`.^=`:
operators_broadcasted: proc `.^=`[T](t: var Tensor[Complex64]; val: T)
`.^`:
operators_broadcasted: proc `.^`[T](val: T; t: Tensor[Complex64]): Tensor[Complex64]
operators_broadcasted: proc `.^`[T](base: T; t: Tensor[T]): Tensor[T]
operators_broadcasted: proc `.^`[T](t: Tensor[Complex64]; val: T): Tensor[Complex64]
operators_broadcasted: proc `.^`[T](t: Tensor[T]; exponent: T): Tensor[T]
`/.=`:
operators_broadcasted: proc `/.=`[T: SomeNumber](t: var Tensor[Complex64]; val: T)
operators_broadcasted: proc `/.=`[T: SomeNumber | Complex[float32] | Complex[float64]](t: var Tensor[T]; val: T)
operators_broadcasted: proc `/.=`[T: SomeNumber | Complex[float32] | Complex[float64]](a: var Tensor[T]; b: Tensor[T])
operators_broadcasted_cuda: proc `/.=`[T: SomeFloat](a: var CudaTensor[T]; b: CudaTensor[T])
`/.`:
operators_broadcasted: proc `/.`[T: SomeNumber](val: T; t: Tensor[Complex64]): Tensor[Complex64]
operators_broadcasted: proc `/.`[T: SomeNumber | Complex[float32] | Complex[float64]](val: T; t: Tensor[T]): Tensor[T]
operators_broadcasted: proc `/.`[T: SomeNumber](t: Tensor[Complex64]; val: T): Tensor[Complex64]
operators_broadcasted: proc `/.`[T: SomeNumber | Complex[float32] | Complex[float64]](t: Tensor[T]; val: T): Tensor[T]
operators_broadcasted: proc `/.`[T: SomeNumber | Complex[float32] | Complex[float64]](a, b: Tensor[T]): Tensor[T]
operators_broadcasted_cuda: proc `/.`[T: SomeFloat](a, b: CudaTensor[T]): CudaTensor[T]
operators_broadcasted_cuda: proc `/.`[T: SomeFloat](val: T; t: CudaTensor[T]): CudaTensor[T]
operators_broadcasted_opencl: proc `/.`[T: SomeFloat](a, b: ClTensor[T]): ClTensor[T]
`/=`:
operators_blas_l1: proc `/=`[T: SomeFloat | Complex[float32] | Complex[float64]](t: var Tensor[T]; a: T)
operators_blas_l1: proc `/=`[T: SomeInteger](t: var Tensor[T]; a: T)
operators_blas_l1_cuda: proc `/=`[T: SomeFloat](t: var CudaTensor[T]; a: T)
`/`:
operators_blas_l1: proc `/`[T: SomeNumber | Complex[float32] | Complex[float64]](t: Tensor[T]; a: T): Tensor[T]
operators_blas_l1_cuda: proc `/`[T: SomeFloat](t: CudaTensor[T]; val: T): CudaTensor[T]
`<.`:
operators_comparison: proc `<.`[T](t: Tensor[T]; value: T): Tensor[bool]
operators_comparison: proc `<.`[T](a, b: Tensor[T]): Tensor[bool]
operators_comparison: template `<.`[T](value: T; t: Tensor[T]): Tensor[bool]
`<=.`:
operators_comparison: proc `<=.`[T](t: Tensor[T]; value: T): Tensor[bool]
operators_comparison: proc `<=.`[T](a, b: Tensor[T]): Tensor[bool]
operators_comparison: template `<=.`[T](value: T; t: Tensor[T]): Tensor[bool]
`<`:
tensor_compare_helper: proc `<`[T](s1, s2: Tensor[T]): bool
`==.`:
operators_comparison: proc `==.`[T](t: Tensor[T]; value: T): Tensor[bool]
operators_comparison: proc `==.`[T](a, b: Tensor[T]): Tensor[bool]
operators_comparison: template `==.`[T](value: T; t: Tensor[T]): Tensor[bool]
`==`:
dynamic_stack_arrays: proc `==`(a, s: DynamicStackArray): bool
dynamic_stack_arrays: proc `==`[T](a: DynamicStackArray[T]; s: openArray[T]): bool
operators_comparison: proc `==`[T](a, b: Tensor[T]): bool
`>.`:
operators_comparison: proc `>.`[T](t: Tensor[T]; value: T): Tensor[bool]
operators_comparison: proc `>.`[T](a, b: Tensor[T]): Tensor[bool]
operators_comparison: template `>.`[T](value: T; t: Tensor[T]): Tensor[bool]
`>=.`:
operators_comparison: proc `>=.`[T](t: Tensor[T]; value: T): Tensor[bool]
operators_comparison: proc `>=.`[T](a, b: Tensor[T]): Tensor[bool]
operators_comparison: template `>=.`[T](value: T; t: Tensor[T]): Tensor[bool]
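A minimal sketch of the elementwise comparison operators, which return a `Tensor[bool]` mask (assuming the umbrella `import arraymancer`; values illustrative):

    import arraymancer

    let x = [1, 5, 3].toTensor
    echo x >. 2                    # elementwise tensor-scalar comparison
    echo x ==. [1, 0, 3].toTensor  # elementwise tensor-tensor equality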
`@`:
dynamic_stack_arrays: proc `@`[T](a: DynamicStackArray[T]): seq[T]
`[]=`:
accessors_macros_write: macro `[]=`[T](t: var Tensor[T]; args: varargs[untyped]): untyped
accessors_macros_write: template `[]=`[T](t: Tensor[T]; args: varargs[untyped]): untyped
datatypes: template `[]=`[T](v: RawMutableView[T]; idx: int; val: T)
dynamic_stack_arrays: proc `[]=`[T](a: var DynamicStackArray[T]; idx: Index; v: T)
gemm_utils: template `[]=`[T](view: MatrixView[T]; row, col: Natural; value: T)
`[]`:
accessors_macros_read: macro `[]`[T](t: AnyTensor[T]; args: varargs[untyped]): untyped
datatypes: template `[]`[T](v: RawImmutableView[T]; idx: int): T
datatypes: template `[]`[T](v: RawMutableView[T]; idx: int): var T
dynamic_stack_arrays: proc `[]`[T](a: DynamicStackArray[T]; idx: Index): T
dynamic_stack_arrays: proc `[]`[T](a: var DynamicStackArray[T]; idx: Index): var T
dynamic_stack_arrays: proc `[]`[T](a: DynamicStackArray[T]; slice: Slice[int]): DynamicStackArray[T]
gates_shapeshifting_views: template `[]`[TT](v: Variable[TT]; args: varargs[untyped]): Variable[TT]
gemm_utils: template `[]`[T](view: MatrixView[T]; row, col: Natural): T
`^.=`:
operators_broadcasted: proc `^.=`[T: SomeNumber](t: var Tensor[Complex64]; val: T)
operators_broadcasted: proc `^.=`[T: SomeFloat | Complex[float32] | Complex[float64]](t: var Tensor[T]; exponent: T)
`^.`:
operators_broadcasted: proc `^.`[T: SomeNumber](val: T; t: Tensor[Complex64]): Tensor[Complex64]
operators_broadcasted: proc `^.`[T: SomeFloat | Complex[float32] | Complex[float64]](base: T; t: Tensor[T]): Tensor[T]
operators_broadcasted: proc `^.`[T: SomeNumber](t: Tensor[Complex64]; val: T): Tensor[Complex64]
operators_broadcasted: proc `^.`[T: SomeFloat | Complex[float32] | Complex[float64]](t: Tensor[T]; exponent: T): Tensor[T]
`^`:
accessors_macros_syntax: proc `^`(s: Slice): SteppedSlice
accessors_macros_syntax: proc `^`(s: SteppedSlice): SteppedSlice
`_`:
accessors_macros_syntax: const `_`
`and`:
operators_logical: proc `and`(a, b: Tensor[bool]): Tensor[bool]
`div`:
operators_blas_l1: proc `div`[T: SomeInteger](t: Tensor[T]; a: T): Tensor[T]
`mod`:
operators_blas_l1: proc `mod`[T: SomeNumber](val: T; t: Tensor[T]): Tensor[T]
operators_blas_l1: proc `mod`[T: SomeNumber](t: Tensor[T]; val: T): Tensor[T]
operators_broadcasted: proc `mod`[T: SomeNumber](a, b: Tensor[T]): Tensor[T]
`not`:
operators_logical: proc `not`(a: Tensor[bool]): Tensor[bool]
`or`:
operators_logical: proc `or`(a, b: Tensor[bool]): Tensor[bool]
`xor`:
operators_logical: proc `xor`(a, b: Tensor[bool]): Tensor[bool]
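The logical operators combine such boolean masks elementwise. A minimal sketch (values illustrative):

    import arraymancer

    let mask = [true, false, true].toTensor
    let other = [true, true, false].toTensor
    echo mask and other  # [true, false, false]
    echo not mask        # [false, true, false]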
`|+`:
accessors_macros_syntax: proc `|+`(b, step: int): Step
accessors_macros_syntax: proc `|+`(s: Slice[int]; step: int): SteppedSlice
accessors_macros_syntax: proc `|+`(ss: SteppedSlice; step: int): SteppedSlice
`|-`:
accessors_macros_syntax: proc `|-`(b, step: int): Step
accessors_macros_syntax: proc `|-`(s: Slice[int]; step: int): SteppedSlice
accessors_macros_syntax: proc `|-`(ss: SteppedSlice; step: int): SteppedSlice
`|`:
accessors_macros_syntax: proc `|`(b, step: int): Step
accessors_macros_syntax: proc `|`(s: Slice[int]; step: int): SteppedSlice
accessors_macros_syntax: proc `|`(ss: SteppedSlice; step: int): SteppedSlice
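Together with `_` and the `..`/`..<`/`..^` procs above, the `|` step constructors form the slicing DSL consumed by the `[]`/`[]=` accessor macros. A minimal sketch (values illustrative):

    import arraymancer

    let t = [[0, 1, 2, 3],
             [4, 5, 6, 7],
             [8, 9, 10, 11]].toTensor
    echo t[1, _]       # row 1, every column
    echo t[_, 0..<2]   # every row, first two columns
    echo t[0..2|2, _]  # rows 0 and 2 (step 2)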
AAt:
auxiliary_blas: SyrkKind.AAt
abs:
math_functions: proc abs(t: Tensor[Complex[float32]]): Tensor[float32]
math_functions: proc abs(t: Tensor[Complex[float64]]): Tensor[float64]
math_functions: proc abs[T: SomeNumber](t: Tensor[T]): Tensor[T]
absolute_error:
common_error_functions: proc absolute_error[T: SomeFloat](y, y_true: T | Complex[T]): T
common_error_functions: proc absolute_error[T: SomeFloat](y, y_true: Tensor[T] | Tensor[Complex[T]]): Tensor[T]
accuracy_score:
accuracy_score: proc accuracy_score[T](y_pred, y_true: Tensor[T]): float
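A minimal sketch of `accuracy_score`, the fraction of predictions matching the targets (values illustrative):

    import arraymancer

    let y_pred = [0, 1, 1, 0].toTensor
    let y_true = [0, 1, 0, 0].toTensor
    echo accuracy_score(y_pred, y_true)  # 0.75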
Adam:
optimizers: object Adam
add:
dynamic_stack_arrays: proc add[T](a: var DynamicStackArray[T]; value: T)
AddGate:
gates_basic: type AddGate
address:
decomposition_lapack: template address(x: typed): untyped
advanceStridedIteration:
p_accessors: template advanceStridedIteration(coord, backstrides, iter_pos, t, iter_offset, iter_size: typed): untyped
align_raw_data:
memory: proc align_raw_data(T: typedesc; p: pointer): ptr UncheckedArray[T:type]
all:
aggregate: proc all[T](t: Tensor[T]): bool
allocCpuStorage:
datatypes: proc allocCpuStorage[T](storage: var CpuStorage[T]; size: int)
almostEqual:
math_functions: proc almostEqual[T: SomeFloat | Complex32 | Complex64](t1, t2: Tensor[T]; unitsInLastPlace: Natural = 4): Tensor[bool]
any:
aggregate: proc any[T](t: Tensor[T]): bool
AnyMetric:
distances: type AnyMetric
AnyTensor:
data_structure: type AnyTensor
append:
shapeshifting: proc append[T](t: Tensor[T]; values: Tensor[T]): Tensor[T]
shapeshifting: proc append[T](t: Tensor[T]; values: varargs[T]): Tensor[T]
apply:
higher_order_applymap: proc apply[T: KnownSupportsCopyMem](t: var Tensor[T]; f: proc (x: var T))
higher_order_applymap: proc apply[T](t: var Tensor[T]; f: T -> T)
apply2:
higher_order_applymap: proc apply2[T: KnownSupportsCopyMem; U](a: var Tensor[T]; f: proc (x: var T; y: T); b: Tensor[U])
higher_order_applymap: proc apply2[T: not KnownSupportsCopyMem; U](a: var Tensor[T]; f: proc (x: var T; y: T); b: Tensor[U])
apply2_inline:
higher_order_applymap: template apply2_inline[T: KnownSupportsCopyMem; U](dest: var Tensor[T]; src: Tensor[U]; op: untyped): untyped
higher_order_applymap: template apply2_inline[T: not KnownSupportsCopyMem; U](dest: var Tensor[T]; src: Tensor[U]; op: untyped): untyped
apply3_inline:
higher_order_applymap: template apply3_inline[T: KnownSupportsCopyMem; U, V](dest: var Tensor[T]; src1: Tensor[U]; src2: Tensor[V]; op: untyped): untyped
apply_inline:
higher_order_applymap: template apply_inline[T: KnownSupportsCopyMem](t: var Tensor[T]; op: untyped): untyped
higher_order_applymap: template apply_inline[T: not KnownSupportsCopyMem](t: var Tensor[T]; op: untyped): untyped
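The `*_inline` templates inject the current element as `x` (and the second operand's element as `y` for `apply2_inline`), avoiding closure overhead. A minimal sketch:

    import arraymancer

    var t = [1.0, 2.0, 3.0].toTensor
    t.apply_inline(x * 2.0)  # in-place: every element doubled
    echo t                   # [2.0, 4.0, 6.0]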
arange:
init_cpu: proc arange[T: SomeNumber](start, stop, step: T): Tensor[T]
init_cpu: template arange[T: SomeNumber](stop: T): Tensor[T]
init_cpu: template arange[T: SomeNumber](start, stop: T): Tensor[T]
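A minimal sketch of `arange` (half-open interval, like its NumPy namesake):

    import arraymancer

    echo arange(5)               # [0, 1, 2, 3, 4]
    echo arange(1.0, 2.0, 0.25)  # [1.0, 1.25, 1.5, 1.75]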
arccos:
ufunc: proc arccos[T](t`gensym9: Tensor[T]): Tensor[T]
arccosh:
ufunc: proc arccosh[T](t`gensym12: Tensor[T]): Tensor[T]
arcsin:
ufunc: proc arcsin[T](t`gensym10: Tensor[T]): Tensor[T]
arcsinh:
ufunc: proc arcsinh[T](t`gensym13: Tensor[T]): Tensor[T]
arctan:
ufunc: proc arctan[T](t`gensym11: Tensor[T]): Tensor[T]
arctanh:
ufunc: proc arctanh[T](t`gensym14: Tensor[T]): Tensor[T]
argmax:
aggregate: proc argmax[T](arg: Tensor[T]; axis: int): Tensor[int]
argmax_max:
aggregate: proc argmax_max[T: SomeNumber](arg: Tensor[T]; axis: int): tuple[indices: Tensor[int], maxes: Tensor[T]]
argmin:
aggregate: proc argmin[T](arg: Tensor[T]; axis: int): Tensor[int]
argmin_min:
aggregate: proc argmin_min[T: SomeNumber](arg: Tensor[T]; axis: int): tuple[indices: Tensor[int], mins: Tensor[T]]
argsort:
algorithms: proc argsort[T](t: Tensor[T]; order = SortOrder.Ascending; toCopy = false): Tensor[int]
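A minimal sketch of `argsort`, which returns the permutation that sorts the tensor:

    import arraymancer

    let t = [30, 10, 20].toTensor
    echo t.argsort()  # [1, 2, 0]: indices that sort t ascending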
ArrayOfSlices:
accessors_macros_syntax: type ArrayOfSlices
asContiguous:
shapeshifting: proc asContiguous[T](t: Tensor[T]; layout: OrderType = rowMajor; force: bool = false): Tensor[T]
shapeshifting_cuda: proc asContiguous[T: SomeFloat](t: CudaTensor[T]; layout: OrderType = colMajor; force: bool = false): CudaTensor[T]
asCudnnType:
cudnn: template asCudnnType[T: SomeFloat](typ: typedesc[T]): cudnnDataType_t
assume_aligned:
compiler_optim_hints: template assume_aligned[T](data: ptr T; alignment: static int = LASER_MEM_ALIGN): ptr T
memory_optimization_hints: template assume_aligned[T](data: ptr T; n: csize_t): ptr T
asType:
ufunc: proc asType[T: SomeNumber; U: Complex](t: Tensor[T]; typ: typedesc[U]): Tensor[U]
ufunc: proc asType[T; U: not Complex](t: Tensor[T]; typ: typedesc[U]): Tensor[U]
at:
syntactic_sugar: template at[T](t: Tensor[T]; args: varargs[untyped]): untyped
AtA:
auxiliary_blas: SyrkKind.AtA
atAxisIndex:
accessors: proc atAxisIndex[T](t: Tensor[T]; axis, idx: int; length = 1): Tensor[T]
atContiguousIndex:
accessors: proc atContiguousIndex[T](t: Tensor[T]; idx: int): T
accessors: proc atContiguousIndex[T](t: var Tensor[T]; idx: int): var T
atIndex:
p_accessors: proc atIndex[T](t: Tensor[T]; idx: varargs[int]): T
p_accessors: proc atIndex[T](t: var Tensor[T]; idx: varargs[int]): var T
atIndexMut:
p_accessors: proc atIndexMut[T](t: var Tensor[T]; idx: varargs[int]; val: T)
at_mut:
syntactic_sugar: template at_mut[T](t: var Tensor[T]; args: varargs[untyped]): untyped
attachGC:
openmp: template attachGC(): untyped
axis:
accessors: iterator axis[T](t: Tensor[T]; axis: int): Tensor[T]
accessors: iterator axis[T](t: Tensor[T]; axis, offset, size: int): Tensor[T]
backprop:
autograd_common: proc backprop[TT](v: Variable[TT])
Backward:
autograd_common: type Backward
bc:
shapeshifting: template bc(t: (Tensor | SomeNumber); shape: Metadata): untyped
shapeshifting: template bc(t: (Tensor | SomeNumber); shape: varargs[int]): untyped
blasMM_C_eq_aAB_p_bC:
p_operator_blas_l2l3: proc blasMM_C_eq_aAB_p_bC[T: SomeFloat | Complex[float32] | Complex[float64]](alpha: T; a, b: Tensor[T]; beta: T; c: var Tensor[T])
blasMV_y_eq_aAx_p_by:
p_operator_blas_l2l3: proc blasMV_y_eq_aAx_p_by[T: SomeFloat | Complex[float32] | Complex[float64]](alpha: T; a, x: Tensor[T]; beta: T; y: var Tensor[T])
box:
distributions: proc box(x: float): float
distributions: template box[T](t`gensym0: Tensor[T]): Tensor[float]
boxKernel:
kde: proc boxKernel(x`gensym1, x_i`gensym1, bw`gensym1: float): float
broadcast:
shapeshifting: proc broadcast[T: SomeNumber](val: T; shape: Metadata): Tensor[T]
shapeshifting: proc broadcast[T: SomeNumber](val: T; shape: varargs[int]): Tensor[T]
shapeshifting: proc broadcast[T](t: Tensor[T]; shape: Metadata): Tensor[T]
shapeshifting: proc broadcast[T](t: Tensor[T]; shape: varargs[int]): Tensor[T]
shapeshifting_cuda: proc broadcast(t: CudaTensor; shape: Metadata): CudaTensor
shapeshifting_cuda: proc broadcast(t: CudaTensor; shape: varargs[int]): CudaTensor
broadcast2:
shapeshifting: proc broadcast2[T](a, b: Tensor[T]): tuple[a, b: Tensor[T]]
shapeshifting_cuda: proc broadcast2[T](a, b: CudaTensor[T]): tuple[a, b: CudaTensor[T]]
shapeshifting_opencl: proc broadcast2[T](a, b: ClTensor[T]): tuple[a, b: ClTensor[T]]
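A minimal sketch of broadcasting: `broadcast` expands a tensor (or scalar) to a target shape without copying, and `broadcast2` expands two tensors to their common shape (values illustrative):

    import arraymancer

    let row = [[1.0, 2.0, 3.0]].toTensor  # shape [1, 3]
    let col = [[10.0], [20.0]].toTensor   # shape [2, 1]
    let (a, b) = broadcast2(row, col)     # both views now have shape [2, 3]
    echo a +. b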
broadcast2Impl:
p_shapeshifting: proc broadcast2Impl[T](a, b: AnyTensor[T]; result: var tuple[a, b: AnyTensor[T]])
broadcastImpl:
p_shapeshifting: proc broadcastImpl(t: var AnyTensor; shape: varargs[int] | Metadata)
cbrt:
ufunc: proc cbrt[T](t`gensym4: Tensor[T]): Tensor[T]
ceil:
ufunc: proc ceil[T](t`gensym26: Tensor[T]): Tensor[T]
check_axis_index:
p_checks: proc check_axis_index(t: AnyTensor; axis, index, len: Natural)
check_concat:
p_checks: proc check_concat(t1, t2: Tensor; axis: int)
check_contiguous_index:
p_checks: proc check_contiguous_index(t: Tensor; idx: int)
check_ctx:
autograd_common: proc check_ctx(a, b: Variable)
check_dot_prod:
p_checks: proc check_dot_prod(a, b: AnyTensor)
check_elementwise:
p_checks: proc check_elementwise[T, U](a: ClTensor[T]; b: ClTensor[U])
p_checks: proc check_elementwise[T, U](a: CudaTensor[T]; b: CudaTensor[U])
p_checks: proc check_elementwise[T, U](a: Tensor[T]; b: Tensor[U])
check_index:
p_checks: proc check_index(t: Tensor; idx: varargs[int])
check_input_target:
p_nnp_checks: proc check_input_target[T](input, target: Tensor[T])
check_matmat:
p_checks: proc check_matmat(a, b: AnyTensor)
check_matvec:
p_checks: proc check_matvec(a, b: AnyTensor)
check_nested_elements:
p_checks: proc check_nested_elements(shape: Metadata; len: int)
check_reshape:
p_checks: proc check_reshape(t: AnyTensor; new_shape: Metadata)
p_checks: proc check_reshape(t: AnyTensor; new_shape: varargs[int])
check_shape:
p_checks: proc check_shape(a: Tensor; b: Tensor | openArray; relaxed_rank1_check: static[bool] = false)
check_size:
p_checks: proc check_size[T, U](a: Tensor[T]; b: Tensor[U])
check_squeezeAxis:
p_checks: proc check_squeezeAxis(t: AnyTensor; axis: int)
check_start_end:
p_checks: proc check_start_end(a, b: int; dim_size: int)
check_steps:
p_checks: proc check_steps(a, b, step: int)
check_unsqueezeAxis:
p_checks: proc check_unsqueezeAxis(t: AnyTensor; axis: int)
chunk:
gates_shapeshifting_concat_split: proc chunk[TT](v: Variable[TT]; nb_chunks: Positive; axis: Natural): seq[Variable[TT]]
shapeshifting: proc chunk[T](t: Tensor[T]; nb_chunks: Positive; axis: Natural): seq[Tensor[T]]
ChunkSplitGate:
gates_shapeshifting_concat_split: type ChunkSplitGate
circulant:
special_matrices: proc circulant[T](t: Tensor[T]; axis = -1; step = 1): Tensor[T]
clamp:
math_functions: proc clamp[T](t: Tensor[T]; min, max: T): Tensor[T]
classify:
math_functions: proc classify[T: SomeFloat](t: Tensor[T]): Tensor[FloatClass]
clContext0:
opencl_global_state: let clContext0
clDevice0:
opencl_global_state: let clDevice0
clMalloc:
opencl_backend: proc clMalloc[T](size: Natural): ptr UncheckedArray[T]
clone:
init_copy_cpu: proc clone[T](t: Tensor[T]; layout: OrderType = rowMajor): Tensor[T]
init_copy_cuda: proc clone[T](t: CudaTensor[T]): CudaTensor[T]
kdtree: proc clone[T](kd: KDTree[T]): KDTree[T]
kdtree: proc clone[T](n: Node[T]): Node[T]
clQueue0:
opencl_global_state: let clQueue0
ClStorage:
data_structure: object ClStorage
ClTensor:
data_structure: object ClTensor
col2im:
conv: proc col2im[T](input: Tensor[T]; channels, height, width: int; kernel_size: Size2D; padding: Size2D = (0, 0); stride: Size2D = (1, 1)): Tensor[T]
complex:
complex: proc complex[T: SomeNumber](re: Tensor[T]): auto
complex: proc complex[T: SomeNumber](re: Tensor[T]; im: Tensor[T]): auto
Complex32:
p_complex: converter Complex32[T: SomeNumber](x: T): Complex[float32]
Complex64:
p_complex: converter Complex64[T: SomeNumber](x: T): Complex[float64]
complex_imag:
complex: proc complex_imag[T: SomeNumber](im: Tensor[T]): auto
concat:
dynamic_stack_arrays: proc concat[T](dsas: varargs[DynamicStackArray[T]]): DynamicStackArray[T]
shapeshifting: proc concat[T](t_list: varargs[Tensor[T]]; axis: int): Tensor[T]
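A minimal sketch of `concat` along an existing axis (values illustrative):

    import arraymancer

    let a = [[1, 2]].toTensor
    let b = [[3, 4]].toTensor
    echo concat(a, b, axis = 0)  # shape [2, 2]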
concatMap:
functional: proc concatMap[T](s: seq[T]; f: proc (ss: T): string): string
conjugate:
complex: proc conjugate[T: Complex32 | Complex64](t: Tensor[T]): Tensor[T]
contains:
algorithms: proc contains[T](t: Tensor[T]; item: T): bool
Context:
autograd_common: type Context
contiguousImpl:
p_shapeshifting: proc contiguousImpl[T](t: Tensor[T]; layout: OrderType; result: var Tensor[T])
Conv2D:
conv2D: object Conv2D
conv2d:
conv2D: proc conv2d[TT](input, weight: Variable[TT]; bias: Variable[TT] = nil; padding: Size2D = (0, 0); stride: Size2D = (1, 1)): Variable[TT]
nnp_conv2d_cudnn: proc conv2d[T: SomeFloat](input, kernel, bias: CudaTensor[T]; padding: SizeHW = [0, 0]; strides, dilation: SizeHW = [1, 1]): CudaTensor[T]
nnp_convolution: proc conv2d[T](input, weight, bias: Tensor[T]; padding: Size2D = (0, 0); stride: Size2D = (1, 1); algorithm = Conv2DAlgorithm.Im2ColGEMM): Tensor[T]
Conv2DAlgorithm:
nnp_convolution: enum Conv2DAlgorithm
conv2d_backward:
nnp_conv2d_cudnn: proc conv2d_backward[T: SomeFloat](input, kernel, bias: CudaTensor[T]; padding: SizeHW = [0, 0]; strides, dilation: SizeHW = [1, 1]; grad_output: CudaTensor[T]; grad_input, grad_kernel, grad_bias: var CudaTensor[T])
nnp_convolution: proc conv2d_backward[T](input, weight, bias: Tensor[T]; padding: Size2D; stride: Size2D; grad_output: Tensor[T]; grad_input, grad_weight, grad_bias: var Tensor[T]; algorithm = Conv2DAlgorithm.Im2ColGEMM)
Conv2DGate:
conv2D: type Conv2DGate
ConvAlgoSpace:
cudnn_conv_interface: object ConvAlgoSpace
conv_bwd_data_algo_workspace:
cudnn_conv_interface: proc conv_bwd_data_algo_workspace[T: SomeFloat](srcTensorDesc: cudnnTensorDescriptor_t; gradOutputTensorDesc: cudnnTensorDescriptor_t; kernelDesc: cudnnFilterDescriptor_t; convDesc: cudnnConvolutionDescriptor_t; gradInputTensorDesc: cudnnTensorDescriptor_t): ConvAlgoSpace[T, cudnnConvolutionBwdDataAlgo_t]
conv_bwd_kernel_algo_workspace:
cudnn_conv_interface: proc conv_bwd_kernel_algo_workspace[T: SomeFloat](srcTensorDesc: cudnnTensorDescriptor_t; gradOutputTensorDesc: cudnnTensorDescriptor_t; gradKernelDesc: cudnnFilterDescriptor_t; convDesc: cudnnConvolutionDescriptor_t): ConvAlgoSpace[T, cudnnConvolutionBwdFilterAlgo_t]
ConvConfig:
cudnn_conv_interface: object ConvConfig
convolve:
math_functions: proc convolve[T: SomeNumber | Complex32 | Complex64](t1, t2: Tensor[T]; mode = ConvolveMode.full): Tensor[T]
ConvolveMode:
math_functions: enum ConvolveMode
convOutDims:
cudnn_conv_interface: proc convOutDims(input, kernel: CudaTensor; padding, strides, dilation: SizeHW): Metadata
copyFrom:
dynamic_stack_arrays: proc copyFrom(a: var DynamicStackArray; s: DynamicStackArray)
dynamic_stack_arrays: proc copyFrom(a: var DynamicStackArray; s: varargs[int])
copy_from:
filling_data: proc copy_from[T](dst: var Tensor[T]; src: Tensor[T])
copyFrom:
initialization: proc copyFrom[T](dst: var Tensor[T]; src: Tensor[T])
copyFromRaw:
initialization: proc copyFromRaw[T](dst: var Tensor[T]; buffer: ptr T; len: Natural)
copySign:
math_functions: proc copySign[T: SomeFloat](t1, t2: Tensor[T]): Tensor[T]
correlate:
math_functions: proc correlate[T: Complex32 | Complex64](t1, t2: Tensor[T]; mode = CorrelateMode.valid): Tensor[T]
math_functions: proc correlate[T: SomeNumber](t1, t2: Tensor[T]; mode = CorrelateMode.valid): Tensor[T]
CorrelateMode:
math_functions: type CorrelateMode
cos:
ufunc: proc cos[T](t`gensym15: Tensor[T]): Tensor[T]
cosh:
ufunc: proc cosh[T](t`gensym16: Tensor[T]): Tensor[T]
covariance_matrix:
stats: proc covariance_matrix[T: SomeFloat](x, y: Tensor[T]): Tensor[T]
cpu:
init_cuda: proc cpu[T: SomeFloat](t: CudaTensor[T]): Tensor[T]
init_opencl: proc cpu[T: SomeFloat](t: ClTensor[T]): Tensor[T]
CPUFeatureX86:
gemm_tiling: enum CPUFeatureX86
CpuStorage:
datatypes: type CpuStorage
cpuStorageFromBuffer:
datatypes: proc cpuStorageFromBuffer[T: KnownSupportsCopyMem](storage: var CpuStorage[T]; rawBuffer: pointer; size: int)
create_cache_dirs_if_necessary:
util: proc create_cache_dirs_if_necessary()
cswap:
complex: proc cswap[T: Complex32 | Complex64](t: Tensor[T]): Tensor[T]
cublas_axpy:
cublas: proc cublas_axpy[T: SomeFloat](n: int; alpha: T; x: ptr T; incx: int; y: ptr T; incy: int)
cublas_copy:
cublas: proc cublas_copy[T: SomeFloat](n: int; x: ptr T; incx: int; y: ptr T; incy: int)
cublas_dot:
cublas: proc cublas_dot[T: SomeFloat](n: int; x: ptr T; incx: int; y: ptr T; incy: int; output: ptr T)
cublas_geam:
cublas: proc cublas_geam[T: SomeFloat](transa, transb: cublasOperation_t; m, n: int; alpha: T; A: ptr T; lda: int; beta: T; B: ptr T; ldb: int; C: ptr T; ldc: int)
cublas_gemm:
cublas: proc cublas_gemm[T: SomeFloat](transa, transb: cublasOperation_t; m, n, k: int; alpha: T; A: ptr T; lda: int; B: ptr T; ldb: int; beta: T; C: ptr T; ldc: int)
cublas_gemmStridedBatched:
cublas: proc cublas_gemmStridedBatched[T: SomeFloat](transa, transb: cublasOperation_t; m, n, k: int; alpha: T; A: ptr T; lda: int; strideA: int; B: ptr T; ldb: int; strideB: int; beta: T; C: ptr T; ldc: int; strideC: int; batchCount: int)
cublas_gemv:
cublas: proc cublas_gemv[T: SomeFloat](trans: cublasOperation_t; m, n: int; alpha: T; A: ptr T; lda: int; x: ptr T; incx: int; beta: T; y: ptr T; incy: int)
cublasHandle0:
cuda_global_state: let cublasHandle0
cublas_scal:
cublas: proc cublas_scal[T: SomeFloat](n: int; alpha: T; x: ptr T; incx: int)
cuda:
init_cuda: proc cuda[T: SomeFloat](t: Tensor[T]): CudaTensor[T]
cuda_assign_call:
p_kernels_interface_cuda: template cuda_assign_call[T: SomeFloat](kernel_name: untyped; destination: var CudaTensor[T]; source: CudaTensor[T]): untyped
cuda_assign_glue:
p_kernels_interface_cuda: template cuda_assign_glue(kernel_name, op_name: string; binding_name: untyped): untyped
cuda_assignscal_call:
p_kernels_interface_cuda: template cuda_assignscal_call[T: SomeFloat](kernel_name: untyped; destination: var CudaTensor[T]; val: T): untyped
cuda_assignscal_glue:
p_kernels_interface_cuda: template cuda_assignscal_glue(kernel_name, op_name: string; binding_name: untyped): untyped
cuda_binary_call:
p_kernels_interface_cuda: template cuda_binary_call[T: SomeFloat](kernel_name: untyped; destination: var CudaTensor[T]; a, b: CudaTensor[T]): untyped
cuda_binary_glue:
p_kernels_interface_cuda: template cuda_binary_glue(kernel_name, op_name: string; binding_name: untyped): untyped
CUDA_HOF_BPG:
global_config: const CUDA_HOF_BPG
CUDA_HOF_TPB:
global_config: const CUDA_HOF_TPB
cuda_lscal_call:
p_kernels_interface_cuda: template cuda_lscal_call[T: SomeFloat](kernel_name: untyped; destination: var CudaTensor[T]; alpha: T; source: CudaTensor[T]): untyped
cuda_lscal_glue:
p_kernels_interface_cuda: template cuda_lscal_glue(kernel_name, op_name: string; binding_name: untyped): untyped
cudaMalloc:
cuda: proc cudaMalloc[T](size: Natural): ptr T
cuda_rscal_call:
p_kernels_interface_cuda: template cuda_rscal_call[T: SomeFloat](kernel_name: untyped; destination: var CudaTensor[T]; source: CudaTensor[T]; beta: T): untyped
cuda_rscal_glue:
p_kernels_interface_cuda: template cuda_rscal_glue(kernel_name, op_name: string; binding_name: untyped): untyped
CudaStorage:
data_structure: object CudaStorage
cudaStream0:
cuda_global_state: let cudaStream0
CudaTensor:
data_structure: object CudaTensor
cudnnHandle0:
cudnn: let cudnnHandle0
cumprod:
aggregate: proc cumprod[T](arg: Tensor[T]; axis: int = 0): Tensor[T]
cumsum:
aggregate: proc cumsum[T](arg: Tensor[T]; axis: int = 0): Tensor[T]
CustomMetric:
distances: object CustomMetric
cvtmask64_u64:
simd: proc cvtmask64_u64(a: mmask64): uint64
data=:
data_structure: proc data=[T](t: var Tensor[T]; s: seq[T])
dataArray:
data_structure: proc dataArray[T: KnownSupportsCopyMem](t: Tensor[T]): ptr UncheckedArray[T]
data_structure: proc dataArray[T: not KnownSupportsCopyMem](t: Tensor[T]): ptr UncheckedArray[T]
dbscan:
dbscan: proc dbscan[T: SomeFloat](X: Tensor[T]; eps: float; minSamples: int; metric: typedesc[AnyMetric] = Euclidean; p = 2.0): seq[int]
deallocCl:
opencl_backend: proc deallocCl[T](p: ref [ptr UncheckedArray[T]])
deallocCuda:
cuda: proc deallocCuda[T](p: ref [ptr T])
deepCopy:
initialization: proc deepCopy[T](dst: var Tensor[T]; src: Tensor[T])
degToRad:
ufunc: proc degToRad[T](t`gensym29: Tensor[T]): Tensor[T]
delete:
dynamic_stack_arrays: proc delete(a: var DynamicStackArray; index: int)
desugar:
p_accessors_macros_desugar: macro desugar(args: untyped): void
detachGC:
openmp: template detachGC(): untyped
diag:
special_matrices: proc diag[T](d: Tensor[T]; k = 0; anti = false): Tensor[T]
diagonal:
special_matrices: proc diagonal[T](a: Tensor[T]; k = 0; anti = false): Tensor[T]
diff_discrete:
aggregate: proc diff_discrete[T](arg: Tensor[T]; n = 1; axis: int = -1): Tensor[T]
disp2d:
p_display: proc disp2d[T](t: Tensor[T]; alignBy = 6; alignSpacing = 3; precision = -1): string
distance:
distances: proc distance(metric: typedesc[Euclidean]; v, w: Tensor[float]; squared: static bool = false): float
distances: proc distance(metric: typedesc[Jaccard]; v, w: Tensor[float]): float
distances: proc distance(metric: typedesc[Manhattan]; v, w: Tensor[float]): float
distances: proc distance(metric: typedesc[Minkowski]; v, w: Tensor[float]; p = 2.0; squared: static bool = false): float
distanceMatrix:
distances: proc distanceMatrix(metric: typedesc[AnyMetric]; x, y: Tensor[float]; p = 2.0; squared: static bool = false): Tensor[float]
dot:
operators_blas_l1: proc dot[T: SomeFloat](a, b: Tensor[T]): T
operators_blas_l1: proc dot[T: SomeInteger](a, b: Tensor[T]): T
operators_blas_l1_cuda: proc dot[T: SomeFloat](a, b: CudaTensor[T]): T
operators_blas_l1_opencl: proc dot(a`gensym0, b`gensym0: ClTensor[float32]): float32
operators_blas_l1_opencl: proc dot(a`gensym15, b`gensym15: ClTensor[float64]): float64
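A minimal sketch of `dot`, the 1-D inner product:

    import arraymancer

    let u = [1.0, 2.0, 3.0].toTensor
    let v = [4.0, 5.0, 6.0].toTensor
    echo dot(u, v)  # 32.0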
dualStridedIteration:
p_accessors: template dualStridedIteration(strider: IterKind; t1, t2, iter_offset, iter_size: typed): untyped
dualStridedIterationYield:
p_accessors: template dualStridedIterationYield(strider: IterKind; t1data, t2data, i, t1_iter_pos, t2_iter_pos: typed)
DynamicStackArray:
dynamic_stack_arrays: object DynamicStackArray
einsum:
einsum: macro einsum(tensors: varargs[typed]; stmt: untyped): untyped
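A sketch of the `einsum` macro's block form as documented: the block names the result tensor and states the index contraction (here a matrix product, summing over `j`; values illustrative):

    import arraymancer

    let a = [[1.0, 2.0], [3.0, 4.0]].toTensor
    let b = [[5.0, 6.0], [7.0, 8.0]].toTensor
    let c = einsum(a, b):
      c[i, k] = a[i, j] * b[j, k]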
Ellipsis:
accessors_macros_syntax: object Ellipsis
elwise_div:
math_functions: proc elwise_div[T: SomeFloat](a, b: Tensor[T]): Tensor[T]
math_functions: proc elwise_div[T: SomeInteger](a, b: Tensor[T]): Tensor[T]
elwise_mul:
math_functions: proc elwise_mul[T](a, b: Tensor[T]): Tensor[T]
Embedding:
embedding: object Embedding
embedding:
embedding: proc embedding[TT; Idx: VocabIdx](input_vocab_id: Tensor[Idx]; weight: Variable[TT]; padding_idx: Idx = -1; scale_grad_by_freq: static[bool] = false): Variable[TT]
nnp_embedding: proc embedding[T; Idx: byte or char or SomeInteger](vocab_id: Tensor[Idx]; weight: Tensor[T]): Tensor[T]
embedding_backward:
nnp_embedding: proc embedding_backward[T; Idx: byte or char or SomeInteger](dWeight: var Tensor[T]; vocab_id: Tensor[Idx]; dOutput: Tensor[T]; padding_idx: Idx; scale_grad_by_freq: static[bool] = false)
EmbeddingGate:
embedding: type EmbeddingGate
enumerate:
accessors: iterator enumerate[T](t: Tensor[T]): (int, T)
accessors: iterator enumerate[T](t: Tensor[T]; offset, size: int): (int, T)
enumerateAxis:
accessors: iterator enumerateAxis[T](t: Tensor[T]; axis: int): (int, Tensor[T])
accessors: iterator enumerateAxis[T](t: Tensor[T]; axis, offset, size: int): (int, Tensor[T])
enumerateZip:
accessors: iterator enumerateZip[T, U](t1: Tensor[T]; t2: Tensor[U]): (int, T, U)
accessors: iterator enumerateZip[T, U](t1: Tensor[T]; t2: Tensor[U]; offset, size: int): (int, T, U)
accessors: iterator enumerateZip[T, U, V](t1: Tensor[T]; t2: Tensor[U]; t3: Tensor[V]): (int, T, U, V)
accessors: iterator enumerateZip[T, U, V](t1: Tensor[T]; t2: Tensor[U]; t3: Tensor[V]; offset, size: int): (int, T, U, V)
epanechnikov:
distributions: proc epanechnikov(x: float): float
distributions: template epanechnikov[T](t`gensym3: Tensor[T]): Tensor[float]
epanechnikovKernel:
kde: proc epanechnikovKernel(x`gensym4, x_i`gensym4, bw`gensym4: float): float
erf:
ufunc: proc erf[T](t`gensym21: Tensor[T]): Tensor[T]
erfc:
ufunc: proc erfc[T](t`gensym22: Tensor[T]): Tensor[T]
Euclidean:
distances: object Euclidean
exch_dim:
p_shapeshifting: proc exch_dim[T](t: Tensor[T]; dim1, dim2: int): Tensor[T]
exp:
ufunc: proc exp[T](t`gensym8: Tensor[T]): Tensor[T]
expm1:
math_ops_fusion: proc expm1(x: float32): float32
math_ops_fusion: proc expm1(x: float64): float64
export_tensor:
exporting: proc export_tensor[T](t: Tensor[T]): tuple[shape: seq[int], strides: seq[int], data: seq[T]]
extract_cpu_simd:
gemm_tiling: macro extract_cpu_simd(ukernel: static MicroKernel): untyped
extract_c_unit_stride:
gemm_tiling: macro extract_c_unit_stride(ukernel: static MicroKernel): untyped
extract_mr:
gemm_tiling: macro extract_mr(ukernel: static MicroKernel): untyped
extract_nb_scalars:
gemm_tiling: macro extract_nb_scalars(ukernel: static MicroKernel): untyped
extract_nb_vecs_nr:
gemm_tiling: macro extract_nb_vecs_nr(ukernel: static MicroKernel): untyped
extract_nr:
gemm_tiling: macro extract_nr(ukernel: static MicroKernel): untyped
extract_pt:
gemm_tiling: macro extract_pt(ukernel: static MicroKernel): untyped
eye:
special_matrices: proc eye[T](shape: varargs[int]): Tensor[T]
fac:
ufunc: proc fac[T](t`gensym1: Tensor[T]): Tensor[T]
fallbackMM_C_eq_aAB_p_bC:
p_operator_blas_l2l3: proc fallbackMM_C_eq_aAB_p_bC[T: SomeInteger](alpha: T; a, b: Tensor[T]; beta: T; c: var Tensor[T])
FancyIndex:
p_accessors_macros_read: FancySelectorKind.FancyIndex
FancyMaskAxis:
p_accessors_macros_read: FancySelectorKind.FancyMaskAxis
FancyMaskFull:
p_accessors_macros_read: FancySelectorKind.FancyMaskFull
FancyNone:
p_accessors_macros_read: FancySelectorKind.FancyNone
FancySelectorKind:
p_accessors_macros_read: enum FancySelectorKind
FancyUnknownAxis:
p_accessors_macros_read: FancySelectorKind.FancyUnknownAxis
FancyUnknownFull:
p_accessors_macros_read: FancySelectorKind.FancyUnknownFull
flatIter:
nested_containers: iterator flatIter[T](s: openArray[T]): auto
nested_containers: iterator flatIter(s: string): string
Flatten:
flatten: object Flatten
flatten:
gates_shapeshifting_views: proc flatten[TT](a: Variable[TT]): Variable[TT]
shapeshifting: proc flatten(t: Tensor): Tensor
floor:
ufunc: proc floor[T](t`gensym25: Tensor[T]): Tensor[T]
floorMod:
math_functions: proc floorMod[T: SomeNumber](val: T; t: Tensor[T]): Tensor[T]
math_functions: proc floorMod[T: SomeNumber](t: Tensor[T]; val: T): Tensor[T]
math_functions: proc floorMod[T: SomeNumber](t1, t2: Tensor[T]): Tensor[T]
fold:
higher_order_foldreduce: proc fold[U, T](arg: Tensor[U]; start_val: T; f: (T, U) -> T): T
higher_order_foldreduce: proc fold[U, T](arg: Tensor[U]; start_val: Tensor[T]; f: (Tensor[T], Tensor[U]) -> Tensor[T]; axis: int): Tensor[T]
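A minimal sketch of `fold` with an explicit accumulator (here a plain sum; values illustrative):

    import arraymancer

    let t = [1, 2, 3, 4].toTensor
    let total = t.fold(0, proc (acc, x: int): int = acc + x)
    echo total  # 10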
fold_axis_inline:
higher_order_foldreduce: template fold_axis_inline[T](arg: Tensor[T]; accumType: typedesc; fold_axis: int; op_initial, op_middle, op_final: untyped): untyped
fold_inline:
higher_order_foldreduce: template fold_inline[T](arg: Tensor[T]; op_initial, op_middle, op_final: untyped): untyped
forEach:
foreach: macro forEach(args: varargs[untyped]): untyped
forEachContiguous:
foreach: macro forEachContiguous(args: varargs[untyped]): untyped
forEachContiguousSerial:
foreach: macro forEachContiguousSerial(args: varargs[untyped]): untyped
forEachSerial:
foreach: macro forEachSerial(args: varargs[untyped]): untyped
forEachStaged:
foreach_staged: macro forEachStaged(args: varargs[untyped]): untyped
forEachStrided:
foreach: macro forEachStrided(args: varargs[untyped]): untyped
forEachStridedSerial:
foreach: macro forEachStridedSerial(args: varargs[untyped]): untyped
forward:
conv2D: proc forward[T](self: Conv2D[T]; input: Variable[Tensor[T]]): Variable[Tensor[T]]
embedding: proc forward[T; Idx: VocabIdx](self: Embedding[T]; input: Tensor[Idx]): Variable[AnyTensor[T]]
flatten: proc forward[T](self: Flatten[T]; input: Variable[Tensor[T]]): Variable[Tensor[T]]
gcn: proc forward[T](self: GCNLayer[T]; input, adjacency: Variable[Tensor[T]]): Variable[Tensor[T]]
gru: proc forward[T](self: GRULayer[T]; input, hidden0: Variable): tuple[output, hiddenN: Variable]
linear: proc forward[T](self: Linear[T]; input: Variable[Tensor[T]]): Variable[Tensor[T]]
maxpool2D: proc forward[T](self: MaxPool2D[T]; input: Variable[Tensor[T]]): Variable[Tensor[T]]
frobenius_inner_prod:
lapack: proc frobenius_inner_prod[T](a, b: Tensor[T]): T
fromBuffer:
initialization: proc fromBuffer[T](rawBuffer: pointer; shape: varargs[int]): Tensor[T]
initialization: proc fromBuffer[T](rawBuffer: pointer; shape: varargs[int]; layout: static OrderType): Tensor[T]
initialization: proc fromBuffer[T](rawBuffer: ptr UncheckedArray[T]; shape: varargs[int]): Tensor[T]
initialization: proc fromBuffer[T](rawBuffer: ptr UncheckedArray[T]; shape: varargs[int]; layout: static OrderType): Tensor[T]
full:
math_functions: ConvolveMode.full
gamma:
ufunc: proc gamma[T](t`gensym23: Tensor[T]): Tensor[T]
Gate:
autograd_common: type Gate
gauss:
distributions: proc gauss[T](x, mean, sigma: T; norm = false): float
distributions: proc gauss[T](x: Tensor[T]; mean, sigma: T; norm = false): Tensor[float]
gaussKernel:
kde: proc gaussKernel(x, x_i, bw: float): float
gcn:
gcn: proc gcn[TT](input, adjacency, weight: Variable[TT]; bias: Variable[TT] = nil): Variable[TT]
GCNGate:
gcn: type GCNGate
GCNLayer:
gcn: object GCNLayer
gebb_ukernel:
gemm_ukernel_dispatch: proc gebb_ukernel[T; ukernel: static MicroKernel](kc: int; alpha: T; packedA, packedB: ptr UncheckedArray[T]; beta: T; vC: MatrixView[T])
gebb_ukernel_edge:
gemm_ukernel_dispatch: proc gebb_ukernel_edge[T; ukernel: static MicroKernel](mr, nr, kc: int; alpha: T; packedA, packedB: ptr UncheckedArray[T]; beta: T; vC: MatrixView[T])
gebb_ukernel_edge_epilogue:
gemm_ukernel_generic: proc gebb_ukernel_edge_epilogue[MR, NR: static int; T](alpha: T; AB: ptr array[MR, array[NR, T]]; beta: T; vC: MatrixView[T]; mr, nr: int)
gebb_ukernel_edge_fallback:
gemm_ukernel_generic: proc gebb_ukernel_edge_fallback[T; ukernel: static MicroKernel](mr, nr, kc: int; alpha: T; packedA, packedB: ptr UncheckedArray[T]; beta: T; vC: MatrixView[T])
gebb_ukernel_edge_float32_x86_AVX:
gemm_ukernel_avx: proc gebb_ukernel_edge_float32_x86_AVX[ukernel: static MicroKernel](mr`gensym3, nr`gensym3, kc`gensym3: int; alpha`gensym3: float32; packedA`gensym3, packedB`gensym3: ptr UncheckedArray[float32]; beta`gensym3: float32; vC`gensym3: MatrixView[float32])
gebb_ukernel_edge_float32_x86_AVX512:
gemm_ukernel_avx512: proc gebb_ukernel_edge_float32_x86_AVX512[ukernel: static MicroKernel](mr`gensym3, nr`gensym3, kc`gensym3: int; alpha`gensym3: float32; packedA`gensym3, packedB`gensym3: ptr UncheckedArray[float32]; beta`gensym3: float32; vC`gensym3: MatrixView[float32])
gebb_ukernel_edge_float32_x86_AVX_FMA:
gemm_ukernel_avx_fma: proc gebb_ukernel_edge_float32_x86_AVX_FMA[ukernel: static MicroKernel](mr`gensym3, nr`gensym3, kc`gensym3: int; alpha`gensym3: float32; packedA`gensym3, packedB`gensym3: ptr UncheckedArray[float32]; beta`gensym3: float32; vC`gensym3: MatrixView[float32])
gebb_ukernel_edge_float32_x86_SSE:
gemm_ukernel_sse: proc gebb_ukernel_edge_float32_x86_SSE[ukernel: static MicroKernel](mr`gensym3, nr`gensym3, kc`gensym3: int; alpha`gensym3: float32; packedA`gensym3, packedB`gensym3: ptr UncheckedArray[float32]; beta`gensym3: float32; vC`gensym3: MatrixView[float32])
gebb_ukernel_edge_float64_x86_AVX:
gemm_ukernel_avx: proc gebb_ukernel_edge_float64_x86_AVX[ukernel: static MicroKernel](mr`gensym7, nr`gensym7, kc`gensym7: int; alpha`gensym7: float64; packedA`gensym7, packedB`gensym7: ptr UncheckedArray[float64]; beta`gensym7: float64; vC`gensym7: MatrixView[float64])
gebb_ukernel_edge_float64_x86_AVX512:
gemm_ukernel_avx512: proc gebb_ukernel_edge_float64_x86_AVX512[ukernel: static MicroKernel](mr`gensym7, nr`gensym7, kc`gensym7: int; alpha`gensym7: float64; packedA`gensym7, packedB`gensym7: ptr UncheckedArray[float64]; beta`gensym7: float64; vC`gensym7: MatrixView[float64])
gebb_ukernel_edge_float64_x86_AVX_FMA:
gemm_ukernel_avx_fma: proc gebb_ukernel_edge_float64_x86_AVX_FMA[ukernel: static MicroKernel](mr`gensym7, nr`gensym7, kc`gensym7: int; alpha`gensym7: float64; packedA`gensym7, packedB`gensym7: ptr UncheckedArray[float64]; beta`gensym7: float64; vC`gensym7: MatrixView[float64])
gebb_ukernel_edge_float64_x86_SSE2:
gemm_ukernel_sse2: proc gebb_ukernel_edge_float64_x86_SSE2[ukernel: static MicroKernel](mr`gensym3, nr`gensym3, kc`gensym3: int; alpha`gensym3: float64; packedA`gensym3, packedB`gensym3: ptr UncheckedArray[float64]; beta`gensym3: float64; vC`gensym3: MatrixView[float64])
gebb_ukernel_edge_int32_x86_AVX2:
gemm_ukernel_avx2: proc gebb_ukernel_edge_int32_x86_AVX2[ukernel: static MicroKernel](mr`gensym3, nr`gensym3, kc`gensym3: int; alpha`gensym3: int32; packedA`gensym3, packedB`gensym3: ptr UncheckedArray[int32]; beta`gensym3: int32; vC`gensym3: MatrixView[int32])
gebb_ukernel_edge_int32_x86_AVX512:
gemm_ukernel_avx512: proc gebb_ukernel_edge_int32_x86_AVX512[ukernel: static MicroKernel](mr`gensym11, nr`gensym11, kc`gensym11: int; alpha`gensym11: int32; packedA`gensym11, packedB`gensym11: ptr UncheckedArray[int32]; beta`gensym11: int32; vC`gensym11: MatrixView[int32])
gebb_ukernel_edge_int32_x86_SSE2:
gemm_ukernel_sse2: proc gebb_ukernel_edge_int32_x86_SSE2[ukernel: static MicroKernel](mr`gensym7, nr`gensym7, kc`gensym7: int; alpha`gensym7: int32; packedA`gensym7, packedB`gensym7: ptr UncheckedArray[int32]; beta`gensym7: int32; vC`gensym7: MatrixView[int32])
gebb_ukernel_edge_int32_x86_SSE4_1:
gemm_ukernel_sse4_1: proc gebb_ukernel_edge_int32_x86_SSE4_1[ukernel: static MicroKernel](mr`gensym3, nr`gensym3, kc`gensym3: int; alpha`gensym3: int32; packedA`gensym3, packedB`gensym3: ptr UncheckedArray[int32]; beta`gensym3: int32; vC`gensym3: MatrixView[int32])
gebb_ukernel_edge_int64_x86_AVX512:
gemm_ukernel_avx512: proc gebb_ukernel_edge_int64_x86_AVX512[ukernel: static MicroKernel](mr`gensym15, nr`gensym15, kc`gensym15: int; alpha`gensym15: int64; packedA`gensym15, packedB`gensym15: ptr UncheckedArray[int64]; beta`gensym15: int64; vC`gensym15: MatrixView[int64])
gebb_ukernel_edge_int64_x86_SSE2:
gemm_ukernel_sse2: proc gebb_ukernel_edge_int64_x86_SSE2[ukernel: static MicroKernel](mr`gensym11, nr`gensym11, kc`gensym11: int; alpha`gensym11: int64; packedA`gensym11, packedB`gensym11: ptr UncheckedArray[int64]; beta`gensym11: int64; vC`gensym11: MatrixView[int64])
gebb_ukernel_epilogue_fallback:
gemm_ukernel_generic: proc gebb_ukernel_epilogue_fallback[MR, NR: static int; T](alpha: T; AB: ptr array[MR, array[NR, T]]; beta: T; vC: MatrixView[T])
gebb_ukernel_fallback:
gemm_ukernel_generic: proc gebb_ukernel_fallback[T; ukernel: static MicroKernel](kc: int; alpha: T; packedA, packedB: ptr UncheckedArray[T]; beta: T; vC: MatrixView[T])
gebb_ukernel_float32_x86_AVX:
gemm_ukernel_avx: proc gebb_ukernel_float32_x86_AVX[ukernel: static MicroKernel](kc`gensym2: int; alpha`gensym2: float32; packedA`gensym2, packedB`gensym2: ptr UncheckedArray[float32]; beta`gensym2: float32; vC`gensym2: MatrixView[float32])
gebb_ukernel_float32_x86_AVX512:
gemm_ukernel_avx512: proc gebb_ukernel_float32_x86_AVX512[ukernel: static MicroKernel](kc`gensym2: int; alpha`gensym2: float32; packedA`gensym2, packedB`gensym2: ptr UncheckedArray[float32]; beta`gensym2: float32; vC`gensym2: MatrixView[float32])
gebb_ukernel_float32_x86_AVX_FMA:
gemm_ukernel_avx_fma: proc gebb_ukernel_float32_x86_AVX_FMA[ukernel: static MicroKernel](kc`gensym2: int; alpha`gensym2: float32; packedA`gensym2, packedB`gensym2: ptr UncheckedArray[float32]; beta`gensym2: float32; vC`gensym2: MatrixView[float32])
gebb_ukernel_float32_x86_SSE:
gemm_ukernel_sse: proc gebb_ukernel_float32_x86_SSE[ukernel: static MicroKernel](kc`gensym2: int; alpha`gensym2: float32; packedA`gensym2, packedB`gensym2: ptr UncheckedArray[float32]; beta`gensym2: float32; vC`gensym2: MatrixView[float32])
gebb_ukernel_float64_x86_AVX:
gemm_ukernel_avx: proc gebb_ukernel_float64_x86_AVX[ukernel: static MicroKernel](kc`gensym6: int; alpha`gensym6: float64; packedA`gensym6, packedB`gensym6: ptr UncheckedArray[float64]; beta`gensym6: float64; vC`gensym6: MatrixView[float64])
gebb_ukernel_float64_x86_AVX512:
gemm_ukernel_avx512: proc gebb_ukernel_float64_x86_AVX512[ukernel: static MicroKernel](kc`gensym6: int; alpha`gensym6: float64; packedA`gensym6, packedB`gensym6: ptr UncheckedArray[float64]; beta`gensym6: float64; vC`gensym6: MatrixView[float64])
gebb_ukernel_float64_x86_AVX_FMA:
gemm_ukernel_avx_fma: proc gebb_ukernel_float64_x86_AVX_FMA[ukernel: static MicroKernel](kc`gensym6: int; alpha`gensym6: float64; packedA`gensym6, packedB`gensym6: ptr UncheckedArray[float64]; beta`gensym6: float64; vC`gensym6: MatrixView[float64])
gebb_ukernel_float64_x86_SSE2:
gemm_ukernel_sse2: proc gebb_ukernel_float64_x86_SSE2[ukernel: static MicroKernel](kc`gensym2: int; alpha`gensym2: float64; packedA`gensym2, packedB`gensym2: ptr UncheckedArray[float64]; beta`gensym2: float64; vC`gensym2: MatrixView[float64])
gebb_ukernel_int32_x86_AVX2:
gemm_ukernel_avx2: proc gebb_ukernel_int32_x86_AVX2[ukernel: static MicroKernel](kc`gensym2: int; alpha`gensym2: int32; packedA`gensym2, packedB`gensym2: ptr UncheckedArray[int32]; beta`gensym2: int32; vC`gensym2: MatrixView[int32])
gebb_ukernel_int32_x86_AVX512:
gemm_ukernel_avx512: proc gebb_ukernel_int32_x86_AVX512[ukernel: static MicroKernel](kc`gensym10: int; alpha`gensym10: int32; packedA`gensym10, packedB`gensym10: ptr UncheckedArray[int32]; beta`gensym10: int32; vC`gensym10: MatrixView[int32])
gebb_ukernel_int32_x86_SSE2:
gemm_ukernel_sse2: proc gebb_ukernel_int32_x86_SSE2[ukernel: static MicroKernel](kc`gensym6: int; alpha`gensym6: int32; packedA`gensym6, packedB`gensym6: ptr UncheckedArray[int32]; beta`gensym6: int32; vC`gensym6: MatrixView[int32])
gebb_ukernel_int32_x86_SSE4_1:
gemm_ukernel_sse4_1: proc gebb_ukernel_int32_x86_SSE4_1[ukernel: static MicroKernel](kc`gensym2: int; alpha`gensym2: int32; packedA`gensym2, packedB`gensym2: ptr UncheckedArray[int32]; beta`gensym2: int32; vC`gensym2: MatrixView[int32])
gebb_ukernel_int64_x86_AVX512:
gemm_ukernel_avx512: proc gebb_ukernel_int64_x86_AVX512[ukernel: static MicroKernel](kc`gensym14: int; alpha`gensym14: int64; packedA`gensym14, packedB`gensym14: ptr UncheckedArray[int64]; beta`gensym14: int64; vC`gensym14: MatrixView[int64])
gebb_ukernel_int64_x86_SSE2:
gemm_ukernel_sse2: proc gebb_ukernel_int64_x86_SSE2[ukernel: static MicroKernel](kc`gensym10: int; alpha`gensym10: int64; packedA`gensym10, packedB`gensym10: ptr UncheckedArray[int64]; beta`gensym10: int64; vC`gensym10: MatrixView[int64])
gebp_mkernel:
gemm: proc gebp_mkernel[T; ukernel: static MicroKernel](mc, nc, kc: int; alpha: T; packA, packB: ptr UncheckedArray[T]; beta: T; mcncC: MatrixView[T])
gelsd:
least_squares_lapack: proc gelsd[T: SomeFloat](a, b: Tensor[T]; solution, residuals: var Tensor[T]; singular_values: var Tensor[T]; matrix_rank: var int; rcond = -1.T)
gemm:
operators_blas_l2l3: proc gemm[T: SomeFloat | Complex](alpha: T; A, B: Tensor[T]; beta: T; C: var Tensor[T])
operators_blas_l2l3: proc gemm[T: SomeInteger](alpha: T; A, B: Tensor[T]; beta: T; C: var Tensor[T])
operators_blas_l2l3: proc gemm[T: SomeNumber](A, B: Tensor[T]; C: var Tensor[T])
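The `gemm` overloads follow the BLAS convention `C ← α·A·B + β·C`, with `C` passed as a pre-allocated mutable accumulator. A minimal sketch (assuming `toTensor` and `zeros` from the init modules):

```nim
import arraymancer

let a = [[1.0, 2.0],
         [3.0, 4.0]].toTensor
let b = [[5.0, 6.0],
         [7.0, 8.0]].toTensor
var c = zeros[float](2, 2)

# C <- 1.0 * A * B + 0.0 * C
gemm(1.0, a, b, 0.0, c)
echo c  # the 2x2 matrix product of a and b
```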
gemm_nn_fallback:
blas_l3_gemm: proc gemm_nn_fallback[T](m, n, k: int; alpha: T; A: seq[T]; offA: int; incRowA, incColA: int; B: seq[T]; offB: int; incRowB, incColB: int; beta: T; C: var seq[T]; offC: int; incRowC, incColc: int)
gemm_packed:
gemm_prepacked: proc gemm_packed[T: SomeNumber](M, N, K: int; alpha: T; packedA: ptr (T or UncheckedArray[T]); packedB: ptr (T or UncheckedArray[T]); beta: T; C: ptr (T or UncheckedArray[T]); rowStrideC, colStrideC: int)
gemm_prepackA:
gemm_prepacked: proc gemm_prepackA[T](dst_packedA: ptr (T or UncheckedArray[T]); M, N, K: int; src_A: ptr T; rowStrideA, colStrideA: int)
gemm_prepackA_mem_required:
gemm_prepacked: proc gemm_prepackA_mem_required(T: typedesc; M, N, K: int): int
gemm_prepackA_mem_required_impl:
gemm_prepacked: proc gemm_prepackA_mem_required_impl(ukernel: static MicroKernel; T: typedesc; M, N, K: int): int
gemm_prepackB:
gemm_prepacked: proc gemm_prepackB[T](dst_packedB: ptr (T or UncheckedArray[T]); M, N, K: int; src_B: ptr T; rowStrideB, colStrideB: int)
gemm_prepackB_mem_required:
gemm_prepacked: proc gemm_prepackB_mem_required(T: type; M, N, K: int): int
gemm_prepackB_mem_required_impl:
gemm_prepacked: proc gemm_prepackB_mem_required_impl(ukernel: static MicroKernel; T: typedesc; M, N, K: int): int
gemm_strided:
gemm: proc gemm_strided[T: SomeNumber and not (uint32 | uint64 | uint | int)](M, N, K: int; alpha: T; A: ptr T; rowStrideA, colStrideA: int; B: ptr T; rowStrideB, colStrideB: int; beta: T; C: ptr T; rowStrideC, colStrideC: int)
gemm: proc gemm_strided[T: uint32 | uint64 | uint | int](M, N, K: int; alpha: T; A: ptr T; rowStrideA, colStrideA: int; B: ptr T; rowStrideB, colStrideB: int; beta: T; C: ptr T; rowStrideC, colStrideC: int)
gemv:
operators_blas_l2l3: proc gemv[T: SomeFloat | Complex](alpha: T; A: Tensor[T]; x: Tensor[T]; beta: T; y: var Tensor[T])
operators_blas_l2l3: proc gemv[T: SomeInteger](alpha: T; A: Tensor[T]; x: Tensor[T]; beta: T; y: var Tensor[T])
gen_cl_apply2:
p_kernels_interface_opencl: template gen_cl_apply2(kern_name, ctype, op: string): string
gen_cl_apply3:
p_kernels_interface_opencl: template gen_cl_apply3(kern_name, ctype, op: string): string
genClInfixOp:
p_kernels_interface_opencl: template genClInfixOp(T: typedesc; ctype: string; procName: untyped; cName: string; cInfixOp: string; exported: static[bool] = true): untyped
genClInPlaceOp:
p_kernels_interface_opencl: template genClInPlaceOp(T: typedesc; ctype: string; procName: untyped; cName: string; cInfixOp: string; exported: static[bool] = true): untyped
geomspace:
init_cpu: proc geomspace[T: SomeFloat](start, stop: T; num: int; endpoint = true): Tensor[float]
geqrf:
decomposition_lapack: proc geqrf[T: SupportedDecomposition](Q: var Tensor[T]; tau: var seq[T]; scratchspace: var seq[T])
gesdd:
decomposition_lapack: proc gesdd[T: SupportedDecomposition; X: SupportedDecomposition](a: var Tensor[T]; U: var Tensor[T]; S: var Tensor[X]; Vh: var Tensor[T]; scratchspace: var seq[T])
gesv:
solve_lapack: proc gesv[T: SomeFloat](a, b: var Tensor[T]; pivot_indices: var seq[int32])
get_cache_dir:
util: proc get_cache_dir(): string
getContiguousIndex:
p_accessors: proc getContiguousIndex[T](t: Tensor[T]; idx: int): int
get_data_ptr:
data_structure: proc get_data_ptr[T](t: CudaTensor[T] or ClTensor[T]): ptr T
data_structure: proc get_data_ptr[T: not KnownSupportsCopyMem](t: AnyTensor[T]): ptr T
data_structure: proc get_data_ptr[T: KnownSupportsCopyMem](t: Tensor[T]): ptr T
getFancySelector:
p_accessors_macros_read: proc getFancySelector(ast: NimNode; axis: var int; selector: var NimNode): FancySelectorKind
getIndex:
p_accessors: proc getIndex[T](t: Tensor[T]; idx: varargs[int]): int
get_num_tiles:
gemm_tiling: proc get_num_tiles(dim_size, tile_size: int): int
get_offset_ptr:
data_structure: proc get_offset_ptr[T](t: CudaTensor[T] or ClTensor[T]): ptr T
data_structure: proc get_offset_ptr[T: not KnownSupportsCopyMem](t: AnyTensor[T]): ptr T
data_structure: proc get_offset_ptr[T: KnownSupportsCopyMem](t: Tensor[T]): ptr T
getrf:
decomposition_lapack: proc getrf[T: SupportedDecomposition](lu: var Tensor[T]; pivot_indices: var seq[int32])
getShape:
nested_containers: proc getShape[T](s: openArray[T]; parent_shape = Metadata()): Metadata
nested_containers: proc getShape(s: string; parent_shape = Metadata()): Metadata
getSubType:
ast_utils: macro getSubType(TT: typedesc): untyped
gru:
gru: proc gru[TT](input, hidden0: Variable[TT]; W3s0, W3sN, U3s: Variable[TT]; bW3s, bU3s: Variable[TT]): tuple[output, hiddenN: Variable[TT]]
gru_backward:
nnp_gru: proc gru_backward[T: SomeFloat](dInput, dHidden0, dW3s0, dW3sN, dU3s, dbW3s, dbU3s: var Tensor[T]; dOutput, dHiddenN: Tensor[T]; cached_inputs: seq[Tensor[T]]; cached_hiddens: seq[seq[Tensor[T]]]; W3s0, W3sN, U3s, rs, zs, ns, Uhs: Tensor[T])
gru_cell_backward:
nnp_gru: proc gru_cell_backward[T: SomeFloat](dx, dh, dW3, dU3, dbW3, dbU3: var Tensor[T]; dnext: Tensor[T]; x, h, W3, U3: Tensor[T]; r, z, n, Uh: Tensor[T])
gru_cell_forward:
nnp_gru: proc gru_cell_forward[T: SomeFloat](input, W3, U3, bW3, bU3: Tensor[T]; r, z, n, Uh, hidden: var Tensor[T])
gru_cell_inference:
nnp_gru: proc gru_cell_inference[T: SomeFloat](input: Tensor[T]; W3, U3, bW3, bU3: Tensor[T]; hidden: var Tensor[T])
gru_forward:
nnp_gru: proc gru_forward[T: SomeFloat](input: Tensor[T]; W3s0, W3sN: Tensor[T]; U3s, bW3s, bU3s: Tensor[T]; rs, zs, ns, Uhs: var Tensor[T]; output, hidden: var Tensor[T]; cached_inputs: var seq[Tensor[T]]; cached_hiddens: var seq[seq[Tensor[T]]])
GRUGate:
gru: type GRUGate
gru_inference:
nnp_gru: proc gru_inference[T: SomeFloat](input: Tensor[T]; W3s0, W3sN: Tensor[T]; U3s, bW3s, bU3s: Tensor[T]; output, hidden: var Tensor[T])
GRULayer:
gru: object GRULayer
HadamardGate:
gates_hadamard: type HadamardGate
hankel:
special_matrices: proc hankel[T](c: Tensor[T]): Tensor[T]
special_matrices: proc hankel[T](c, r: Tensor[T]): Tensor[T]
has3DNow:
cpuinfo_x86: proc has3DNow(): bool
has3DNowEnhanced:
cpuinfo_x86: proc has3DNowEnhanced(): bool
hasAbm:
cpuinfo_x86: proc hasAbm(): bool
hasAdx:
cpuinfo_x86: proc hasAdx(): bool
hasAes:
cpuinfo_x86: proc hasAes(): bool
hasAmdv:
cpuinfo_x86: proc hasAmdv(): bool
hasAvx:
cpuinfo_x86: proc hasAvx(): bool
hasAvx2:
cpuinfo_x86: proc hasAvx2(): bool
hasAvx512bfloat16:
cpuinfo_x86: proc hasAvx512bfloat16(): bool
hasAvx512bitalg:
cpuinfo_x86: proc hasAvx512bitalg(): bool
hasAvx512bw:
cpuinfo_x86: proc hasAvx512bw(): bool
hasAvx512cd:
cpuinfo_x86: proc hasAvx512cd(): bool
hasAvx512dq:
cpuinfo_x86: proc hasAvx512dq(): bool
hasAvx512er:
cpuinfo_x86: proc hasAvx512er(): bool
hasAvx512f:
cpuinfo_x86: proc hasAvx512f(): bool
hasAvx512fmaps4:
cpuinfo_x86: proc hasAvx512fmaps4(): bool
hasAvx512ifma:
cpuinfo_x86: proc hasAvx512ifma(): bool
hasAvx512pf:
cpuinfo_x86: proc hasAvx512pf(): bool
hasAvx512vbmi:
cpuinfo_x86: proc hasAvx512vbmi(): bool
hasAvx512vbmi2:
cpuinfo_x86: proc hasAvx512vbmi2(): bool
hasAvx512vl:
cpuinfo_x86: proc hasAvx512vl(): bool
hasAvx512vnni:
cpuinfo_x86: proc hasAvx512vnni(): bool
hasAvx512vnniw4:
cpuinfo_x86: proc hasAvx512vnniw4(): bool
hasAvx512vp2intersect:
cpuinfo_x86: proc hasAvx512vp2intersect(): bool
hasAvx512vpopcntdq:
cpuinfo_x86: proc hasAvx512vpopcntdq(): bool
hasBmi1:
cpuinfo_x86: proc hasBmi1(): bool
hasBmi2:
cpuinfo_x86: proc hasBmi2(): bool
hasCas16B:
cpuinfo_x86: proc hasCas16B(): bool
hasCas8B:
cpuinfo_x86: proc hasCas8B(): bool
hasClflush:
cpuinfo_x86: proc hasClflush(): bool
hasClflushOpt:
cpuinfo_x86: proc hasClflushOpt(): bool
hasClwb:
cpuinfo_x86: proc hasClwb(): bool
hasFloat16c:
cpuinfo_x86: proc hasFloat16c(): bool
hasFma3:
cpuinfo_x86: proc hasFma3(): bool
hasFma4:
cpuinfo_x86: proc hasFma4(): bool
hasGfni:
cpuinfo_x86: proc hasGfni(): bool
hasIntelVtx:
cpuinfo_x86: proc hasIntelVtx(): bool
hasMmx:
cpuinfo_x86: proc hasMmx(): bool
hasMmxExt:
cpuinfo_x86: proc hasMmxExt(): bool
hasMovBigEndian:
cpuinfo_x86: proc hasMovBigEndian(): bool
hasMpx:
cpuinfo_x86: proc hasMpx(): bool
hasNxBit:
cpuinfo_x86: proc hasNxBit(): bool
hasPclmulqdq:
cpuinfo_x86: proc hasPclmulqdq(): bool
hasPopcnt:
cpuinfo_x86: proc hasPopcnt(): bool
hasPrefetch:
cpuinfo_x86: proc hasPrefetch(): bool
hasPrefetchWT1:
cpuinfo_x86: proc hasPrefetchWT1(): bool
hasRdrand:
cpuinfo_x86: proc hasRdrand(): bool
hasRdseed:
cpuinfo_x86: proc hasRdseed(): bool
hasSgx:
cpuinfo_x86: proc hasSgx(): bool
hasSha:
cpuinfo_x86: proc hasSha(): bool
hasSimultaneousMultithreading:
cpuinfo_x86: proc hasSimultaneousMultithreading(): bool
hasSse:
cpuinfo_x86: proc hasSse(): bool
hasSse2:
cpuinfo_x86: proc hasSse2(): bool
hasSse3:
cpuinfo_x86: proc hasSse3(): bool
hasSse41:
cpuinfo_x86: proc hasSse41(): bool
hasSse42:
cpuinfo_x86: proc hasSse42(): bool
hasSse4a:
cpuinfo_x86: proc hasSse4a(): bool
hasSsse3:
cpuinfo_x86: proc hasSsse3(): bool
hasTsxHle:
cpuinfo_x86: proc hasTsxHle(): bool
hasTsxRtm:
cpuinfo_x86: proc hasTsxRtm(): bool
hasType:
ast_utils: proc hasType(x: NimNode; t: static[string]): bool
hasVaes:
cpuinfo_x86: proc hasVaes(): bool
hasVpclmulqdq:
cpuinfo_x86: proc hasVpclmulqdq(): bool
hasX87fpu:
cpuinfo_x86: proc hasX87fpu(): bool
hasXop:
cpuinfo_x86: proc hasXop(): bool
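The `has*` procs above are thin runtime CPUID queries, useful for dispatching to the widest SIMD kernel a machine supports. A minimal sketch (the `arraymancer/laser/cpuinfo_x86` import path is an assumption; adjust it to wherever the module lives in your tree):

```nim
import arraymancer/laser/cpuinfo_x86  # assumed module path

when defined(i386) or defined(amd64):
  if hasAvx512f():
    echo "AVX-512F kernels available"
  elif hasAvx2() and hasFma3():
    echo "AVX2 + FMA path"
  elif hasSse2():
    echo "baseline SSE2 path"
```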
high:
dynamic_stack_arrays: proc high(a: DynamicStackArray): int
hilbert:
special_matrices: proc hilbert(n: int; T: typedesc[SomeFloat]): Tensor[T]
identity:
special_matrices: proc identity[T](n: int): Tensor[T]
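`hilbert` and `identity` are convenient for quick numerical experiments; Hilbert matrices in particular are a classic ill-conditioned test case for solvers. A minimal sketch:

```nim
import arraymancer

let h = hilbert(4, float64)     # 4x4 Hilbert matrix, H[i, j] = 1 / (i + j + 1)
let eye = identity[float64](4)  # 4x4 identity matrix
echo h + eye                    # elementwise sum of two 4x4 tensors
```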
ijgrid:
special_matrices: MeshGridIndexing.ijgrid
im2col:
conv: proc im2col[T](input: Tensor[T]; kernel_size: Size2D; padding: Size2D = (0, 0); stride: Size2D = (1, 1); result: var Tensor[T])
Im2ColGEMM:
nnp_convolution: Conv2DAlgorithm.Im2ColGEMM
im2colgemm_conv2d:
conv: proc im2colgemm_conv2d[T](input, kernel, bias: Tensor[T]; padding: Size2D = (0, 0); stride: Size2D = (1, 1)): Tensor[T]
im2colgemm_conv2d_gradient:
conv: proc im2colgemm_conv2d_gradient[T](input, kernel: Tensor[T]; padding: Size2D = (0, 0); stride: Size2D = (1, 1); grad_output: Tensor[T]; grad_input, grad_weight: var Tensor[T])
imag:
complex: proc imag[T: SomeFloat](t: Tensor[Complex[T]]): Tensor[T]
imag=:
complex: proc imag=[T: SomeFloat](t: var Tensor[Complex[T]]; val: T)
complex: proc imag=[T: SomeFloat](t: var Tensor[Complex[T]]; val: Tensor[T])
implDeprecatedBy:
deprecate: macro implDeprecatedBy(oldName: untyped; replacement: typed; exported: static bool): untyped
index_fill:
selectors: proc index_fill[T; Idx: byte or char or SomeInteger](t: var Tensor[T]; axis: int; indices: openArray[Idx]; value: T)
selectors: proc index_fill[T; Idx: byte or char or SomeInteger](t: var Tensor[T]; axis: int; indices: Tensor[Idx]; value: T)
index_select:
selectors: proc index_select[T; Idx: byte or char or SomeInteger](t: Tensor[T]; axis: int; indices: openArray[Idx]): Tensor[T]
selectors: proc index_select[T; Idx: byte or char or SomeInteger](t: Tensor[T]; axis: int; indices: Tensor[Idx]): Tensor[T]
infer_shape:
p_shapeshifting: proc infer_shape(t: Tensor; new_shape: varargs[int]): seq[int]
init:
conv2D: proc init[T](ctx: Context[Tensor[T]]; layerType: typedesc[Conv2D[T]]; inShape: seq[int]; outChannels: int; kernelSize: Size2D; padding: Size2D = (0, 0); stride: Size2D = (1, 1)): Conv2D[T]
embedding: proc init[T](ctx: Context[Tensor[T]]; layerType: typedesc[Embedding[T]]; vocabSize, embedSize: int; paddingIdx: VocabIdx = -1): Embedding[T]
flatten: proc init[T](ctx: Context[Tensor[T]]; layerType: typedesc[Flatten[T]]; inShape: seq[int]): Flatten[T]
gcn: proc init[T](ctx: Context[Tensor[T]]; layerType: typedesc[GCNLayer[T]]; numInput, numOutput: int): GCNLayer[T]
gru: proc init[T](ctx: Context[Tensor[T]]; layerType: typedesc[GRULayer[T]]; numInputFeatures, hiddenSize, layers: int): GRULayer[T]
linear: proc init[T](ctx: Context[Tensor[T]]; layerType: typedesc[Linear[T]]; numInput, numOutput: int): Linear[T]
maxpool2D: proc init[T](ctx: Context[Tensor[T]]; layerType: typedesc[MaxPool2D[T]]; inShape: seq[int]; kernelSize, padding, stride: Size2D): MaxPool2D[T]
initForEach:
foreach_common: proc initForEach(params: NimNode; values, aliases, raw_ptrs: var NimNode; aliases_stmt, raw_ptrs_stmt: var NimNode; test_shapes: var NimNode)
initMetadataArray:
datatypes: proc initMetadataArray(len: int): Metadata
initSpanSlices:
accessors_macros_syntax: proc initSpanSlices(len: int): ArrayOfSlices
initStridedIteration:
p_accessors: template initStridedIteration(coord, backstrides, iter_pos: untyped; t, iter_offset, iter_size: typed): untyped
initTensorMetadata:
initialization: proc initTensorMetadata(result: var Tensor; size: var int; shape: Metadata; layout: static OrderType = rowMajor)
initialization: proc initTensorMetadata(result: var Tensor; size: var int; shape: openArray[int]; layout: static OrderType = rowMajor)
insert:
dynamic_stack_arrays: proc insert[T](a: var DynamicStackArray[T]; value: T; index: int = 0)
inShape:
conv2D: proc inShape[T](self: Conv2D[T]): seq[int]
embedding: proc inShape[T](self: Embedding[T]): seq[int]
flatten: proc inShape[T](self: Flatten[T]): seq[int]
gcn: proc inShape[T](self: GCNLayer[T]): seq[int]
linear: proc inShape[T](self: Linear[T]): seq[int]
maxpool2D: proc inShape[T](self: MaxPool2D[T]): seq[int]
int2bit:
special_matrices: proc int2bit(value: int; n: int; msbfirst = true): Tensor[bool]
special_matrices: proc int2bit[T: SomeInteger](t: Tensor[T]; n: int; msbfirst = true): Tensor[bool]
intersection:
algorithms: proc intersection[T](t1, t2: Tensor[T]): Tensor[T]
iqr:
aggregate: proc iqr[T](arg: Tensor[T]): float
isAllInt:
ast_utils: proc isAllInt(slice_args: NimNode): bool
isBool:
ast_utils: proc isBool(x: NimNode): bool
is_C_contiguous:
data_structure: proc is_C_contiguous(t: CudaTensor or ClTensor): bool
datatypes: proc is_C_contiguous(t: Tensor): bool
isContiguous:
data_structure: proc isContiguous(t: AnyTensor): bool
is_F_contiguous:
data_structure: proc is_F_contiguous(t: AnyTensor): bool
is_grad_needed:
autograd_common: proc is_grad_needed(v: Variable): bool
isHypervisorPresent:
cpuinfo_x86: proc isHypervisorPresent(): bool
isInt:
ast_utils: proc isInt(x: NimNode): bool
ismember:
algorithms: proc ismember[T](t1, t2: Tensor[T]): Tensor[bool]
isNaN:
operators_comparison: proc isNaN(t: Tensor[SomeFloat]): Tensor[bool]
ufunc: proc isNaN[T](t`gensym2: Tensor[T]): Tensor[T]
isNotNaN:
operators_comparison: proc isNotNaN(t: Tensor[SomeFloat]): Tensor[bool]
isOpenArray:
ast_utils: proc isOpenArray(x: NimNode): bool
item:
initialization: proc item[T](t: Tensor[T]): T
initialization: proc item[T_IN, T_OUT](t: Tensor[T_IN]; _: typedesc[T_OUT]): T_OUT
items:
accessors: iterator items[T](t: Tensor[T]): T
accessors: iterator items[T](t: Tensor[T]; offset, size: int): T
dynamic_stack_arrays: iterator items[T](a: DynamicStackArray[T]): T
IterKind:
p_accessors: enum IterKind
Iter_Values:
p_accessors: IterKind.Iter_Values
Jaccard:
distances: object Jaccard
kaiming_normal:
init: proc kaiming_normal(shape: varargs[int]; T: type): Tensor[T]
kaiming_uniform:
init: proc kaiming_uniform(shape: varargs[int]; T: type): Tensor[T]
kde:
kde: proc kde[T: SomeNumber; U: int | Tensor[SomeNumber] | openArray[SomeNumber]](t: Tensor[T]; kernel: static KernelFunc; kernelKind = knCustom; adjust: float = 1.0; samples: U = 1000; bw: float = NaN; normalize = false; cutoff: float = NaN; weights: Tensor[T] = newTensor[T](0)): Tensor[float]
kde: proc kde[T: SomeNumber; U: KernelKind | string; V: int | Tensor[SomeNumber] | openArray[SomeNumber]](t: Tensor[T]; kernel: U = "gauss"; adjust: float = 1.0; samples: V = 1000; bw: float = NaN; normalize = false; weights: Tensor[T] = newTensor[T](0)): Tensor[float]
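Both `kde` overloads default to a Gaussian kernel evaluated at 1000 points and return the estimated density as a `Tensor[float]`. A minimal sketch under those defaults:

```nim
import arraymancer

# 1000 draws from a standard normal distribution
let samples = randomNormalTensor[float](1000)
let density = kde(samples)  # Gaussian kernel, 1000 evaluation points
echo density.shape
```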
KDTree:
kdtree: type KDTree
kdTree:
kdtree: proc kdTree[T](data: Tensor[T]; leafSize = 16; copyData = true; balancedTree: static bool = true): KDTree[T]
KernelFunc:
kde: type KernelFunc
KernelKind:
kde: enum KernelKind
kmeans:
kmeans: proc kmeans[T: SomeFloat](x: Tensor[T]; n_clusters = 10; tol: float = 0.0001; n_init = 10; max_iters = 300; seed = 1000; random = false): tuple[labels: Tensor[int], centroids: Tensor[T], inertia: T]
kmeans: proc kmeans[T: SomeFloat](x: Tensor[T]; centroids: Tensor[T]): Tensor[int]
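The first `kmeans` overload fits the clustering and returns labels, centroids, and inertia; the second assigns points to already-fitted centroids. A minimal sketch:

```nim
import arraymancer

let x = randomTensor([100, 2], 1.0)  # 100 points in 2D, uniform up to 1.0
let (labels, centroids, inertia) = kmeans(x, n_clusters = 3)
echo labels.shape     # one cluster id per point
echo centroids.shape  # 3 x 2
# assign the same (or new) points to the fitted centroids
let assigned = kmeans(x, centroids)
```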
knBox:
kde: KernelKind.knBox
knCustom:
kde: KernelKind.knCustom
knEpanechnikov:
kde: KernelKind.knEpanechnikov
knGauss:
kde: KernelKind.knGauss
KnownSupportsCopyMem:
datatypes: type KnownSupportsCopyMem
knTriangular:
kde: KernelKind.knTriangular
knTrig:
kde: KernelKind.knTrig
LASER_MAXRANK:
dynamic_stack_arrays: const LASER_MAXRANK
LASER_MEM_ALIGN:
compiler_optim_hints: const LASER_MEM_ALIGN
laswp:
auxiliary_lapack: proc laswp(a: var Tensor; pivot_indices: openArray[int32]; pivot_from: static int32)
layoutOnDevice:
cuda: proc layoutOnDevice[T: SomeFloat](t: CudaTensor[T]): CudaTensorLayout[T]
opencl_backend: proc layoutOnDevice[T: SomeFloat](t: ClTensor[T]): ClTensorLayout[T]
least_squares_solver:
least_squares: proc least_squares_solver[T: SomeFloat](a, b: Tensor[T]; rcond = -1.T): tuple[solution: Tensor[T], residuals: Tensor[T], matrix_rank: int, singular_values: Tensor[T]]
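`least_squares_solver` wraps LAPACK's `gelsd` and returns the solution alongside residuals, rank, and singular values. A minimal sketch fitting a line `y ≈ slope·x + intercept`:

```nim
import arraymancer

# design matrix [x, 1] for a straight-line fit
let a = [[0.0, 1.0],
         [1.0, 1.0],
         [2.0, 1.0],
         [3.0, 1.0]].toTensor
let b = [-1.0, 0.2, 0.9, 2.1].toTensor
let fit = least_squares_solver(a, b)
echo fit.solution     # approximately [slope, intercept]
echo fit.matrix_rank  # 2 for this well-posed system
```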
len:
datatypes: proc len[T](t: Tensor[T]): int
letsGoDeeper:
ast_utils: template letsGoDeeper()
lgamma:
ufunc: proc lgamma[T](t`gensym24: Tensor[T]): Tensor[T]
Linear:
linear: object Linear
linear:
linear: proc linear[TT](input, weight: Variable[TT]; bias: Variable[TT] = nil): Variable[TT]
nnp_linear: proc linear[T](input, weight: Tensor[T]; output: var Tensor[T])
nnp_linear: proc linear[T](input, weight: Tensor[T]; bias: Tensor[T]; output: var Tensor[T])
linear_backward:
nnp_linear: proc linear_backward[T](input, weight, gradOutput: Tensor[T]; gradInput, gradWeight: var Tensor[T])
nnp_linear: proc linear_backward[T](input, weight, gradOutput: Tensor[T]; gradInput, gradWeight, gradBias: var Tensor[T])
LinearGate:
linear: type LinearGate
linspace:
init_cpu: proc linspace[T: SomeNumber](start, stop: T; num: int; endpoint = true): Tensor[float]
ln:
ufunc: proc ln[T](t`gensym5: Tensor[T]): Tensor[T]
ln1p:
math_ops_fusion: proc ln1p(x: float32): float32
math_ops_fusion: proc ln1p(x: float64): float64
load_imdb:
imdb: proc load_imdb(cache: static bool = true): Imdb
load_mnist:
mnist: proc load_mnist(cache: static bool = true; fashion_mnist: static bool = false): Mnist
log10:
ufunc: proc log10[T](t`gensym6: Tensor[T]): Tensor[T]
log2:
ufunc: proc log2[T](t`gensym7: Tensor[T]): Tensor[T]
logspace:
init_cpu: proc logspace[T: SomeNumber](start, stop: T; num: int; base = 10.0; endpoint = true): Tensor[float]
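`linspace`, `geomspace`, and `logspace` mirror their numpy namesakes: linear spacing, geometric spacing by endpoints, and geometric spacing by exponents. A minimal sketch:

```nim
import arraymancer

echo linspace(0.0, 1.0, 5)      # [0.0, 0.25, 0.5, 0.75, 1.0]
echo logspace(0.0, 3.0, 4)      # [1.0, 10.0, 100.0, 1000.0] (base-10 exponents)
echo geomspace(1.0, 1000.0, 4)  # same points, specified by their endpoints
```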
logsumexp:
p_logsumexp: proc logsumexp[T: SomeFloat](t: Tensor[T]): T
low:
dynamic_stack_arrays: proc low(a: DynamicStackArray): int
lu_permuted:
decomposition: proc lu_permuted[T: SupportedDecomposition](a: Tensor[T]): tuple[PL, U: Tensor[T]]
m128:
simd: object m128
m128d:
simd: object m128d
m128i:
simd: object m128i
m256:
simd: object m256
m256d:
simd: object m256d
m256i:
simd: object m256i
m512:
simd: object m512
m512d:
simd: object m512d
m512i:
simd: object m512i
mabs:
math_functions: proc mabs[T](t: var Tensor[T])
makeKernel:
kde: template makeKernel(fn: untyped): untyped
makeUniversal:
ufunc: template makeUniversal(func_name: untyped; docSuffix = "")
makeUniversalLocal:
ufunc: template makeUniversalLocal(func_name: untyped)
Manhattan:
distances: object Manhattan
map:
higher_order_applymap: proc map[T; U](t: Tensor[T]; f: T -> U): Tensor[U]
map2:
higher_order_applymap: proc map2[T, U; V: KnownSupportsCopyMem](t1: Tensor[T]; f: (T, U) -> V; t2: Tensor[U]): Tensor[V]
higher_order_applymap: proc map2[T, U; V: not KnownSupportsCopyMem](t1: Tensor[T]; f: (T, U) -> V; t2: Tensor[U]): Tensor[V]
map2_inline:
higher_order_applymap: template map2_inline[T, U](t1: Tensor[T]; t2: Tensor[U]; op: untyped): untyped
map3_inline:
higher_order_applymap: template map3_inline[T, U, V](t1: Tensor[T]; t2: Tensor[U]; t3: Tensor[V]; op: untyped): untyped
map_inline:
higher_order_applymap: template map_inline[T](t: Tensor[T]; op: untyped): untyped
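`map` takes a closure, while the `*_inline` templates splice the expression straight into the iteration loop, avoiding closure-call overhead; the current elements are injected as `x` (and `y`, `z` for the 2- and 3-tensor variants). A minimal sketch:

```nim
import arraymancer

let t = [1, 2, 3, 4].toTensor
echo t.map(proc (v: int): int = v * v)  # closure-based: [1, 4, 9, 16]
echo t.map_inline(x * x)                # inlined; element injected as `x`

let u = [10, 20, 30, 40].toTensor
echo map2_inline(t, u, x + y)           # elementwise sum: [11, 22, 33, 44]
```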
masked_axis_fill:
selectors: proc masked_axis_fill[T](t: var Tensor[T]; mask: openArray[bool]; axis: int; value: T or Tensor[T])
selectors: proc masked_axis_fill[T](t: var Tensor[T]; mask: Tensor[bool]; axis: int; value: T or Tensor[T])
masked_axis_select:
selectors: proc masked_axis_select[T](t: Tensor[T]; mask: openArray[bool]; axis: int): Tensor[T]
selectors: proc masked_axis_select[T](t: Tensor[T]; mask: Tensor[bool]; axis: int): Tensor[T]
masked_fill:
selectors: proc masked_fill[T](t: var Tensor[T]; mask: openArray; value: openArray[T])
selectors: proc masked_fill[T](t: var Tensor[T]; mask: openArray; value: T)
selectors: proc masked_fill[T](t: var Tensor[T]; mask: openArray; value: Tensor[T])
selectors: proc masked_fill[T](t: var Tensor[T]; mask: Tensor[bool]; value: openArray[T])
selectors: proc masked_fill[T](t: var Tensor[T]; mask: Tensor[bool]; value: T)
selectors: proc masked_fill[T](t: var Tensor[T]; mask: Tensor[bool]; value: Tensor[T])
masked_fill_along_axis:
selectors: proc masked_fill_along_axis[T](t: var Tensor[T]; mask: Tensor[bool]; axis: int; value: T)
masked_select:
selectors: proc masked_select[T](t: Tensor[T]; mask: openArray): Tensor[T]
selectors: proc masked_select[T](t: Tensor[T]; mask: Tensor[bool]): Tensor[T]
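`masked_select` gathers the elements where the mask is true, while `masked_fill` overwrites them in place; masks typically come from the broadcasted comparison operators of operators_comparison, such as `>.`. A minimal sketch:

```nim
import arraymancer

var t = [-2, -1, 0, 1, 2].toTensor
let mask = t >. 0           # Tensor[bool] from a broadcasted comparison
echo t.masked_select(mask)  # [1, 2]
t.masked_fill(mask, 0)      # zero out the positive entries, in place
echo t                      # [-2, -1, 0, 0, 0]
```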
MatMulGate:
gates_blas: type MatMulGate
MatrixKind:
linear_systems: enum MatrixKind
MatrixView:
gemm_utils: object MatrixView
max:
aggregate: proc max[T](arg: Tensor[T]): T
aggregate: proc max[T](arg: Tensor[T]; axis: int): Tensor[T]
dynamic_stack_arrays: proc max[T](a: DynamicStackArray[T]): T
math_functions: proc max[T: SomeNumber](t1, t2: Tensor[T]): Tensor[T]
math_functions: proc max[T: SomeNumber](args: varargs[Tensor[T]]): Tensor[T]
MaxPool2D:
maxpool2D: object MaxPool2D
maxpool2d:
maxpool2D: proc maxpool2d[TT](input: Variable[TT]; kernel: Size2D; padding: Size2D = (0, 0); stride: Size2D = (1, 1)): Variable[TT]
nnp_maxpooling: proc maxpool2d[T](input: Tensor[T]; kernel: Size2D; padding: Size2D = (0, 0); stride: Size2D = (1, 1)): tuple[max_indices: Tensor[int], maxpooled: Tensor[T]]
maxpool2d_backward:
nnp_maxpooling: proc maxpool2d_backward[T](cached_input_shape: openArray[int] | Metadata; cached_max_indices: Tensor[int]; gradOutput: Tensor[T]): Tensor[T]
MaxPool2DGate:
maxpool2D: type MaxPool2DGate
MAXRANK:
global_config: const MAXRANK
mclamp:
math_functions: proc mclamp[T](t: var Tensor[T]; min, max: T)
mcopySign:
math_functions: proc mcopySign[T: SomeFloat](t1: var Tensor[T]; t2: Tensor[T])
mean:
aggregate: proc mean[T: Complex[float32] or Complex[float64]](arg: Tensor[T]): T
aggregate: proc mean[T: Complex[float32] or Complex[float64]](arg: Tensor[T]; axis: int): Tensor[T]
aggregate: proc mean[T: SomeFloat](arg: Tensor[T]): T
aggregate: proc mean[T: SomeFloat](arg: Tensor[T]; axis: int): Tensor[T]
aggregate: proc mean[T: SomeInteger](arg: Tensor[T]): T
aggregate: proc mean[T: SomeInteger](arg: Tensor[T]; axis: int): Tensor[T]
gates_reduce: proc mean[TT](a: Variable[TT]): Variable[TT]
gates_reduce: proc mean[TT](a: Variable[TT]; axis: Natural): Variable[TT]
mean_absolute_error:
common_error_functions: proc mean_absolute_error[T: SomeFloat](y, y_true: Tensor[T] | Tensor[Complex[T]]): T
MeanGate:
gates_reduce: type MeanGate
mean_relative_error:
common_error_functions: proc mean_relative_error[T: SomeFloat](y, y_true: Tensor[T] | Tensor[Complex[T]]): T
mean_squared_error:
common_error_functions: proc mean_squared_error[T: SomeFloat](y, y_true: Tensor[T] | Tensor[Complex[T]]): T
median:
aggregate: proc median[T](arg: Tensor[T]; isSorted = false): float
melwise_div:
math_functions: proc melwise_div[T: SomeFloat](a: var Tensor[T]; b: Tensor[T])
math_functions: proc melwise_div[T: SomeInteger](a: var Tensor[T]; b: Tensor[T])
melwise_mul:
math_functions: proc melwise_mul[T](a: var Tensor[T]; b: Tensor[T])
menumerate:
accessors: iterator menumerate[T](t: Tensor[T]): (int, var T)
accessors: iterator menumerate[T](t: Tensor[T]; offset, size: int): (int, var T)
menumerateZip:
accessors: iterator menumerateZip[T, U](t1: var Tensor[T]; t2: Tensor[U]): (int, var T, U)
accessors: iterator menumerateZip[T, U](t1: var Tensor[T]; t2: Tensor[U]; offset, size: int): (int, var T, U)
meshgrid:
special_matrices: proc meshgrid[T](t_list: varargs[Tensor[T]]; indexing = MeshGridIndexing.xygrid): seq[Tensor[T]]
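`meshgrid` follows the numpy convention: the default `xygrid` indexing swaps the first two axes relative to matrix-style `ijgrid` indexing. A minimal sketch:

```nim
import arraymancer

let xs = linspace(0.0, 1.0, 3)
let ys = linspace(0.0, 1.0, 2)
let grids = meshgrid(xs, ys)  # seq of 2 coordinate tensors, xygrid by default
echo grids[0].shape
let gridsIJ = meshgrid(xs, ys, indexing = MeshGridIndexing.ijgrid)
echo gridsIJ[0].shape         # first two axes swapped relative to xygrid
```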
MeshGridIndexing:
special_matrices: enum MeshGridIndexing
Metadata:
datatypes: type Metadata
MetadataArray:
datatypes: type MetadataArray
MicroKernel:
gemm_tiling: object MicroKernel
min:
aggregate: proc min[T](arg: Tensor[T]): T
aggregate: proc min[T](arg: Tensor[T]; axis: int): Tensor[T]
math_functions: proc min[T: SomeNumber](t1, t2: Tensor[T]): Tensor[T]
math_functions: proc min[T: SomeNumber](args: varargs[Tensor[T]]): Tensor[T]
Minkowski:
distances: object Minkowski
mitems:
accessors: iterator mitems[T](t: var Tensor[T]): var T
accessors: iterator mitems[T](t: var Tensor[T]; offset, size: int): var T
dynamic_stack_arrays: iterator mitems[T](a: var DynamicStackArray[T]): var T
mkGenBand:
linear_systems: MatrixKind.mkGenBand
mkGeneral:
linear_systems: MatrixKind.mkGeneral
mkGenTriDiag:
linear_systems: MatrixKind.mkGenTriDiag
mkPosDef:
linear_systems: MatrixKind.mkPosDef
mkPosDefBand:
linear_systems: MatrixKind.mkPosDefBand
mkPosDefTriDiag:
linear_systems: MatrixKind.mkPosDefTriDiag
mkSymmetric:
linear_systems: MatrixKind.mkSymmetric
mm256_add_epi16:
simd: proc mm256_add_epi16(a, b: m256i): m256i
mm256_add_epi32:
simd: proc mm256_add_epi32(a, b: m256i): m256i
mm256_add_epi64:
simd: proc mm256_add_epi64(a, b: m256i): m256i
mm256_add_epi8:
simd: proc mm256_add_epi8(a, b: m256i): m256i
mm256_add_pd:
simd: proc mm256_add_pd(a, b: m256d): m256d
mm256_add_ps:
simd: proc mm256_add_ps(a, b: m256): m256
mm256_and_ps:
simd: proc mm256_and_ps(a, b: m256): m256
mm256_and_si256:
simd: proc mm256_and_si256(a, b: m256i): m256i
mm256_castps256_ps128:
simd: proc mm256_castps256_ps128(a: m256): m128
mm256_castps_si256:
simd: proc mm256_castps_si256(a: m256): m256i
mm256_castsi256_ps:
simd: proc mm256_castsi256_ps(a: m256i): m256
mm256_cmpgt_epi32:
simd: proc mm256_cmpgt_epi32(a, b: m256i): m256i
mm256_cvtepi32_ps:
simd: proc mm256_cvtepi32_ps(a: m256i): m256
mm256_cvtps_epi32:
simd: proc mm256_cvtps_epi32(a: m256): m256i
mm256_extractf128_ps:
simd: proc mm256_extractf128_ps(v: m256; m: cint{lit}): m128
mm256_fmadd_pd:
simd: proc mm256_fmadd_pd(a, b, c: m256d): m256d
mm256_fmadd_ps:
simd: proc mm256_fmadd_ps(a, b, c: m256): m256
mm256_i32gather_epi32:
simd: proc mm256_i32gather_epi32(m: ptr (uint32 or int32); i: m256i; s: int32): m256i
mm256_load_pd:
simd: proc mm256_load_pd(aligned_mem_addr: ptr float64): m256d
mm256_load_ps:
simd: proc mm256_load_ps(aligned_mem_addr: ptr float32): m256
mm256_load_si256:
simd: proc mm256_load_si256(mem_addr: ptr m256i): m256i
mm256_loadu_pd:
simd: proc mm256_loadu_pd(mem_addr: ptr float64): m256d
mm256_loadu_ps:
simd: proc mm256_loadu_ps(mem_addr: ptr float32): m256
mm256_loadu_si256:
simd: proc mm256_loadu_si256(mem_addr: ptr m256i): m256i
mm256_max_ps:
simd: proc mm256_max_ps(a, b: m256): m256
mm256_min_ps:
simd: proc mm256_min_ps(a, b: m256): m256
mm256_movemask_epi8:
simd: proc mm256_movemask_epi8(a: m256i): int32
mm256_mul_epu32:
simd: proc mm256_mul_epu32(a: m256i; b: m256i): m256i
mm256_mullo_epi16:
simd: proc mm256_mullo_epi16(a, b: m256i): m256i
mm256_mullo_epi32:
simd: proc mm256_mullo_epi32(a, b: m256i): m256i
mm256_mul_pd:
simd: proc mm256_mul_pd(a, b: m256d): m256d
mm256_mul_ps:
simd: proc mm256_mul_ps(a, b: m256): m256
mm256_or_ps:
simd: proc mm256_or_ps(a, b: m256): m256
mm256_set1_epi16:
simd: proc mm256_set1_epi16(a: int16 or uint16): m256i
mm256_set1_epi32:
simd: proc mm256_set1_epi32(a: int32 or uint32): m256i
mm256_set1_epi64x:
simd: proc mm256_set1_epi64x(a: int64 or uint64): m256i
mm256_set1_epi8:
simd: proc mm256_set1_epi8(a: int8 or uint8): m256i
mm256_set1_pd:
simd: proc mm256_set1_pd(a: float64): m256d
mm256_set1_ps:
simd: proc mm256_set1_ps(a: float32): m256
mm256_setzero_pd:
simd: proc mm256_setzero_pd(): m256d
mm256_setzero_ps:
simd: proc mm256_setzero_ps(): m256
mm256_setzero_si256:
simd: proc mm256_setzero_si256(): m256i
mm256_shuffle_epi32:
simd: proc mm256_shuffle_epi32(a: m256i; imm8: cint): m256i
mm256_slli_epi32:
simd: proc mm256_slli_epi32(a: m256i; count: int32): m256i
mm256_srli_epi32:
simd: proc mm256_srli_epi32(a: m256i; count: int32): m256i
mm256_srli_epi64:
simd: proc mm256_srli_epi64(a: m256i; imm8: cint): m256i
mm256_store_pd:
simd: proc mm256_store_pd(mem_addr: ptr float64; a: m256d)
mm256_store_ps:
simd: proc mm256_store_ps(mem_addr: ptr float32; a: m256)
mm256_storeu_pd:
simd: proc mm256_storeu_pd(mem_addr: ptr float64; a: m256d)
mm256_storeu_ps:
simd: proc mm256_storeu_ps(mem_addr: ptr float32; a: m256)
mm256_storeu_si256:
simd: proc mm256_storeu_si256(mem_addr: ptr m256i; a: m256i)
mm256_sub_ps:
simd: proc mm256_sub_ps(a, b: m256): m256
mm512_add_epi16:
simd: proc mm512_add_epi16(a, b: m512i): m512i
mm512_add_epi32:
simd: proc mm512_add_epi32(a, b: m512i): m512i
mm512_add_epi64:
simd: proc mm512_add_epi64(a, b: m512i): m512i
mm512_add_epi8:
simd: proc mm512_add_epi8(a, b: m512i): m512i
mm512_add_pd:
simd: proc mm512_add_pd(a, b: m512d): m512d
mm512_add_ps:
simd: proc mm512_add_ps(a, b: m512): m512
mm512_and_si512:
simd: proc mm512_and_si512(a, b: m512i): m512i
mm512_castps_si512:
simd: proc mm512_castps_si512(a: m512): m512i
mm512_castsi512_ps:
simd: proc mm512_castsi512_ps(a: m512i): m512
mm512_cmpgt_epi32_mask:
simd: proc mm512_cmpgt_epi32_mask(a, b: m512i): mmask16
mm512_cvtepi32_ps:
simd: proc mm512_cvtepi32_ps(a: m512i): m512
mm512_cvtps_epi32:
simd: proc mm512_cvtps_epi32(a: m512): m512i
mm512_fmadd_pd:
simd: proc mm512_fmadd_pd(a, b, c: m512d): m512d
mm512_fmadd_ps:
simd: proc mm512_fmadd_ps(a, b, c: m512): m512
mm512_i32gather_epi32:
simd: proc mm512_i32gather_epi32(i: m512i; m: ptr (uint32 or int32); s: int32): m512i
mm512_load_pd:
simd: proc mm512_load_pd(aligned_mem_addr: ptr float64): m512d
mm512_load_ps:
simd: proc mm512_load_ps(aligned_mem_addr: ptr float32): m512
mm512_load_si512:
simd: proc mm512_load_si512(mem_addr: ptr SomeInteger): m512i
mm512_loadu_pd:
simd: proc mm512_loadu_pd(mem_addr: ptr float64): m512d
mm512_loadu_ps:
simd: proc mm512_loadu_ps(mem_addr: ptr float32): m512
mm512_loadu_si512:
simd: proc mm512_loadu_si512(mem_addr: ptr SomeInteger): m512i
mm512_maskz_set1_epi32:
simd: proc mm512_maskz_set1_epi32(k: mmask16; a: cint): m512i
mm512_max_ps:
simd: proc mm512_max_ps(a, b: m512): m512
mm512_min_ps:
simd: proc mm512_min_ps(a, b: m512): m512
mm512_movepi8_mask:
simd: proc mm512_movepi8_mask(a: m512i): mmask64
mm512_movm_epi32:
simd: proc mm512_movm_epi32(a: mmask16): m512i
mm512_mullo_epi32:
simd: proc mm512_mullo_epi32(a, b: m512i): m512i
mm512_mullo_epi64:
simd: proc mm512_mullo_epi64(a, b: m512i): m512i
mm512_mul_pd:
simd: proc mm512_mul_pd(a, b: m512d): m512d
mm512_mul_ps:
simd: proc mm512_mul_ps(a, b: m512): m512
mm512_or_ps:
simd: proc mm512_or_ps(a, b: m512): m512
mm512_set1_epi16:
simd: proc mm512_set1_epi16(a: int16 or uint16): m512i
mm512_set1_epi32:
simd: proc mm512_set1_epi32(a: int32 or uint32): m512i
mm512_set1_epi64:
simd: proc mm512_set1_epi64(a: int64 or uint64): m512i
mm512_set1_epi8:
simd: proc mm512_set1_epi8(a: int8 or uint8): m512i
mm512_set1_pd:
simd: proc mm512_set1_pd(a: float64): m512d
mm512_set1_ps:
simd: proc mm512_set1_ps(a: float32): m512
mm512_setzero_pd:
simd: proc mm512_setzero_pd(): m512d
mm512_setzero_ps:
simd: proc mm512_setzero_ps(): m512
mm512_setzero_si512:
simd: proc mm512_setzero_si512(): m512i
mm512_slli_epi32:
simd: proc mm512_slli_epi32(a: m512i; count: int32): m512i
mm512_srli_epi32:
simd: proc mm512_srli_epi32(a: m512i; count: int32): m512i
mm512_store_pd:
simd: proc mm512_store_pd(mem_addr: ptr float64; a: m512d)
mm512_store_ps:
simd: proc mm512_store_ps(mem_addr: ptr float32; a: m512)
mm512_storeu_pd:
simd: proc mm512_storeu_pd(mem_addr: ptr float64; a: m512d)
mm512_storeu_ps:
simd: proc mm512_storeu_ps(mem_addr: ptr float32; a: m512)
mm512_storeu_si512:
simd: proc mm512_storeu_si512(mem_addr: ptr SomeInteger; a: m512i)
mm512_sub_ps:
simd: proc mm512_sub_ps(a, b: m512): m512
mm_add_epi16:
simd: proc mm_add_epi16(a, b: m128i): m128i
mm_add_epi32:
simd: proc mm_add_epi32(a, b: m128i): m128i
mm_add_epi64:
simd: proc mm_add_epi64(a, b: m128i): m128i
mm_add_epi8:
simd: proc mm_add_epi8(a, b: m128i): m128i
mm_add_pd:
simd: proc mm_add_pd(a, b: m128d): m128d
mm_add_ps:
simd: proc mm_add_ps(a, b: m128): m128
mm_add_ss:
simd: proc mm_add_ss(a, b: m128): m128
mm_and_si128:
simd: proc mm_and_si128(a, b: m128i): m128i
mmask16:
simd: type mmask16
mmask64:
simd: type mmask64
mmax:
math_functions: proc mmax[T: SomeNumber](t1: var Tensor[T]; t2: Tensor[T])
math_functions: proc mmax[T: SomeNumber](t1: var Tensor[T]; args: varargs[Tensor[T]])
mm_castps_si128:
simd: proc mm_castps_si128(a: m128): m128i
mm_castsi128_ps:
simd: proc mm_castsi128_ps(a: m128i): m128
mm_cmpgt_epi32:
simd: proc mm_cmpgt_epi32(a, b: m128i): m128i
mm_cvtepi32_ps:
simd: proc mm_cvtepi32_ps(a: m128i): m128
mm_cvtps_epi32:
simd: proc mm_cvtps_epi32(a: m128): m128i
mm_cvtsi128_si32:
simd: proc mm_cvtsi128_si32(a: m128i): cint
mm_cvtss_f32:
simd: proc mm_cvtss_f32(a: m128): float32
mm_extract_epi16:
simd: proc mm_extract_epi16(a: m128i; imm8: cint): cint
mm_i32gather_epi32:
simd: proc mm_i32gather_epi32(m: ptr (uint32 or int32); i: m128i; s: int32): m128i
mmin:
math_functions: proc mmin[T: SomeNumber](t1: var Tensor[T]; t2: Tensor[T])
math_functions: proc mmin[T: SomeNumber](t1: var Tensor[T]; args: varargs[Tensor[T]])
mm_load_pd:
simd: proc mm_load_pd(aligned_mem_addr: ptr float64): m128d
mm_load_ps:
simd: proc mm_load_ps(aligned_mem_addr: ptr float32): m128
mm_load_si128:
simd: proc mm_load_si128(mem_addr: ptr m128i): m128i
mm_load_ss:
simd: proc mm_load_ss(aligned_mem_addr: ptr float32): m128
mm_loadu_pd:
simd: proc mm_loadu_pd(mem_addr: ptr float64): m128d
mm_loadu_ps:
simd: proc mm_loadu_ps(data: ptr float32): m128
mm_loadu_si128:
simd: proc mm_loadu_si128(mem_addr: ptr m128i): m128i
mm_max_ps:
simd: proc mm_max_ps(a, b: m128): m128
mm_max_ss:
simd: proc mm_max_ss(a, b: m128): m128
mm_min_ps:
simd: proc mm_min_ps(a, b: m128): m128
mm_min_ss:
simd: proc mm_min_ss(a, b: m128): m128
mm_movehdup_ps:
simd: proc mm_movehdup_ps(a: m128): m128
mm_movehl_ps:
simd: proc mm_movehl_ps(a, b: m128): m128
mm_moveldup_ps:
simd: proc mm_moveldup_ps(a: m128): m128
mm_movelh_ps:
simd: proc mm_movelh_ps(a, b: m128): m128
mm_movemask_epi8:
simd: proc mm_movemask_epi8(a: m128i): int32
mm_mul_epu32:
simd: proc mm_mul_epu32(a: m128i; b: m128i): m128i
mm_mullo_epi16:
simd: proc mm_mullo_epi16(a, b: m128i): m128i
mm_mullo_epi32:
simd: proc mm_mullo_epi32(a, b: m128i): m128i
mm_mul_pd:
simd: proc mm_mul_pd(a, b: m128d): m128d
mm_mul_ps:
simd: proc mm_mul_ps(a, b: m128): m128
mm_or_ps:
simd: proc mm_or_ps(a, b: m128): m128
mm_or_si128:
simd: proc mm_or_si128(a, b: m128i): m128i
mm_set1_epi16:
simd: proc mm_set1_epi16(a: int16 or uint16): m128i
mm_set1_epi32:
simd: proc mm_set1_epi32(a: int32 or uint32): m128i
mm_set1_epi64x:
simd: proc mm_set1_epi64x(a: int64 or uint64): m128i
mm_set1_epi8:
simd: proc mm_set1_epi8(a: int8 or uint8): m128i
mm_set1_pd:
simd: proc mm_set1_pd(a: float64): m128d
mm_set1_ps:
simd: proc mm_set1_ps(a: float32): m128
mm_set_epi32:
simd: proc mm_set_epi32(e3, e2, e1, e0: cint): m128i
mm_setzero_pd:
simd: proc mm_setzero_pd(): m128d
mm_setzero_ps:
simd: proc mm_setzero_ps(): m128
mm_setzero_si128:
simd: proc mm_setzero_si128(): m128i
mm_shuffle_epi32:
simd: proc mm_shuffle_epi32(a: m128i; imm8: cint): m128i
mm_slli_epi32:
simd: proc mm_slli_epi32(a: m128i; count: int32): m128i
mm_slli_epi64:
simd: proc mm_slli_epi64(a: m128i; imm8: cint): m128i
mm_srli_epi32:
simd: proc mm_srli_epi32(a: m128i; count: int32): m128i
mm_srli_epi64:
simd: proc mm_srli_epi64(a: m128i; imm8: cint): m128i
mm_store_pd:
simd: proc mm_store_pd(mem_addr: ptr float64; a: m128d)
mm_store_ps:
simd: proc mm_store_ps(mem_addr: ptr float32; a: m128)
mm_storeu_pd:
simd: proc mm_storeu_pd(mem_addr: ptr float64; a: m128d)
mm_storeu_ps:
simd: proc mm_storeu_ps(mem_addr: ptr float32; a: m128)
mm_storeu_si128:
simd: proc mm_storeu_si128(mem_addr: ptr m128i; a: m128i)
mm_sub_pd:
simd: proc mm_sub_pd(a, b: m128d): m128d
mm_sub_ps:
simd: proc mm_sub_ps(a, b: m128): m128
mnegate:
math_functions: proc mnegate[T: SomeSignedInt | SomeFloat](t: var Tensor[T])
moveaxis:
shapeshifting: proc moveaxis(t: Tensor; initial: Natural; target: Natural): Tensor
mpairs:
accessors: iterator mpairs[T](t: var Tensor[T]): (seq[int], var T)
dynamic_stack_arrays: iterator mpairs[T](a: var DynamicStackArray[T]): (int, var T)
mreciprocal:
math_functions: proc mreciprocal[T: Complex[float32] or Complex[float64]](t: var Tensor[T])
math_functions: proc mreciprocal[T: SomeFloat](t: var Tensor[T])
mrelu:
nnp_activation: proc mrelu[T](t: var Tensor[T])
MSELoss:
mean_square_error_loss: type MSELoss
mse_loss:
mean_square_error_loss: proc mse_loss[TT](input: Variable[TT]; target: TT): Variable[TT]
msigmoid:
nnp_activation: proc msigmoid[T: SomeFloat](t: var Tensor[T])
mtanh:
nnp_activation: proc mtanh[T: SomeFloat](t: var Tensor[T])
mzip:
accessors: iterator mzip[T, U](t1: var Tensor[T]; t2: Tensor[U]): (var T, U)
accessors: iterator mzip[T, U](t1: var Tensor[T]; t2: Tensor[U]; offset, size: int): (var T, U)
accessors: iterator mzip[T, U, V](t1: var Tensor[T]; t2: Tensor[U]; t3: Tensor[V]): (var T, U, V)
accessors: iterator mzip[T, U, V](t1: var Tensor[T]; t2: Tensor[U]; t3: Tensor[V]; offset, size: int): (var T, U, V)
naive_gemv_fallback:
naive_l2_gemv: proc naive_gemv_fallback[T: SomeInteger](alpha: T; A: Tensor[T]; x: Tensor[T]; beta: T; y: var Tensor[T])
nchw_channels:
p_nnp_types: proc nchw_channels[T](input: Tensor[T]): int
nchw_height:
p_nnp_types: proc nchw_height[T](input: Tensor[T]): int
nchw_width:
p_nnp_types: proc nchw_width[T](input: Tensor[T]): int
nearestNeighbors:
neighbors: proc nearestNeighbors[T](X: Tensor[T]; eps: float; metric: typedesc[AnyMetric]; p = 2.0; useNaiveNearestNeighbor: static bool = false): seq[Tensor[int]]
negate:
math_functions: proc negate[T: SomeSignedInt | SomeFloat](t: Tensor[T]): Tensor[T]
network:
nn_dsl: macro network(modelName: untyped; config: untyped): untyped
newClStorage:
opencl_backend: proc newClStorage[T: SomeFloat](length: int): ClStorage[T]
newClTensor:
p_init_opencl: proc newClTensor[T: SomeFloat](shape: Metadata; layout: OrderType = rowMajor): ClTensor[T]
p_init_opencl: proc newClTensor[T: SomeFloat](shape: varargs[int]; layout: OrderType = rowMajor): ClTensor[T]
newContext:
autograd_common: proc newContext(TT: typedesc): Context[TT]
newConv2dDesc:
cudnn_conv_interface: proc newConv2dDesc[T: SomeFloat](padding, strides, dilation: SizeHW): cudnnConvolutionDescriptor_t
newConvAlgoSpace:
cudnn_conv_interface: proc newConvAlgoSpace[T: SomeFloat](srcTensorDesc: cudnnTensorDescriptor_t; kernelDesc: cudnnFilterDescriptor_t; convDesc: cudnnConvolutionDescriptor_t; dstTensorDesc: cudnnTensorDescriptor_t): ConvAlgoSpace[T, cudnnConvolutionFwdAlgo_t]
newCudaStorage:
cuda: proc newCudaStorage[T: SomeFloat](length: int): CudaStorage[T]
newCudaTensor:
p_init_cuda: proc newCudaTensor[T: SomeFloat](shape: Metadata; layout: OrderType = colMajor): CudaTensor[T]
p_init_cuda: proc newCudaTensor[T: SomeFloat](shape: varargs[int]; layout: OrderType = colMajor): CudaTensor[T]
newCudnn4DTensorDesc:
cudnn: proc newCudnn4DTensorDesc[T: SomeFloat](t: CudaTensor[T]): cudnnTensorDescriptor_t
newCudnnConvKernelDesc:
cudnn_conv_interface: proc newCudnnConvKernelDesc[T: SomeFloat](convKernel: CudaTensor[T]): cudnnFilterDescriptor_t
newDiffs:
autograd_common: proc newDiffs[TT](num: Natural): SmallDiffs[TT]
newMatrixUninitColMajor:
init_colmajor: proc newMatrixUninitColMajor[T](M: var Tensor[T]; rows, cols: int)
newParents:
autograd_common: proc newParents[TT](num: Natural): Parents[TT]
newSeqUninit:
sequninit: proc newSeqUninit[T](len: Natural): seq[T]
newSGD:
optimizers: proc newSGD[T](params: varargs[Variable[Tensor[T]]]; learning_rate: T): SGD[Tensor[T]]
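`newSGD` builds a plain stochastic-gradient-descent optimizer over autograd `Variable`s. A minimal sketch (it assumes the `variable` constructor and `backprop` from the autograd modules, plus the optimizer's `update` step):

```nim
import arraymancer

let ctx = newContext(Tensor[float32])
let w = ctx.variable([[1.0'f32, 2.0'f32]].toTensor, requires_grad = true)
var optim = newSGD[float32](w, 0.01'f32)
# after a forward pass and `loss.backprop()`, an `optim.update()`
# applies w -= learning_rate * w.grad to every registered Variable
```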
newTensor:
initialization: proc newTensor[T](shape: Metadata): Tensor[T]
initialization: proc newTensor[T](shape: varargs[int] = [0]): Tensor[T]
newTensorUninit:
init_cpu: proc newTensorUninit[T](size: int): Tensor[T]
init_cpu: proc newTensorUninit[T](shape: Metadata): Tensor[T]
init_cpu: proc newTensorUninit[T](shape: varargs[int] = [0]): Tensor[T]
newTensorWith:
init_cpu: proc newTensorWith[T](shape: Metadata; value: T): Tensor[T]
init_cpu: proc newTensorWith[T](shape: varargs[int]; value: T): Tensor[T]
newTiles:
gemm_tiling: proc newTiles(ukernel: static MicroKernel; T: typedesc; M, N, K: Natural): Tiles[T]
NNPackAuto:
nnp_convolution: Conv2DAlgorithm.NNPackAuto
nnpack_conv2d:
nnpack_interface: proc nnpack_conv2d(input, weight, bias: Tensor[float32]; padding, stride: Size2D): Tensor[float32]
nnpack_conv2d_gradient:
nnpack_interface: proc nnpack_conv2d_gradient[T](input, weight: Tensor[float32]; padding, stride: Size2D; grad_output: Tensor[T]; grad_input, grad_weight: var Tensor[T])
nnp_activation:
nnpack: enum nnp_activation
nnp_convolution_algorithm:
nnpack: enum nnp_convolution_algorithm
nnp_convolution_inference:
nnpack: proc nnp_convolution_inference(algorithm: nnp_convolution_algorithm; transform_strategy: nnp_convolution_transform_strategy; input_channels: csize_t; output_channels: csize_t; input_size: nnp_size; input_padding: nnp_padding; kernel_size: nnp_size; output_subsampling: nnp_size; input: ptr cfloat; kernel: ptr cfloat; bias: ptr cfloat; output: ptr cfloat; workspace_buffer: pointer; workspace_size: ptr csize_t; activation: nnp_activation; activation_parameters: pointer; threadpool: pthreadpool_t; profile: ptr nnp_profile): nnp_status
nnp_convolution_input_gradient:
nnpack: proc nnp_convolution_input_gradient(algorithm: nnp_convolution_algorithm; batch_size: csize_t; input_channels: csize_t; output_channels: csize_t; input_size: nnp_size; input_padding: nnp_padding; kernel_size: nnp_size; grad_output: ptr cfloat; kernel: ptr cfloat; grad_input: ptr cfloat; workspace_buffer: pointer = nil; workspace_size: ptr csize_t = nil; activation: nnp_activation = nnp_activation_identity; activation_parameters: pointer = nil; threadpool: pthreadpool_t = nil; profile: ptr nnp_profile = nil): nnp_status
nnp_convolution_kernel_gradient:
nnpack: proc nnp_convolution_kernel_gradient(algorithm: nnp_convolution_algorithm; batch_size: csize_t; input_channels: csize_t; output_channels: csize_t; input_size: nnp_size; input_padding: nnp_padding; kernel_size: nnp_size; input: ptr cfloat; grad_output: ptr cfloat; grad_kernel: ptr cfloat; workspace_buffer: pointer = nil; workspace_size: ptr csize_t = nil; activation: nnp_activation = nnp_activation_identity; activation_parameters: pointer = nil; threadpool: pthreadpool_t = nil; profile: ptr nnp_profile = nil): nnp_status
nnp_convolution_output:
nnpack: proc nnp_convolution_output(algorithm: nnp_convolution_algorithm; batch_size: csize_t; input_channels: csize_t; output_channels: csize_t; input_size: nnp_size; input_padding: nnp_padding; kernel_size: nnp_size; input: ptr cfloat; kernel: ptr cfloat; bias: ptr cfloat; output: ptr cfloat; workspace_buffer: pointer = nil; workspace_size: ptr csize_t = nil; activation: nnp_activation = nnp_activation_identity; activation_parameters: pointer = nil; threadpool: pthreadpool_t = nil; profile: ptr nnp_profile = nil): nnp_status
nnp_convolution_transform_strategy:
nnpack: enum nnp_convolution_transform_strategy
nnp_convolution_transform_strategy_block_based:
nnpack: const nnp_convolution_transform_strategy_block_based
nnp_convolution_transform_strategy_tuple_based:
nnpack: const nnp_convolution_transform_strategy_tuple_based
nnp_deinitialize:
nnpack: proc nnp_deinitialize(): nnp_status
nnp_fully_connected_inference:
nnpack: proc nnp_fully_connected_inference(input_channels: csize_t; output_channels: csize_t; input: ptr cfloat; kernel: ptr cfloat; output: ptr cfloat; threadpool: pthreadpool_t): nnp_status
nnp_fully_connected_inference_f16f32:
nnpack: proc nnp_fully_connected_inference_f16f32(input_channels: csize_t; output_channels: csize_t; input: ptr cfloat; kernel: pointer; output: ptr cfloat; threadpool: pthreadpool_t): nnp_status
nnp_fully_connected_output:
nnpack: proc nnp_fully_connected_output(batch_size: csize_t; input_channels: csize_t; output_channels: csize_t; input: ptr cfloat; kernel: ptr cfloat; output: ptr cfloat; threadpool: pthreadpool_t; profile: ptr nnp_profile): nnp_status
nnp_initialize:
nnpack: proc nnp_initialize(): nnp_status
nnp_max_pooling_output:
nnpack: proc nnp_max_pooling_output(batch_size: csize_t; channels: csize_t; input_size: nnp_size; input_padding: nnp_padding; pooling_size: nnp_size; pooling_stride: nnp_size; input: ptr cfloat; output: ptr cfloat; threadpool: pthreadpool_t): nnp_status
nnp_padding:
nnpack: object nnp_padding
nnp_profile:
nnpack: object nnp_profile
nnp_relu_input_gradient:
nnpack: proc nnp_relu_input_gradient(batch_size: csize_t; channels: csize_t; grad_output: ptr cfloat; input: ptr cfloat; grad_input: ptr cfloat; negative_slope: cfloat; threadpool: pthreadpool_t): nnp_status
nnp_relu_output:
nnpack: proc nnp_relu_output(batch_size: csize_t; channels: csize_t; input: ptr cfloat; output: ptr cfloat; negative_slope: cfloat; threadpool: pthreadpool_t): nnp_status
nnp_size:
nnpack: object nnp_size
nnp_softmax_output:
nnpack: proc nnp_softmax_output(batch_size: csize_t; channels: csize_t; input: ptr cfloat; output: ptr cfloat; threadpool: pthreadpool_t): nnp_status
nnp_status:
nnpack: enum nnp_status
nnp_status_invalid_activation:
nnpack: const nnp_status_invalid_activation
nnp_status_invalid_activation_parameters:
nnpack: const nnp_status_invalid_activation_parameters
nnp_status_invalid_output_subsampling:
nnpack: const nnp_status_invalid_output_subsampling
Node:
kdtree: type Node
no_grad_mode:
autograd_common: template no_grad_mode(ctx: Context; body: untyped): untyped
nonzero:
aggregate: proc nonzero[T](arg: Tensor[T]): Tensor[int]
numberOne:
p_complex: template numberOne(T: type Complex[float32]): Complex[float32]
p_complex: template numberOne(T: type Complex[float64]): Complex[float64]
p_complex: template numberOne(T: type SomeNumber): SomeNumber
numerical_gradient:
nnp_numerical_gradient: proc numerical_gradient[T: not Tensor](input: T; f: (proc (x: T): T); h = T(0.00001)): T
nnp_numerical_gradient: proc numerical_gradient[T](input: Tensor[T]; f: (proc (x: Tensor[T]): T); h = T(0.00001)): Tensor[T]
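`numerical_gradient` approximates gradients by finite differences, which is mainly useful for testing hand-written backward passes. A minimal sketch checking that the gradient of `sum(x²)` is `2x`:

```nim
import arraymancer

let x = [1.0, 2.0, 3.0].toTensor
let g = numerical_gradient(x, proc (t: Tensor[float]): float = sum(t *. t))
echo g  # approximately [2.0, 4.0, 6.0]
```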
Offset_Values:
p_accessors: IterKind.Offset_Values
omp_barrier:
openmp: template omp_barrier(): untyped
omp_chunks:
openmp: template omp_chunks(omp_size: Natural; chunk_offset, chunk_size: untyped; body: untyped): untyped
omp_critical:
openmp: template omp_critical(body: untyped): untyped
omp_flush:
openmp: macro omp_flush(variables: varargs[untyped]): untyped
omp_for:
openmp: template omp_for(index: untyped; length: Natural; use_simd, nowait: static bool; body: untyped)
OMP_FOR_THRESHOLD:
global_config: const OMP_FOR_THRESHOLD
omp_get_max_threads:
openmp: template omp_get_max_threads(): cint
omp_get_nested:
openmp: template omp_get_nested(): cint
omp_get_num_threads:
openmp: template omp_get_num_threads(): cint
omp_get_thread_num:
openmp: template omp_get_thread_num(): cint
omp_master:
openmp: template omp_master(body: untyped): untyped
OMP_MAX_REDUCE_BLOCKS:
global_config: const OMP_MAX_REDUCE_BLOCKS
OMP_MEMORY_BOUND_GRAIN_SIZE:
openmp: const OMP_MEMORY_BOUND_GRAIN_SIZE
OMP_NON_CONTIGUOUS_SCALE_FACTOR:
openmp: const OMP_NON_CONTIGUOUS_SCALE_FACTOR
omp_parallel:
openmp: template omp_parallel(body: untyped): untyped
omp_parallel_chunks:
openmp: template omp_parallel_chunks(length: Natural; chunk_offset, chunk_size: untyped; omp_grain_size: static Natural; body: untyped): untyped
omp_parallel_chunks_default:
openmp: template omp_parallel_chunks_default(length: Natural; chunk_offset, chunk_size: untyped; body: untyped): untyped
omp_parallel_for:
openmp: template omp_parallel_for(index: untyped; length: Natural; omp_grain_size: static Natural; use_simd: static bool; body: untyped)
omp_parallel_for_default:
openmp: template omp_parallel_for_default(index: untyped; length: Natural; body: untyped)
omp_parallel_if:
openmp: template omp_parallel_if(condition: bool; body: untyped)
omp_set_nested:
openmp: template omp_set_nested(x: cint)
omp_set_num_threads:
openmp: template omp_set_num_threads(x: cint)
omp_single:
openmp: template omp_single(body: untyped): untyped
omp_single_nowait:
openmp: template omp_single_nowait(body: untyped): untyped
omp_suffix:
openmp: proc omp_suffix(genNew: static bool = false): string
omp_task:
openmp: template omp_task(annotation: static string; body: untyped): untyped
omp_taskloop:
openmp: template omp_taskloop(index: untyped; length: Natural; annotation: static string; body: untyped)
omp_taskwait:
openmp: template omp_taskwait(): untyped
ones:
init_cpu: proc ones[T: SomeNumber | Complex[float32] | Complex[float64] | bool](shape: Metadata): Tensor[T]
init_cpu: proc ones[T: SomeNumber | Complex[float32] | Complex[float64] | bool](shape: varargs[int]): Tensor[T]
ones_like:
init_cpu: proc ones_like[T: SomeNumber | Complex[float32] | Complex[float64] | bool](t: Tensor[T]): Tensor[T]
init_cuda: proc ones_like[T: SomeFloat](t: CudaTensor[T]): CudaTensor[T]
init_opencl: proc ones_like[T: SomeFloat](t: ClTensor[T]): ClTensor[T]
opencl:
init_opencl: proc opencl[T: SomeFloat](t: Tensor[T]): ClTensor[T]
Optimizer:
optimizers: type Optimizer
optimizer:
optimizers: proc optimizer[M, T](model: M; OptimizerKind: typedesc[Adam]; learning_rate: T = T(0.001); beta1: T = T(0.9); beta2: T = T(0.999); eps: T = T(1e-8)): Adam[Tensor[T]]
optimizers: proc optimizer[M, T](model: M; OptimizerKind: typedesc[SGD]; learning_rate: T): SGD[Tensor[T]]
optimizers: proc optimizer[M, T](model: M; OptimizerKind: typedesc[SGDMomentum]; learning_rate: T; momentum: T = T(0.0); decay: T = T(0.0); nesterov = false): SGDMomentum[Tensor[T]]
optimizerAdam:
optimizers: proc optimizerAdam[M, T](model: M; learning_rate: T; beta1: T = T(0.9); beta2: T = T(0.999); eps: T = T(1e-8)): Adam[Tensor[T]]
optimizerSGD:
optimizers: proc optimizerSGD[M, T](model: M; learning_rate: T): SGD[Tensor[T]]
optimizerSGDMomentum:
optimizers: proc optimizerSGDMomentum[M, T](model: M; learning_rate: T; momentum = T(0.0); decay = T(0.0); nesterov = false): SGDMomentum[Tensor[T]]
orgqr:
auxiliary_lapack: proc orgqr[T: SomeFloat](rv_q: var Tensor[T]; tau: openArray[T]; scratchspace: var seq[T])
ormqr:
auxiliary_lapack: proc ormqr[T: SomeFloat](C: var Tensor[T]; Q: Tensor[T]; tau: openArray[T]; side, trans: static char; scratchspace: var seq[T])
outShape:
conv2D: proc outShape[T](self: Conv2D[T]): seq[int]
embedding: proc outShape[T](self: Embedding[T]): seq[int]
flatten: proc outShape[T](self: Flatten[T]): seq[int]
gcn: proc outShape[T](self: GCNLayer[T]): seq[int]
linear: proc outShape[T](self: Linear[T]): seq[int]
maxpool2D: proc outShape[T](self: MaxPool2D[T]): seq[int]
overload:
overload: macro overload(overloaded_name: untyped; lapack_name: typed{nkSym}): untyped
pack_A_mc_kc:
gemm_packing: proc pack_A_mc_kc[T; ukernel: static MicroKernel](packedA: ptr UncheckedArray[T]; mc, kc: int; A: MatrixView[T])
pack_B_kc_nc:
gemm_packing: proc pack_B_kc_nc[T; ukernel: static MicroKernel](packedB: ptr UncheckedArray[T]; kc, nc: int; B: MatrixView[T])
pairs:
accessors: iterator pairs[T](t: Tensor[T]): (seq[int], T)
dynamic_stack_arrays: iterator pairs[T](a: DynamicStackArray[T]): (int, T)
pairwiseDistances:
distances: proc pairwiseDistances(metric: typedesc[AnyMetric]; x, y: Tensor[float]; p = 2.0; squared: static bool = false): Tensor[float]
partitionMNK:
gemm_tiling: proc partitionMNK(ukernel: static MicroKernel; T: typedesc; M, N, K: Natural): tuple[mc, nc, kc: int]
Payload:
autograd_common: object Payload
PayloadKind:
autograd_common: enum PayloadKind
pca:
pca: proc pca[T: SomeFloat](X: Tensor[T]; n_components = 2; center: static bool = true; n_oversamples = 5; n_power_iters = 2): tuple[projected: Tensor[T], components: Tensor[T]]
PCA_Detailed:
pca: object PCA_Detailed
pca_detailed:
pca: proc pca_detailed[T: SomeFloat](X: Tensor[T]; n_components = 2; center: static bool = true; n_oversamples = 5; n_power_iters = 2): PCA_Detailed[T]
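`pca` returns only the projection and the principal axes, while `pca_detailed` keeps the full `PCA_Detailed` record. A minimal sketch:

```nim
import arraymancer

let x = randomTensor([50, 5], 1.0)  # 50 observations, 5 features
let (projected, components) = pca(x, n_components = 2)
echo projected.shape                # 50 x 2
echo components.shape               # the principal axes
echo pca_detailed(x, n_components = 2)
```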
percentile:
aggregate: proc percentile[T](arg: Tensor[T]; p: int; isSorted = false): float
permute:
shapeshifting: proc permute(t: Tensor; dims: varargs[int]): Tensor
permuteImpl:
p_shapeshifting: proc permuteImpl[T](result: var Tensor[T]; dims: varargs[int])
phase:
math_functions: proc phase(t: Tensor[Complex[float32]]): Tensor[float32]
math_functions: proc phase(t: Tensor[Complex[float64]]): Tensor[float64]
pinv:
algebra: proc pinv[T: Complex32 | Complex64](A: Tensor[T]; rcond = 1e-15): Tensor[T]
algebra: proc pinv[T: SomeFloat](A: Tensor[T]; rcond = 1e-15): Tensor[T]
pkSeq:
autograd_common: PayloadKind.pkSeq
pkVar:
autograd_common: PayloadKind.pkVar
pop:
ast_utils: proc pop(tree: var NimNode): NimNode
prefetch:
compiler_optim_hints: template prefetch[T](data: ptr (T or UncheckedArray[T]); rw: static PrefetchRW = Read; locality: static PrefetchLocality = HighTemporalLocality)
PrefetchLocality:
compiler_optim_hints: enum PrefetchLocality
PrefetchRW:
compiler_optim_hints: enum PrefetchRW
pretty:
display: proc pretty[T](t: Tensor[T]; precision = -1): string
display_cuda: proc pretty[T](t: CudaTensor[T]; precision = -1): string
prettyImpl:
p_display: proc prettyImpl[T](t: Tensor[T]; inputRank = 0; alignBy = 0; alignSpacing = 4; precision = -1): string
product:
aggregate: proc product[T](arg: Tensor[T]): T
aggregate: proc product[T](arg: Tensor[T]; axis: int): Tensor[T]
dynamic_stack_arrays: proc product[T: SomeNumber](a: DynamicStackArray[T]): T
pthreadpool_t:
nnpack: type pthreadpool_t
qr:
decomposition: proc qr[T: SupportedDecomposition](a: Tensor[T]): tuple[Q, R: Tensor[T]]
query:
kdtree: proc query[T](tree: KDTree[T]; x: Tensor[T]; k = 1; eps = 0.0; metric: typedesc[AnyMetric] = Euclidean; p = 2.0; distanceUpperBound = Inf): tuple[dist: Tensor[T], idx: Tensor[int]]
query_ball_point:
kdtree: proc query_ball_point[T](tree: KDTree[T]; x: Tensor[T]; radius: float; eps = 0.0; metric: typedesc[AnyMetric] = Euclidean; p = 2.0): tuple[dist: Tensor[T], idx: Tensor[int]]
radToDeg:
ufunc: proc radToDeg[T](t: Tensor[T]): Tensor[T]
randomNormalTensor:
init_cpu: proc randomNormalTensor[T: SomeFloat](shape: varargs[int]; mean: T = 0; std: T = 1): Tensor[T]
randomTensor:
init_cpu: proc randomTensor[T: bool](shape: varargs[int]): Tensor[T]
init_cpu: proc randomTensor(shape: varargs[int]; max: int): Tensor[int]
init_cpu: proc randomTensor[T](shape: varargs[int]; sample_source: openArray[T]): Tensor[T]
init_cpu: proc randomTensor[T](shape: varargs[int]; slice: Slice[T]): Tensor[T]
init_cpu: proc randomTensor[T: SomeFloat](shape: varargs[int]; max: T): Tensor[T]
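Illustrative calls covering the randomTensor overloads (when a second argument follows, the varargs shape must be bracketed):

    import arraymancer

    let a = randomTensor([2, 3], 10)         # Tensor[int], values up to 10
    let b = randomTensor([2, 3], 1.0)        # Tensor[float], values up to 1.0
    let c = randomTensor([2, 3], 1..6)       # values sampled from the slice 1..6
    let d = randomTensor([10], [0, 5, 9])    # values sampled from the given source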
rank:
data_structure: proc rank[T](t: CudaTensor[T] or ClTensor[T]): range[0 .. LASER_MAXRANK]
datatypes: proc rank[T](t: Tensor[T]): range[0 .. LASER_MAXRANK]
raw_data_unaligned:
datatypes: macro raw_data_unaligned(body: untyped): untyped
RawImmutableView:
datatypes: type RawImmutableView
RawMutableView:
datatypes: type RawMutableView
read_csv:
io_csv: proc read_csv[T: SomeNumber | bool | string](csvPath: string; skipHeader = false; separator = ','; quote = '\"'): Tensor[T]
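A short usage sketch; the file name is hypothetical and the element type parameter selects how cells are parsed:

    import arraymancer

    # "measurements.csv" is assumed to have a header row and numeric cells.
    let t = read_csv[float]("measurements.csv", skipHeader = true)
    echo t.shape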
readFloat32BE:
io_stream_readers: proc readFloat32BE(stream: Stream): float32
readFloat32LE:
io_stream_readers: proc readFloat32LE(stream: Stream): float32
readFloat64BE:
io_stream_readers: proc readFloat64BE(stream: Stream): float64
readFloat64LE:
io_stream_readers: proc readFloat64LE(stream: Stream): float64
read_hdf5:
io_hdf5: proc read_hdf5[T: SomeNumber](h5f: var H5FileObj; name, group: Option[string]; number: Option[int]): Tensor[T]
io_hdf5: proc read_hdf5[T: SomeNumber](h5f: var H5FileObj; name, group = ""; number = -1): Tensor[ T]
io_hdf5: proc read_hdf5[T: SomeNumber](hdf5Path: string; name, group = ""; number = -1): Tensor[ T]
read_image:
io_image: proc read_image(buffer: seq[byte]): Tensor[uint8]
io_image: proc read_image(filepath: string): Tensor[uint8]
readInt32BE:
io_stream_readers: proc readInt32BE(stream: Stream): int32
readInt32LE:
io_stream_readers: proc readInt32LE(stream: Stream): int32
readInt64BE:
io_stream_readers: proc readInt64BE(stream: Stream): int64
readInt64LE:
io_stream_readers: proc readInt64LE(stream: Stream): int64
read_mnist_images:
mnist: proc read_mnist_images(imgsPath: string): Tensor[uint8]
read_mnist_labels:
mnist: proc read_mnist_labels(stream: Stream): Tensor[uint8]
mnist: proc read_mnist_labels(labelsPath: string): Tensor[uint8]
read_npy:
io_npy: proc read_npy[T: SomeNumber](npyPath: string): Tensor[T]
readUInt16LE:
io_stream_readers: proc readUInt16LE(stream: Stream): uint16
readUInt32BE:
io_stream_readers: proc readUInt32BE(stream: Stream): uint32
readUInt32LE:
io_stream_readers: proc readUInt32LE(stream: Stream): uint32
readUInt64BE:
io_stream_readers: proc readUInt64BE(stream: Stream): uint64
readUInt64LE:
io_stream_readers: proc readUInt64LE(stream: Stream): uint64
real:
complex: proc real[T: SomeFloat](t: Tensor[Complex[T]]): Tensor[T]
real=:
complex: proc real=[T: SomeFloat](t: var Tensor[Complex[T]]; val: T)
complex: proc real=[T: SomeFloat](t: var Tensor[Complex[T]]; val: Tensor[T])
reciprocal:
math_functions: proc reciprocal[T: Complex[float32] or Complex[float64]](t: Tensor[T]): Tensor[T]
math_functions: proc reciprocal[T: SomeFloat](t: Tensor[T]): Tensor[T]
reduce:
higher_order_foldreduce: proc reduce[T](arg: Tensor[T]; f: (T, T) -> T): T
higher_order_foldreduce: proc reduce[T](arg: Tensor[T]; f: (Tensor[T], Tensor[T]) -> Tensor[T]; axis: int): Tensor[T]
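A minimal sketch of both reduce overloads (explicit proc literals are used so the generic parameter is inferred):

    import arraymancer

    let t = [1, 2, 3, 4].toTensor
    echo t.reduce(proc (a, b: int): int = a + b)    # 10

    # The axis overload folds tensor slices along the given axis.
    let m = [[1, 2], [3, 4]].toTensor
    echo m.reduce(proc (a, b: Tensor[int]): Tensor[int] = a + b, axis = 0)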
reduce_axis_inline:
higher_order_foldreduce: template reduce_axis_inline[T](arg: Tensor[T]; reduction_axis: int; op: untyped): untyped
reduce_inline:
higher_order_foldreduce: template reduce_inline[T](arg: Tensor[T]; op: untyped): untyped
register_node:
autograd_common: proc register_node[TT](name: static string; gate: Gate[TT]; backward: Backward[TT]; result: Variable[TT] or seq[Variable[TT]]; parents: varargs[Variable[TT]])
relative_error:
common_error_functions: proc relative_error[T: SomeFloat](y, y_true: T | Complex[T]): T
common_error_functions: proc relative_error[T: SomeFloat](y, y_true: Tensor[T] | Tensor[Complex[T]]): Tensor[T]
RelaxedRankOne:
p_accessors_macros_write: const RelaxedRankOne
relu:
nnp_activation: proc relu[T](t: Tensor[T]): Tensor[T]
relu: proc relu[TT](a: Variable[TT]): Variable[TT]
ReluActivation:
relu: type ReluActivation
relu_backward:
nnp_activation: proc relu_backward[T](gradient: Tensor[T]; cached_tensor: Tensor[T]): Tensor[T]
repeat_values:
shapeshifting: proc repeat_values[T](t: Tensor[T]; reps: int; axis = -1): Tensor[T]
shapeshifting: proc repeat_values[T](t: Tensor[T]; reps: openArray[int]): Tensor[T]
shapeshifting: proc repeat_values[T](t: Tensor[T]; reps: Tensor[int]): Tensor[T]
replaceNodes:
ast_utils: proc replaceNodes(ast: NimNode; replacements: NimNode; to_replace: NimNode): NimNode
replaceSymsByIdents:
ast_utils: proc replaceSymsByIdents(ast: NimNode): NimNode
reshape:
gates_shapeshifting_views: proc reshape[TT](a: Variable[TT]; shape: Metadata): Variable[TT]
gates_shapeshifting_views: proc reshape[TT](a: Variable[TT]; shape: varargs[int]): Variable[TT]
shapeshifting: proc reshape(t: Tensor; new_shape: Metadata): Tensor
shapeshifting: proc reshape(t: Tensor; new_shape: varargs[int]): Tensor
shapeshifting_cuda: proc reshape(t: CudaTensor; new_shape: varargs[int]): CudaTensor
ReshapeGate:
gates_shapeshifting_views: type ReshapeGate
reshapeImpl:
p_shapeshifting: proc reshapeImpl(t: AnyTensor; new_shape: varargs[int] | Metadata | seq[int]; result: var AnyTensor; infer: static bool)
reshape_infer:
shapeshifting: proc reshape_infer(t: Tensor; new_shape: varargs[int]): Tensor
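A small sketch of reshape and reshape_infer, assuming (NumPy-style) that -1 marks the dimension to be inferred:

    import arraymancer, std/sequtils

    let v = toSeq(1..12).toTensor           # rank-1 tensor of 12 elements
    echo v.reshape(3, 4).shape              # [3, 4]
    echo v.reshape_infer(2, -1).shape       # [2, 6], second axis inferred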
reshape_no_copy:
p_shapeshifting: proc reshape_no_copy(t: AnyTensor; new_shape: varargs[int] | Metadata | seq[int]; result: var AnyTensor; layout: OrderType)
reshape_with_copy:
p_shapeshifting: proc reshape_with_copy[T](t: Tensor[T]; new_shape: varargs[int] | Metadata | seq[int]; result: var Tensor[T])
returnEmptyIfEmpty:
p_empty_tensors: macro returnEmptyIfEmpty(tensors: varargs[untyped]): untyped
reversed:
dynamic_stack_arrays: proc reversed(a: DynamicStackArray): DynamicStackArray
dynamic_stack_arrays: proc reversed(a: DynamicStackArray; result: var DynamicStackArray)
rewriteTensor_AddMultiply:
optim_ops_fusion: template rewriteTensor_AddMultiply{ C + `*`(A, B) }[T](A, B, C: Tensor[T]): auto
rewriteTensor_MultiplyAdd:
optim_ops_fusion: template rewriteTensor_MultiplyAdd{ `*`(A, B) + C }[T](A, B, C: Tensor[T]): auto
rewriteTensor_MultiplyAdd_inplace:
optim_ops_fusion: template rewriteTensor_MultiplyAdd_inplace{ C += `*`(A, B) }[T](A, B: Tensor[T]; C: var Tensor[T])
rewriteToTensorReshape:
optim_ops_fusion: template rewriteToTensorReshape{ reshape(toTensor(oa, dummy_bugfix), shape) }(oa: openArray; shape: varargs[int]; dummy_bugfix: static[int]): auto
roll:
shapeshifting: proc roll[T](t: Tensor[T]; shift: int): Tensor[T]
shapeshifting: proc roll[T](t: Tensor[T]; shift: int; axis: Natural): Tensor[T]
round:
ufunc: proc round[T](t: Tensor[T]): Tensor[T]
round_step_down:
align_unroller: proc round_step_down(x: Natural; step: static Natural): int
round_step_up:
align_unroller: proc round_step_up(x: Natural; step: static Natural): int
same:
math_functions: ConvolveMode.same
set_diagonal:
special_matrices: proc set_diagonal[T](a: var Tensor[T]; d: Tensor[T]; k = 0; anti = false)
setDiff:
algorithms: proc setDiff[T](t1, t2: Tensor[T]; symmetric = false): Tensor[T]
setLen:
dynamic_stack_arrays: proc setLen(a: var DynamicStackArray; len: int)
setZero:
initialization: proc setZero[T](t: var Tensor[T]; check_contiguous: static bool = true)
SGD:
optimizers: object SGD
SGDMomentum:
optimizers: object SGDMomentum
sgn:
math_functions: proc sgn[T: SomeNumber](t: Tensor[T]): Tensor[int]
shape_to_strides:
data_structure: proc shape_to_strides(shape: Metadata; layout: OrderType = rowMajor; result: var Metadata)
sigmoid:
nnp_activation: proc sigmoid[T: SomeFloat](t: Tensor[T]): Tensor[T]
p_activation: proc sigmoid[T: SomeFloat](x: T): T
sigmoid: proc sigmoid[TT](a: Variable[TT]): Variable[TT]
SigmoidActivation:
sigmoid: type SigmoidActivation
sigmoid_backward:
nnp_activation: proc sigmoid_backward[T](gradient: Tensor[T]; cached_tensor: Tensor[T]): Tensor[T]
sigmoid_cross_entropy:
cross_entropy_losses: proc sigmoid_cross_entropy[TT](a: Variable[TT]; target: TT): Variable[TT]
nnp_sigmoid_cross_entropy: proc sigmoid_cross_entropy[T](input, target: Tensor[T]): T
sigmoid_cross_entropy_backward:
nnp_sigmoid_cross_entropy: proc sigmoid_cross_entropy_backward[T](gradient: Tensor[T] or T; cached_tensor: Tensor[T]; target: Tensor[T]): Tensor[T]
SigmoidCrossEntropyLoss:
cross_entropy_losses: type SigmoidCrossEntropyLoss
sin:
ufunc: proc sin[T](t: Tensor[T]): Tensor[T]
sinc:
math_functions: proc sinc[T: SomeFloat](x: T; normalized: static bool = true): T
math_functions: proc sinc[T: SomeFloat](t: Tensor[T]; normalized: static bool = true): Tensor[T]
sinh:
ufunc: proc sinh[T](t: Tensor[T]): Tensor[T]
size:
data_structure: proc size[T](t: CudaTensor[T] or ClTensor[T]): Natural
datatypes: proc size[T](t: Tensor[T]): Natural
Size2D:
p_nnp_types: tuple Size2D
SizeHW:
cudnn_conv_interface: type SizeHW
skipIfEmpty:
p_empty_tensors: template skipIfEmpty(t: typed): untyped
sliceDispatchImpl:
p_accessors_macros_read: proc sliceDispatchImpl(result: NimNode; args: NimNode; isRead: bool)
slicer:
p_accessors_macros_read: proc slicer[T](t: AnyTensor[T]; ellipsis: Ellipsis; slices: openArray[SteppedSlice]): AnyTensor[T]
p_accessors_macros_read: proc slicer[T](t: AnyTensor[T]; slices: openArray[SteppedSlice]): AnyTensor[T]
p_accessors_macros_read: proc slicer[T](t: AnyTensor[T]; slices: openArray[SteppedSlice]; ellipsis: Ellipsis): AnyTensor[T]
p_accessors_macros_read: proc slicer[T](t: AnyTensor[T]; slices1: openArray[SteppedSlice]; ellipsis: Ellipsis; slices2: openArray[SteppedSlice]): AnyTensor[T]
p_accessors_macros_read: proc slicer[T](t: Tensor[T]; slices: ArrayOfSlices): Tensor[T]
slicerImpl:
p_accessors_macros_read: template slicerImpl[T](result: AnyTensor[T] | var AnyTensor[T]; slices: ArrayOfSlices): untyped
slicerMut:
p_accessors_macros_write: proc slicerMut[T](t: var Tensor[T]; ellipsis: Ellipsis; slices: openArray[SteppedSlice]; oa: openArray)
p_accessors_macros_write: proc slicerMut[T](t: var Tensor[T]; ellipsis: Ellipsis; slices: openArray[SteppedSlice]; val: T)
p_accessors_macros_write: proc slicerMut[T](t: var Tensor[T]; ellipsis: Ellipsis; slices: openArray[SteppedSlice]; t2: Tensor[T])
p_accessors_macros_write: proc slicerMut[T](t: var Tensor[T]; slices: openArray[SteppedSlice]; ellipsis: Ellipsis; oa: openArray)
p_accessors_macros_write: proc slicerMut[T](t: var Tensor[T]; slices1: openArray[SteppedSlice]; ellipsis: Ellipsis; slices2: openArray[SteppedSlice]; oa: openArray)
p_accessors_macros_write: proc slicerMut[T](t: var Tensor[T]; slices1: openArray[SteppedSlice]; ellipsis: Ellipsis; slices2: openArray[SteppedSlice]; val: T)
p_accessors_macros_write: proc slicerMut[T](t: var Tensor[T]; slices1: openArray[SteppedSlice]; ellipsis: Ellipsis; slices2: openArray[SteppedSlice]; t2: Tensor[T])
p_accessors_macros_write: proc slicerMut[T](t: var Tensor[T]; slices: openArray[SteppedSlice]; ellipsis: Ellipsis; val: T)
p_accessors_macros_write: proc slicerMut[T](t: var Tensor[T]; slices: openArray[SteppedSlice]; ellipsis: Ellipsis; t2: Tensor[T])
p_accessors_macros_write: proc slicerMut[T](t: var Tensor[T]; slices: openArray[SteppedSlice]; oa: openArray)
p_accessors_macros_write: proc slicerMut[T](t: var Tensor[T]; slices: openArray[SteppedSlice]; val: T)
p_accessors_macros_write: proc slicerMut[T](t: var Tensor[T]; slices: openArray[SteppedSlice]; t2: Tensor[T])
slice_typed_dispatch:
p_accessors_macros_read: macro slice_typed_dispatch(t: typed; args: varargs[typed]): untyped
slice_typed_dispatch_mut:
p_accessors_macros_write: macro slice_typed_dispatch_mut(t: typed; args: varargs[typed]; val: typed): untyped
slice_typed_dispatch_var:
p_accessors_macros_write: macro slice_typed_dispatch_var(t: typed; args: varargs[typed]): untyped
SmallDiffs:
autograd_common: type SmallDiffs
softmax:
nnp_softmax: proc softmax[T](input: Tensor[T]): Tensor[T]
softmax: proc softmax[TT](a: Variable[TT]): Variable[TT]
SoftmaxActivation:
softmax: type SoftmaxActivation
softmax_cross_entropy:
cross_entropy_losses: proc softmax_cross_entropy[TT](a: Variable[TT]; target: TT): Variable[TT]
nnp_softmax_cross_entropy: proc softmax_cross_entropy[T](input, target: Tensor[T]): T
softmax_cross_entropy_backward:
nnp_softmax_cross_entropy: proc softmax_cross_entropy_backward[T](gradient: Tensor[T] or T; cached_tensor: Tensor[T]; target: Tensor[T]): Tensor[T]
SoftmaxCrossEntropyLoss:
cross_entropy_losses: type SoftmaxCrossEntropyLoss
solve:
linear_systems: proc solve[T: SomeFloat](a, b: Tensor[T]; kind: MatrixKind = mkGeneral): Tensor[T]
sort:
algorithms: proc sort[T](t: var Tensor[T]; order = SortOrder.Ascending)
sorted:
algorithms: proc sorted[T](t: Tensor[T]; order = SortOrder.Ascending): Tensor[T]
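Usage sketch for the in-place and copying variants (SortOrder comes from std/algorithm):

    import arraymancer, std/algorithm

    let t = [3, 1, 2].toTensor
    echo t.sorted                                  # [1, 2, 3]
    echo t.sorted(order = SortOrder.Descending)    # [3, 2, 1]

    var u = t.clone
    u.sort()                                       # sorts u in place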
sparse_softmax_cross_entropy:
cross_entropy_losses: proc sparse_softmax_cross_entropy[TT; Idx: SomeNumber or byte or char or enum](a: Variable[TT]; target: Tensor[Idx]): Variable[TT]
nnp_softmax_cross_entropy: proc sparse_softmax_cross_entropy[T; Idx: SomeNumber or byte or char or enum](input: Tensor[T]; target: Tensor[Idx]): T
sparse_softmax_cross_entropy_backward:
nnp_softmax_cross_entropy: proc sparse_softmax_cross_entropy_backward[T; Idx: SomeNumber or byte or char or enum](gradient: Tensor[T] or T; cached_tensor: Tensor[T]; target: Tensor[Idx]): Tensor[T]
SparseSoftmaxCrossEntropyLoss:
cross_entropy_losses: type SparseSoftmaxCrossEntropyLoss
split:
shapeshifting: proc split[T](t: Tensor[T]; chunk_size: Positive; axis: Natural): seq[Tensor[T]]
sqrt:
ufunc: proc sqrt[T](t: Tensor[T]): Tensor[T]
square:
math_functions: proc square[T](t: Tensor[T]): Tensor[T]
math_functions: proc square[T](x: T): T
squared_error:
common_error_functions: proc squared_error[T: SomeFloat](y, y_true: Tensor[T] | Tensor[Complex[T]]): Tensor[T]
common_error_functions: proc squared_error[T: SomeFloat](y, y_true: Complex[T]): T
common_error_functions: proc squared_error[T: SomeFloat](y, y_true: T): T
squeeze:
gates_shapeshifting_views: proc squeeze[TT](v: Variable[TT]; axis: Natural): Variable[TT]
shapeshifting: proc squeeze(t: AnyTensor): AnyTensor
shapeshifting: proc squeeze(t: Tensor; axis: Natural): Tensor
shapeshifting_cuda: proc squeeze(t: CudaTensor; axis: int): CudaTensor
squeezeImpl:
p_shapeshifting: proc squeezeImpl(t: var AnyTensor)
p_shapeshifting: proc squeezeImpl(t: var AnyTensor; axis: int)
stable_softmax:
p_logsumexp: proc stable_softmax[T](x, max, sumexp: T): T
stack:
gates_shapeshifting_concat_split: proc stack[TT](variables: varargs[Variable[TT]]; axis = 0): Variable[TT]
shapeshifting: proc stack[T](tensors: varargs[Tensor[T]]; axis: Natural = 0): Tensor[T]
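A hedged sketch of stacking along a new axis (to pass the axis argument, wrap the tensors in a bracketed array):

    import arraymancer

    let a = [1, 2, 3].toTensor
    let b = [4, 5, 6].toTensor
    echo stack(a, b).shape              # [2, 3]: new leading axis
    echo stack([a, b], axis = 1).shape  # [3, 2]: new axis in position 1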
std:
aggregate: proc std[T: SomeFloat](arg: Tensor[T]): T
aggregate: proc std[T: SomeFloat](arg: Tensor[T]; axis: int): Tensor[T]
Step:
accessors_macros_syntax: object Step
SteppedSlice:
accessors_macros_syntax: object SteppedSlice
streaming_max_sumexp:
p_logsumexp: proc streaming_max_sumexp[T](t: Tensor[T]): tuple[max: T, sumexp: T]
p_logsumexp: proc streaming_max_sumexp[T](t: Tensor[T]; axis: int): Tensor[tuple[max: T, sumexp: T]]
stride:
gemm_utils: proc stride[T](view: MatrixView[T]; row, col: Natural): MatrixView[T]
stridedBodyTemplate:
foreach_common: template stridedBodyTemplate(): untyped
stridedChunkOffset:
foreach_common: template stridedChunkOffset(): untyped
stridedCoordsIteration:
p_accessors: template stridedCoordsIteration(t, iter_offset, iter_size: typed): untyped
stridedIteration:
p_accessors: template stridedIteration(strider: IterKind; t, iter_offset, iter_size: typed): untyped
stridedIterationYield:
p_accessors: template stridedIterationYield(strider: IterKind; data, i, iter_pos: typed)
stridedVarsSetup:
foreach_common: template stridedVarsSetup(): untyped
SubGate:
gates_basic: type SubGate
sum:
aggregate: proc sum[T](arg: Tensor[T]): T
aggregate: proc sum[T](arg: Tensor[T]; axis: int): Tensor[T]
gates_reduce: proc sum[TT](a: Variable[TT]): Variable[TT]
SumGate:
gates_reduce: type SumGate
SupportedDecomposition:
decomposition_lapack: type SupportedDecomposition
svd:
decomposition: proc svd[T: SupportedDecomposition](A: Tensor[T]): auto
decomposition: proc svd[T: SupportedDecomposition; U: SomeFloat](A: Tensor[T]; _: typedesc[U]): tuple[U: Tensor[T], S: Tensor[U], Vh: Tensor[T]]
svd_randomized:
decomposition_rand: proc svd_randomized[T](A: Tensor[T]; n_components = 2; n_oversamples = 5; n_power_iters = 2): tuple[U, S, Vh: Tensor[T]]
syevr:
decomposition_lapack: proc syevr[T: SupportedDecomposition](a: var Tensor[T]; uplo: static char; return_eigenvectors: static bool; low_idx: int; high_idx: int; eigenval, eigenvec: var Tensor[T]; scratchspace: var seq[T])
symeig:
decomposition: proc symeig[T: SupportedDecomposition](a: Tensor[T]; return_eigenvectors: static bool = false; uplo: static char = 'U'): tuple[eigenval, eigenvec: Tensor[T]]
decomposition: proc symeig[T: SupportedDecomposition](a: Tensor[T]; return_eigenvectors: static bool = false; uplo: static char = 'U'; slice: HSlice[int, int]): tuple[eigenval, eigenvec: Tensor[T]]
syrk:
auxiliary_blas: proc syrk[T: SomeFloat](alpha: T; A: Tensor[T]; mul_order: static SyrkKind; beta: T; C: var Tensor[T]; uplo: static char)
SyrkKind:
auxiliary_blas: enum SyrkKind
tan:
ufunc: proc tan[T](t: Tensor[T]): Tensor[T]
tanh:
nnp_activation: proc tanh[T: SomeFloat](t: Tensor[T]): Tensor[T]
tanh: proc tanh[TT](a: Variable[TT]): Variable[TT]
ufunc: proc tanh[T](t: Tensor[T]): Tensor[T]
TanhActivation:
tanh: type TanhActivation
tanh_backward:
nnp_activation: proc tanh_backward[T](gradient: Tensor[T]; cached_tensor: Tensor[T]): Tensor[T]
Tensor:
datatypes: object Tensor
tile:
shapeshifting: proc tile[T](t: Tensor[T]; reps: varargs[int]): Tensor[T]
Tiles:
gemm_tiling: type Tiles
tnInner:
kdtree: TreeNodeKind.tnInner
tnLeaf:
kdtree: TreeNodeKind.tnLeaf
toArrayOfSlices:
accessors_macros_syntax: proc toArrayOfSlices(s: varargs[SteppedSlice]): ArrayOfSlices
toClpointer:
opencl_backend: proc toClpointer[T](p: ptr T | ptr UncheckedArray[T]): Pmem
opencl_backend: proc toClpointer[T](p: ClStorage[T]): Pmem
opencl_backend: proc toClpointer[T](p: ClTensor[T]): Pmem
to_csv:
io_csv: proc to_csv[T](tensor: Tensor[T]; csvPath: string; separator = ',')
toeplitz:
special_matrices: proc toeplitz[T](c: Tensor[T]): Tensor[T]
special_matrices: proc toeplitz[T](c, r: Tensor[T]): Tensor[T]
toFlatSeq:
exporting: proc toFlatSeq[T](t: Tensor[T]): seq[T]
toHashSet:
initialization: proc toHashSet[T](t: Tensor[T]): HashSet[T]
toMatrixView:
gemm_utils: proc toMatrixView[T](data: ptr T; rowStride, colStride: int): MatrixView[T]
toMetadata:
initialization: proc toMetadata(s: varargs[int]): Metadata
initialization: template toMetadata(m: Metadata): Metadata
toMetadataArray:
datatypes: proc toMetadataArray(s: varargs[int]): Metadata
to_ptr:
gemm_utils: template to_ptr(AB: typed; MR, NR: static int; T: typedesc): untyped
toRawSeq:
exporting: proc toRawSeq[T](t: Tensor[T]): seq[T]
toSeq1D:
exporting: proc toSeq1D[T](t: Tensor[T]): seq[T]
toSeq2D:
exporting: proc toSeq2D[T](t: Tensor[T]): seq[seq[T]]
toSeq3D:
exporting: proc toSeq3D[T](t: Tensor[T]): seq[seq[seq[T]]]
toSeq4D:
exporting: proc toSeq4D[T](t: Tensor[T]): seq[seq[seq[seq[T]]]]
toSeq5D:
exporting: proc toSeq5D[T](t: Tensor[T]): seq[seq[seq[seq[seq[T]]]]]
toTensor:
initialization: proc toTensor[T](a: openArray[T]): auto
initialization: proc toTensor[T](a: SomeSet[T]): auto
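toTensor is the usual entry point from Nim containers; nested literals become tensors of matching rank:

    import arraymancer

    let t = [[1, 2, 3],
             [4, 5, 6]].toTensor
    echo t.shape    # [2, 3]
    echo t.rank     # 2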
toUnsafeView:
initialization: proc toUnsafeView[T: KnownSupportsCopyMem](t: Tensor[T]; aligned: static bool = true): ptr UncheckedArray[T]
transpose:
shapeshifting: proc transpose(t: Tensor): Tensor
shapeshifting_cuda: proc transpose(t: CudaTensor): CudaTensor
TreeNodeKind:
kdtree: enum TreeNodeKind
tri:
special_matrices: proc tri[T](shape_ax1, shape_ax0: int; k: static int = 0; upper: static bool = false): Tensor[T]
special_matrices: proc tri[T](shape: Metadata; k: static int = 0; upper: static bool = false): Tensor[T]
triangular:
distributions: proc triangular(x: float): float
distributions: template triangular[T](t: Tensor[T]): Tensor[float]
triangularKernel:
kde: proc triangularKernel(x, x_i, bw: float): float
trigonometric:
distributions: proc trigonometric(x: float): float
distributions: template trigonometric[T](t: Tensor[T]): Tensor[float]
trigonometricKernel:
kde: proc trigonometricKernel(x, x_i, bw: float): float
tril:
triangular: proc tril[T](a: Tensor[T]; k: static int = 0): Tensor[T]
tril_unit_diag:
triangular: proc tril_unit_diag[T](a: Tensor[T]): Tensor[T]
tril_unit_diag_mut:
triangular: proc tril_unit_diag_mut[T](a: var Tensor[T])
tripleStridedIteration:
p_accessors: template tripleStridedIteration(strider: IterKind; t1, t2, t3, iter_offset, iter_size: typed): untyped
tripleStridedIterationYield:
p_accessors: template tripleStridedIterationYield(strider: IterKind; t1data, t2data, t3data, i, t1_iter_pos, t2_iter_pos, t3_iter_pos: typed)
triu:
triangular: proc triu[T](a: Tensor[T]; k: static int = 0): Tensor[T]
trunc:
ufunc: proc trunc[T](t: Tensor[T]): Tensor[T]
ukernel_generator:
gemm_ukernel_generator: macro ukernel_generator(simd: static CPUFeatureX86; typ: untyped; vectype: untyped; nb_scalars: static int; simd_setZero: untyped; simd_broadcast_value: untyped; simd_load_aligned: untyped; simd_load_unaligned: untyped; simd_store_unaligned: untyped; simd_mul: untyped; simd_add: untyped; simd_fma: untyped): untyped
ukernel_generic_impl:
gemm_ukernel_generic: template ukernel_generic_impl()
ukernel_simd_impl:
gemm_ukernel_generator: macro ukernel_simd_impl(ukernel: static MicroKernel; V: untyped; A, B: untyped; kc: int; simd_setZero, simd_load_aligned, simd_broadcast_value, simd_fma: untyped): untyped
union:
algorithms: proc union[T](t1, t2: Tensor[T]): Tensor[T]
unique:
algorithms: proc unique[T](t: Tensor[T]; isSorted = false): Tensor[T]
algorithms: proc unique[T](t: Tensor[T]; order: SortOrder): Tensor[T]
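A small sketch of the set-like helpers (exact result ordering is an assumption; pass a SortOrder to unique for a guaranteed order):

    import arraymancer, std/algorithm

    let t = [3, 1, 2, 2, 3].toTensor
    echo t.unique                                 # duplicates removed
    echo t.unique(order = SortOrder.Ascending)    # [1, 2, 3]
    echo union([1, 2].toTensor, [2, 3].toTensor)  # elements of either tensor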
unsafe_raw_buf:
datatypes: proc unsafe_raw_buf[T: KnownSupportsCopyMem](t: Tensor[T]; aligned: static bool = true): RawImmutableView[T]
datatypes: proc unsafe_raw_buf[T: KnownSupportsCopyMem](t: var Tensor[T]; aligned: static bool = true): RawMutableView[T]
datatypes: proc unsafe_raw_buf[T: not KnownSupportsCopyMem](t: Tensor[T]; aligned: static bool = true): ptr UncheckedArray[T]
unsafe_raw_offset:
datatypes: proc unsafe_raw_offset[T: KnownSupportsCopyMem](t: Tensor[T]; aligned: static bool = true): RawImmutableView[T]
datatypes: proc unsafe_raw_offset[T: KnownSupportsCopyMem](t: var Tensor[T]; aligned: static bool = true): RawMutableView[T]
datatypes: proc unsafe_raw_offset[T: not KnownSupportsCopyMem](t: Tensor[T]; aligned: static bool = true): ptr UncheckedArray[T]
unsqueeze:
gates_shapeshifting_views: proc unsqueeze[TT](v: Variable[TT]; axis: Natural): Variable[TT]
shapeshifting: proc unsqueeze(t: Tensor; axis: Natural): Tensor
shapeshifting_cuda: proc unsqueeze(t: CudaTensor; axis: int): CudaTensor
unsqueezeImpl:
p_shapeshifting: proc unsqueezeImpl(t: var AnyTensor; axis: int)
unwrap_period:
aggregate: proc unwrap_period[T: SomeNumber](t: Tensor[T]; discont: T = -1; axis = -1; period: T = default(T)): Tensor[T]
update:
optimizers: proc update(self: var Adam)
optimizers: proc update(self: SGD)
optimizers: proc update(self: var SGDMomentum)
valid:
math_functions: ConvolveMode.valid
Values:
p_accessors: IterKind.Values
vander:
special_matrices: proc vander(order: int = -1; increasing = false): Tensor[float]
special_matrices: proc vander[T](x: Tensor[T]; order: int = -1; increasing = false): Tensor[float]
vandermonde:
special_matrices: proc vandermonde(order: int; increasing = true): Tensor[float]
special_matrices: proc vandermonde[T](x: Tensor[T]; order: int = -1; increasing = true): Tensor[float]
special_matrices: proc vandermonde[T](x: Tensor[T]; orders: Tensor[SomeNumber]): Tensor[float]
Variable:
autograd_common: type Variable
variable:
autograd_common: proc variable[TT](ctx: Context[TT]; value: TT; requires_grad = false): Variable[TT]
variance:
aggregate: proc variance[T: SomeFloat](arg: Tensor[T]): T
aggregate: proc variance[T: SomeFloat](arg: Tensor[T]; axis: int): Tensor[T]
whitespaceTokenizer:
tokenizers: iterator whitespaceTokenizer(input: Tensor[string]): seq[string]
withCompilerOptimHints:
compiler_optim_hints: template withCompilerOptimHints()
with_diagonal:
special_matrices: proc with_diagonal[T](a: Tensor[T]; d: Tensor[T]; k = 0; anti = false): Tensor[T]
withMemoryOptimHints:
memory_optimization_hints: template withMemoryOptimHints()
write_bmp:
io_image: proc write_bmp(img: Tensor[uint8]; filepath: string)
write_hdf5:
io_hdf5: proc write_hdf5[T: SomeNumber](h5f: var H5FileObj; t: Tensor[T]; name, group: Option[string])
io_hdf5: proc write_hdf5[T: SomeNumber](h5f: var H5FileObj; t: Tensor[T]; name, group = "")
io_hdf5: proc write_hdf5[T: SomeNumber](t: Tensor[T]; hdf5Path: string; name, group = "")
write_jpg:
io_image: proc write_jpg(img: Tensor[uint8]; filepath: string; quality = 100)
write_npy:
io_npy: proc write_npy[T: SomeNumber](t: Tensor[T]; npyPath: string)
write_png:
io_image: proc write_png(img: Tensor[uint8]; filepath: string)
write_tga:
io_image: proc write_tga(img: Tensor[uint8]; filepath: string)
x86_AVX:
gemm_tiling: CPUFeatureX86.x86_AVX
x86_AVX2:
gemm_tiling: CPUFeatureX86.x86_AVX2
x86_AVX512:
gemm_tiling: CPUFeatureX86.x86_AVX512
x86_AVX_FMA:
gemm_tiling: CPUFeatureX86.x86_AVX_FMA
x86_Generic:
gemm_tiling: CPUFeatureX86.x86_Generic
x86only:
gemm_ukernel_generator: template x86only(): untyped
x86_SSE:
gemm_tiling: CPUFeatureX86.x86_SSE
x86_SSE2:
gemm_tiling: CPUFeatureX86.x86_SSE2
x86_SSE4_1:
gemm_tiling: CPUFeatureX86.x86_SSE4_1
x86_ukernel:
gemm_tiling: proc x86_ukernel(cpu: CPUFeatureX86; T: typedesc; c_unit_stride: bool): MicroKernel
xavier_normal:
init: proc xavier_normal(shape: varargs[int]; T: type): Tensor[T]
xavier_uniform:
init: proc xavier_uniform(shape: varargs[int]; T: type): Tensor[T]
xygrid:
special_matrices: MeshGridIndexing.xygrid
yann_normal:
init: proc yann_normal(shape: varargs[int]; T: type): Tensor[T]
yann_uniform:
init: proc yann_uniform(shape: varargs[int]; T: type): Tensor[T]
zeroGrads:
optimizers: proc zeroGrads(o: Optimizer)
zeros:
init_cpu: proc zeros[T: SomeNumber | Complex[float32] | Complex[float64] | bool](shape: Metadata): Tensor[T]
init_cpu: proc zeros[T: SomeNumber | Complex[float32] | Complex[float64] | bool](shape: varargs[int]): Tensor[T]
zeros_like:
init_cpu: proc zeros_like[T: SomeNumber | Complex[float32] | Complex[float64] | bool](t: Tensor[T]): Tensor[T]
init_cuda: proc zeros_like[T: SomeFloat](t: CudaTensor[T]): CudaTensor[T]
init_opencl: proc zeros_like[T: SomeFloat](t: ClTensor[T]): ClTensor[T]
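Typical CPU-side usage:

    import arraymancer

    let z = zeros[float32](2, 3)    # 2x3 tensor of zeros
    let t = [[1.0, 2.0],
             [3.0, 4.0]].toTensor
    echo zeros_like(t).shape        # [2, 2], same shape and type as t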
zip:
accessors: iterator zip[T, U](t1: Tensor[T]; t2: Tensor[U]): (T, U)
accessors: iterator zip[T, U](t1: Tensor[T]; t2: Tensor[U]; offset, size: int): (T, U)
accessors: iterator zip[T, U, V](t1: Tensor[T]; t2: Tensor[U]; t3: Tensor[V]): (T, U, V)
accessors: iterator zip[T, U, V](t1: Tensor[T]; t2: Tensor[U]; t3: Tensor[V]; offset, size: int): (T, U, V)
dynamic_stack_arrays: iterator zip[T, U](a: DynamicStackArray[T]; b: DynamicStackArray[U]): (T, U)
zipAxis:
accessors: iterator zipAxis[T, U](a: Tensor[T]; b: Tensor[U]; axis: int): tuple[a: Tensor[T], b: Tensor[U]]
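The zip iterators walk several tensors in lockstep, which avoids materialising an intermediate tuple tensor:

    import arraymancer

    let a = [1, 2, 3].toTensor
    let b = [10, 20, 30].toTensor
    for x, y in zip(a, b):
      echo x + y    # 11, 22, 33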