src/arraymancer/tensor/private/incl_accessors_cuda
Arraymancer
Technical reference
Core tensor API
accessors
accessors_macros_read
accessors_macros_syntax
accessors_macros_write
aggregate
algorithms
blas_l3_gemm
complex
cublas
cuda
cuda_global_state
data_structure
display
display_cuda
einsum
exporting
filling_data
higher_order_applymap
higher_order_foldreduce
incl_accessors_cuda
incl_higher_order_cuda
incl_kernels_cuda
init_copy_cpu
init_copy_cuda
init_cpu
init_cuda
init_opencl
lapack
math_functions
memory_optimization_hints
naive_l2_gemv
opencl_backend
opencl_global_state
openmp
operators_blas_l1
operators_blas_l1_cuda
operators_blas_l1_opencl
operators_blas_l2l3
operators_blas_l2l3_cuda
operators_blas_l2l3_opencl
operators_broadcasted
operators_broadcasted_cuda
operators_broadcasted_opencl
operators_comparison
operators_logical
optim_ops_fusion
p_accessors
p_accessors_macros_desugar
p_accessors_macros_read
p_accessors_macros_write
p_checks
p_complex
p_display
p_empty_tensors
p_init_cuda
p_init_opencl
p_kernels_interface_cuda
p_kernels_interface_opencl
p_operator_blas_l2l3
p_shapeshifting
selectors
shapeshifting
shapeshifting_cuda
shapeshifting_opencl
syntactic_sugar
tensor_cuda
tensor_opencl
ufunc
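Taken together, the modules above cover tensor construction, element access and slicing, shape manipulation, and BLAS-backed operators. A minimal usage sketch, assuming only the public `arraymancer` import (the broadcasted `+.` operator comes from operators_broadcasted):

```nim
import arraymancer

let a = [[1, 2, 3],
         [4, 5, 6]].toTensor          # init_cpu: build from nested arrays
echo a[1, _]                          # accessors_macros_syntax: second row
echo a[_, 1..2]                       # columns 1..2 of every row
echo a.reshape(3, 2).transpose        # shapeshifting
echo a +. [[10], [20]].toTensor       # operators_broadcasted: broadcast add
```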
Neural network API
Layers: Convolution 2D
Loss: Cross-Entropy losses
Layers: Embedding
flatten
gcn
Layers: GRU (Gated Recurrent Unit)
Layers: Initializations
Layers: Linear/Dense
Layers: Maxpool 2D
Loss: Mean Square Error
Neural network: Declaration
Optimizers
Activation: ReLU (Rectified Linear Unit)
Activation: Sigmoid
Softmax
Activation: Tanh
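The layer, loss, and activation modules above are tied together by the network declaration macro. The following is only a schematic sketch: the exact DSL varies across Arraymancer versions, and the layer names here simply mirror the titles in this list.

```nim
import arraymancer

# Hypothetical two-layer perceptron; the syntax follows the nn_dsl style
# but should be checked against the Arraymancer version in use.
network TwoLayerNet:
  layers:
    fc1: Linear(784, 64)       # Layers: Linear/Dense
    fc2: Linear(64, 10)
  forward x:
    x.fc1.relu.fc2             # ReLU activation between the two layers
```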
Linear algebra, stats, ML
Accuracy score
algebra
auxiliary_blas
auxiliary_lapack
Common errors, MAE and MSE (L1, L2 loss)
dbscan
Eigenvalue decomposition
decomposition_lapack
Randomized Truncated SVD
distributions
init_colmajor
kde
K-Means
Least squares solver
least_squares_lapack
Linear systems solver
overload
Principal Component Analysis (PCA)
solve_lapack
Special linear algebra matrices
Statistics
triangular
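For the metrics in this group, usage is a single call on two tensors. A small sketch, assuming `accuracy_score` and `mean_squared_error` are exported under exactly the names the titles above suggest:

```nim
import arraymancer

let y_true = [0, 1, 1, 0].toTensor
let y_pred = [0, 1, 0, 0].toTensor
echo accuracy_score(y_true, y_pred)   # fraction of matching labels: 0.75

let y  = [1.0, 2.0, 3.0].toTensor
let yp = [1.5, 2.0, 2.5].toTensor
echo mean_squared_error(y, yp)        # L2 loss: mean of squared differences
```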
IO & Datasets
IMDB
CSV reading and writing
HDF5 files reading and writing
Images reading and writing
Numpy files reading and writing
io_stream_readers
MNIST
util
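A round-trip sketch for the Numpy reader and writer; `write_npy`/`read_npy` are the names I assume from the module titles above, and the CSV and HDF5 modules follow the same read/write pattern:

```nim
import arraymancer

let t = [[1.0, 2.0],
         [3.0, 4.0]].toTensor
t.write_npy("demo.npy")                  # Numpy files reading and writing
let back = read_npy[float64]("demo.npy")
doAssert back == t                       # round-trip preserves shape and data
```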
Autograd
Data structure
Basic operations
Linear algebra operations
Hadamard product (elementwise matrix multiply)
Reduction operations
Concatenation, stacking, splitting, chunking operations
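A minimal autograd sketch covering the data structure (context and variables), the Hadamard-product gate, and a reduction; `newContext`, `variable`, `backprop`, and the `*.` spelling of the elementwise multiply are assumptions based on the module titles above:

```nim
import arraymancer

let ctx = newContext Tensor[float32]           # records the operation tape
let x = ctx.variable(
  [[1.0'f32, 2.0], [3.0, 4.0]].toTensor,
  requires_grad = true)
let loss = (x *. x).sum()                      # Hadamard product, then reduce
loss.backprop()                                # reverse-mode differentiation
echo x.grad                                    # d(sum(x*x))/dx = 2*x
```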
Neuralnet primitives
conv
cudnn
cudnn_conv_interface
Activations
Convolution 2D - CuDNN
Convolution 2D
Embeddings
Gated Recurrent Unit (GRU)
Linear / Dense layer
Maxpooling
Numerical gradient
Sigmoid Cross-Entropy loss
Softmax
Softmax Cross-Entropy loss
nnpack
nnpack_interface
p_activation
p_logsumexp
p_nnp_checks
p_nnp_types
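Unlike the layer API above, these primitives are plain functions on tensors; the layer and autograd code is built on top of them. A sketch using the activation primitives, whose tensor-level names I assume to be `relu` and `sigmoid`:

```nim
import arraymancer

let x = [[-1.0'f32, 0.5],
         [ 2.0,    -0.3]].toTensor
echo x.relu      # negatives clamped to zero
echo x.sigmoid   # each element squashed into (0, 1)
```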
Other docs
align_unroller
ast_utils
compiler_optim_hints
cpuinfo_x86
datatypes
deprecate
dynamic_stack_arrays
foreach
foreach_common
foreach_staged
functional
gemm
gemm_packing
gemm_prepacked
gemm_tiling
gemm_ukernel_avx
gemm_ukernel_avx2
gemm_ukernel_avx512
gemm_ukernel_avx_fma
gemm_ukernel_dispatch
gemm_ukernel_generator
gemm_ukernel_generic
gemm_ukernel_sse
gemm_ukernel_sse2
gemm_ukernel_sse4_1
gemm_utils
global_config
initialization
math_ops_fusion
memory
nested_containers
openmp
sequninit
simd
tokenizers
Tutorial
First steps
Taking a slice of a tensor
Matrix & vectors operations
Broadcasted operations
Transposing, Reshaping, Permuting, Concatenating
Map & Reduce
Basic iterators
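The tutorial chapters above map directly onto short snippets. For instance, Map & Reduce and Basic iterators (a sketch; `arange` and the closure-taking `map` are assumed from the core tensor API):

```nim
import std/sugar
import arraymancer

let t = arange(6).reshape(2, 3)   # 0..5 laid out as a 2x3 tensor
for v in t:                       # Basic iterators: element-wise iteration
  echo v
echo t.map(x => x * 10)           # Map: apply a closure to every element
echo t.sum                        # Reduce: sum over the whole tensor
echo t.sum(axis = 0)              # Reduce along an axis, keeping a row
```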
Spellbook (How-To's)
How to convert a Tensor type?
How to create a new universal function?
How to create a multilayer perceptron?
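For the universal-function recipe, the ufunc module provides `makeUniversal`, which lifts a scalar proc into an elementwise tensor proc. A sketch, assuming `makeUniversal` exports the lifted proc under the same name:

```nim
import arraymancer

proc plusOne(x: float): float = x + 1.0
makeUniversal(plusOne)                  # generates plusOne(t: Tensor[float])

echo [1.0, 2.0, 3.0].toTensor.plusOne   # -> 2.0, 3.0, 4.0
```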
Under the hood
How does Arraymancer achieve its speed?
Why does `=` share data by default (i.e., reference semantics)?
Working with OpenCL and Cuda in Nim
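The reference-semantics question above is easiest to see in code. A sketch of the behaviour that page describes, where `=` shares the underlying buffer and `clone` (from init_copy_cpu) forces a deep copy:

```nim
import arraymancer

var a = [1, 2, 3].toTensor
var b = a            # `=` shares the underlying data (reference semantics)
let c = a.clone()    # clone always allocates an independent copy
b[0] = 999
echo a               # the write through b is visible here: a[0] is 999
echo c               # the clone is unaffected: still 1, 2, 3
```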