MMALIB User Guide
Modules
Here is a list of all modules:
Common definitions - This module consists of definitions (macros, structures, and utility functions) that are commonly applicable to all MMALIB kernels
Convolutional Neural Networks (CNN) kernels - This module consists of kernels that implement the core computations occurring in convolutional neural networks
  MMALIB_CNN_convolveBias_row_ixX_ixX_oxX - Kernel for computing dense CNN convolution with row-based processing
  MMALIB_CNN_convolve_col_smallNo_highPrecision - NOTE: This API is now a wrapper around MMALIB_CNN_convolve_col_smallNo_highPrecision_pointwisePost with the lutValues input argument set to NULL. It is recommended to call MMALIB_CNN_convolve_col_smallNo_highPrecision_pointwisePost directly
    MMALIB_CNN_convolve_col_smallNo_highPrecision_reorderWeights - MMALIB_CNN_convolve_col_smallNo_highPrecision requires the weights to be preprocessed into a specific arrangement. The functions in this module perform that preprocessing and other associated tasks
  MMALIB_CNN_convolve_col_smallNo_highPrecision_pointwisePost - Kernel for computing CNN-style 2D convolution using column-major data ordering on the input and output feature maps. This approach is faster when the filter grouping is chosen such that Ni = No = 1, or such that Ni*Fr*Fc < MMA_SIZE; otherwise, use the regular convolution kernel MMALIB_CNN_convolve_row_ixX_ixX_oxX. This kernel is also referred to as depth-wise convolution
    MMALIB_CNN_convolve_col_smallNo_highPrecision_pointwisePost_reorderWeights - MMALIB_CNN_convolve_col_smallNo_highPrecision_pointwisePost requires the weights to be preprocessed into a specific arrangement. The functions in this module perform that preprocessing and other associated tasks
  MMALIB_CNN_convolve_col_smallNo_ixX_ixX_oxX - Kernel for computing CNN-style 2D convolution using column-major data ordering on the input and output feature maps. This approach is faster when the filter grouping is chosen such that Ni = No = 1, or such that Ni*Fr*Fc < MMA_SIZE; otherwise, use the regular convolution kernel MMALIB_CNN_convolve_row_ixX_ixX_oxX. This kernel is also referred to as depth-wise convolution
    MMALIB_CNN_convolve_col_smallNo_ixX_ixX_oxX_reorderWeights - MMALIB_CNN_convolve_col_smallNo_ixX_ixX_oxX requires the weights to be preprocessed into a specific arrangement. The functions in this module perform that preprocessing and other associated tasks
  MMALIB_CNN_deconvolve_row_ixX_ixX_oxX - Kernel for computing dense CNN deconvolution with row-based processing and matrix-matrix multiplication
  MMALIB_CNN_fullyConnectedBias_ixX_ixX_oxX - Kernel providing the compute functionality of a fully connected layer: \( Y^T = X^T \times H^T + B^T \) (a plain-C sketch of this computation follows the CNN kernel list below)
  MMALIB_CNN_tensor_convert_ixX_oxX - Kernel for converting tensors between various datatypes and formats
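
MMALIB_CNN_fullyConnectedBias_ixX_ixX_oxX computes \( Y^T = X^T \times H^T + B^T \). The plain-C sketch below is a minimal reference of that computation, assuming row-major storage; the datatypes, the dimension names (numBatches, numInputs, numOutputs), and the omission of any output scale/shift are illustrative assumptions and do not describe the MMALIB function signature.

    /* Reference sketch of Y^T = X^T * H^T + B^T.  Row-major buffers; int8
     * inputs with int32 accumulation are assumptions, not the MMALIB API. */
    #include <stdint.h>

    void fullyConnectedBias_ref(const int8_t  *Xt,  /* X^T: numBatches x numInputs  */
                                const int8_t  *Ht,  /* H^T: numInputs  x numOutputs */
                                const int32_t *Bt,  /* B^T: numOutputs bias terms   */
                                int32_t       *Yt,  /* Y^T: numBatches x numOutputs */
                                int numBatches, int numInputs, int numOutputs)
    {
       for (int b = 0; b < numBatches; b++) {
          for (int o = 0; o < numOutputs; o++) {
             int32_t acc = Bt[o];                      /* bias term from B^T   */
             for (int i = 0; i < numInputs; i++) {
                acc += (int32_t)Xt[b*numInputs + i] *  /* row of X^T           */
                       (int32_t)Ht[i*numOutputs + o];  /* column of H^T        */
             }
             Yt[b*numOutputs + o] = acc;               /* element of Y^T       */
          }
       }
    }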
Digital Signal Processing (DSP) kernels - This module consists of kernels that implement DSP algorithms
  MMALIB_DSP_firSmall_ixX_ixX_oxX - Kernel for convolving input data with an input filter (filter size <= MMA_SIZE/2). For an input filter size > MMA_SIZE/2, refer to MMALIB_DSP_fir_ixX_ixX_oxX (a reference sketch of the FIR computation follows the DSP kernel list below)
  MMALIB_DSP_fir_ixX_ixX_oxX - Kernel for convolving input data with an input filter (filter size > MMA_SIZE/2). For an input filter size <= MMA_SIZE/2, refer to MMALIB_DSP_firSmall_ixX_ixX_oxX
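
Both FIR kernels convolve input data with an input filter; they differ only in the filter lengths (relative to MMA_SIZE/2) they handle. The sketch below is a minimal plain-C reference of that computation over the valid output region, assuming int16 data with int32 accumulation and no output scaling or saturation; it is not the MMALIB interface.

    /* Reference sketch of 1-D FIR convolution over the valid output region. */
    #include <stdint.h>

    void fir_ref(const int16_t *x,      /* input data, numSamples values             */
                 const int16_t *h,      /* filter coefficients, filterLen values     */
                 int32_t       *y,      /* output, numSamples - filterLen + 1 values */
                 int numSamples, int filterLen)
    {
       for (int n = 0; n <= numSamples - filterLen; n++) {
          int32_t acc = 0;
          for (int k = 0; k < filterLen; k++) {
             /* convolution: the filter is time-reversed over the input window */
             acc += (int32_t)h[k] * (int32_t)x[n + filterLen - 1 - k];
          }
          y[n] = acc;
       }
    }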
Linear Algebra (LINALG) kernels - This module consists of kernels that implement linear algebra operations
  MMALIB_LINALG_matrixMatrixMultiplyAccumulate_ixX_ixX_ixX_oxX - Kernel for multiplying two matrices with an additive term (see the reference sketch after this list)
  MMALIB_LINALG_matrixMatrixMultiplyBias_ixX_ixX_oxX - Kernel for multiplying two matrices with bias, scale, and shift
  MMALIB_LINALG_matrixMatrixMultiply_ixX_ixX_oxX - Kernel for multiplying two matrices
  MMALIB_LINALG_matrixTranspose_ixX_oxX - Kernel for computing the transpose of a matrix
  MMALIB_LINALG_pointwiseMatrixMatrixMultiply_ixX_ixX_oxX - Kernel for computing the pointwise (element-wise) multiplication of two matrices
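
MMALIB_LINALG_matrixMatrixMultiply_ixX_ixX_oxX and MMALIB_LINALG_matrixMatrixMultiplyAccumulate_ixX_ixX_ixX_oxX share the same core product; the sketch below is a minimal plain-C reference, assuming row-major storage and int8 inputs with int32 accumulation. The accumulate flag here models the additive term of the Accumulate variant, and the bias/scale/shift handling of MMALIB_LINALG_matrixMatrixMultiplyBias_ixX_ixX_oxX is omitted. None of this reflects the actual MMALIB function signatures.

    /* Reference sketch of C = A * B (and C += A * B for the Accumulate case). */
    #include <stdint.h>

    void matrixMatrixMultiply_ref(const int8_t *A,   /* M x K */
                                  const int8_t *B,   /* K x N */
                                  int32_t      *C,   /* M x N */
                                  int M, int K, int N,
                                  int accumulate)    /* nonzero: C += A*B, else C = A*B */
    {
       for (int m = 0; m < M; m++) {
          for (int n = 0; n < N; n++) {
             int32_t acc = accumulate ? C[m*N + n] : 0;   /* additive term    */
             for (int k = 0; k < K; k++) {
                acc += (int32_t)A[m*K + k] * (int32_t)B[k*N + n];
             }
             C[m*N + n] = acc;
          }
       }
    }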