TI Neural Network Compiler for MCUs User’s Guide - 1.3.0¶
Developers interested in adding machine learning to their applications can use the TI Edge AI Studio Model Composer tools (GUI-based or command-line) to train and compile neural networks for TI microcontrollers (MCUs). Under the hood, both tools use the standard PyTorch-based training framework (with a TI NPU-specific quantization configuration) and the TI Neural Network Compiler for MCUs.
This user’s guide documents the TI Neural Network Compiler for MCUs and how to use it.
Texas Instruments’ Neural Network Compiler for MCUs enables trained neural networks to be compiled for TI MCUs. An overview of the development flow is shown in the following figure.

Fig. 1 Machine Learning Flow for TI MCUs (with or without NPUs)¶
In the PyTorch training framework, neural networks are trained with optimizations (for example, aggressive quantization) that target TI MCUs. After training, the neural networks are compiled by the TI Neural Network Compiler. Options passed to the compiler determine which of the following actions the generated inference library performs:
Hardware-accelerated inference using a neural network processing unit (NPU).
Software-only inference using the CPU on the MCU.
The output of this compiler is an inference library (.h and .a files). These files are, in turn, built by the MCU compiler together with the rest of the application code, managed as a Code Composer Studio (CCS) project.
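The exact API of the generated library depends on the compiled model and is not described here, so the sketch below is only illustrative: the header name model.h, the macros MODEL_INPUT_SIZE and MODEL_OUTPUT_SIZE, and the functions model_init() and model_run() are hypothetical placeholders, not the actual names emitted by the compiler. It shows the general shape of calling an inference library from application code in a CCS project.

```c
/* Illustrative only: "model.h", MODEL_INPUT_SIZE, MODEL_OUTPUT_SIZE,
 * model_init(), and model_run() are hypothetical placeholder names,
 * not the actual API generated by the TI Neural Network Compiler. */
#include <stdint.h>
#include "model.h"                        /* hypothetical generated header */

static int8_t input[MODEL_INPUT_SIZE];    /* quantized input buffer  */
static int8_t output[MODEL_OUTPUT_SIZE];  /* quantized output buffer */

int main(void)
{
    model_init();                         /* one-time setup of the inference backend */

    /* ... fill input[] from a sensor or a test vector ... */

    model_run(input, output);             /* run one inference */

    /* ... act on output[], e.g. pick the class with the highest score ... */
    return 0;
}
```

In this flow, the choice between NPU-accelerated and CPU-only execution is made when the library is generated, so the application-side calls are expected to look the same either way.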