TI Neural Network Compiler for MCUs User’s Guide - 1.2.x¶
Machine learning developers use the TI Edge AI Studio Model Composer GUI-based tool or the TI Tiny ML ModelMaker command-line tool to train and compile neural networks for TI Microcontrollers (MCUs). Under the hood, both of these tools use the TI PyTorch-based training framework, the C2000 Model Zoo, and the TI Neural Network Compiler for MCUs.
This user’s guide documents the TI Neural Network Compiler for MCUs and how to use it.
Texas Instruments' Neural Network Compiler for MCUs compiles machine learning networks into inference libraries for TI MCUs. The compiler is based on Apache Tensor Virtual Machine (Apache TVM). An example development flow is shown in the following figure.

Fig. 1 Machine Learning Flow for F28P55x (C28x + NPU)¶
In the TI training framework, neural networks are trained with optimizations (for example, aggressive quantization) that target TI MCUs. After training, the neural networks are compiled by the TI Neural Network Compiler. Options passed to the compiler determine whether the generated inference library will perform:
- Hardware-accelerated inference using the Neural Network Processing Unit (NPU) on the F28P55x.
- Software-only inference on a host MCU such as the F28P65x.
The output from this compiler is an inference library: a header file (.h) and a static library (.a). These files are then compiled and linked by the MCU compiler along with the rest of the application code, typically managed as a Code Composer Studio (CCS) project.