Tiny ML Tensorlab User Guide

Welcome to the Tiny ML Tensorlab documentation! This comprehensive guide covers Texas Instruments’ end-to-end AI toolchain for developing, training, optimizing, and deploying machine learning models on resource-constrained microcontrollers.

Note

This documentation is for Tiny ML Tensorlab version 1.3.0.



What is Tiny ML Tensorlab?

Tiny ML Tensorlab is Texas Instruments’ complete solution for bringing AI to microcontrollers. The toolchain enables you to:

  • Train machine learning models for time series and image classification tasks

  • Optimize models using quantization (2-bit, 4-bit, 8-bit) for embedded deployment; a minimal quantization sketch follows this list

  • Compile models to run efficiently on TI MCUs, with optional NPU acceleration

  • Deploy models using Code Composer Studio (CCS)
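
As a rough illustration of the quantization step, the snippet below applies stock PyTorch 8-bit dynamic quantization to a tiny time-series classifier. The TinyCNN model and its shapes are illustrative assumptions made for this guide, not a Tensorlab model, and the snippet uses plain PyTorch rather than the Tensorlab API; the toolchain itself drives training, quantization, and compilation from its configuration files:

    import torch
    import torch.nn as nn

    # Illustrative tiny 1-D CNN for time-series classification
    # (the architecture is hypothetical, not a Tensorlab model).
    class TinyCNN(nn.Module):
        def __init__(self, in_channels=3, num_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(in_channels, 8, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(16),
            )
            self.classifier = nn.Linear(8 * 16, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    model = TinyCNN().eval()

    # Stand-in for the 8-bit option above: post-training dynamic
    # quantization of the Linear layer to int8 weights.
    quantized = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    # Same input shape either way: (batch, channels, samples).
    x = torch.randn(1, 3, 128)
    print(model(x).shape, quantized(x).shape)  # torch.Size([1, 4]) for both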

Supported Task Types

  • Time Series Classification: Categorize time-series data into discrete classes (e.g., fault detection, activity recognition)

  • Time Series Regression: Predict continuous values from time-series inputs (e.g., torque estimation)

  • Time Series Forecasting: Predict future values based on historical patterns (e.g., temperature prediction)

  • Anomaly Detection: Identify abnormal patterns using autoencoder-based models (e.g., equipment monitoring); a minimal sketch follows this list

  • Image Classification: Categorize images into classes (e.g., visual inspection, digit recognition)
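
Following up on the anomaly-detection entry, the sketch below shows the general autoencoder idea in plain PyTorch: reconstruct an input window and flag it as anomalous when the reconstruction error exceeds a threshold chosen from errors on normal data. The layer sizes, the 128-sample window, and the 0.5 threshold are illustrative assumptions, not Tensorlab defaults, and the snippet does not use the Tensorlab API:

    import torch
    import torch.nn as nn

    # Illustrative autoencoder: compress a 128-sample window to 32 values
    # and reconstruct it. Sizes and threshold are hypothetical.
    autoencoder = nn.Sequential(
        nn.Linear(128, 32), nn.ReLU(),   # encoder
        nn.Linear(32, 128),              # decoder
    )

    window = torch.randn(1, 128)         # one flattened sensor window
    with torch.no_grad():
        reconstruction = autoencoder(window)
    error = torch.mean((window - reconstruction) ** 2).item()

    # A window that the (normally trained) autoencoder cannot reconstruct
    # well is treated as anomalous.
    is_anomaly = error > 0.5
    print(f"reconstruction error {error:.3f}, anomaly: {is_anomaly}")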


Documentation Structure

User Guide

Start with the Introduction to understand the toolchain architecture, then follow the Installation guide to set up your environment.

Task Types

Learn about the Supported Task Types and choose the right one for your application.

Working with Data

The Bring Your Own Data section explains dataset formats and preparation.

Target Devices

Browse Supported Devices to find specifications and capabilities for 20+ TI MCUs.

Examples & Applications

The Examples & Applications section provides ready-to-run configurations for common use cases.

Advanced Features

Explore Advanced Features like Neural Architecture Search, quantization, and analysis tools.

Deployment

The Device Deployment section covers CCS integration and running models on devices.

Edge AI Studio

Prefer a GUI? See Edge AI Studio Model Composer for our no-code web platform.

Extending the Toolchain

The Bring Your Own Model section covers adding custom models.

