TI Tiny ML Tensorlab
v1.3.0
Contents
1. Introduction
1.1. What is Tiny ML Tensorlab?
1.1.1. Overview
1.1.2. Target Applications
1.1.3. Key Capabilities
1.1.4. Repository Structure
1.1.5. Workflow Summary
1.1.6. Next Steps
1.2. System Architecture
1.2.1. High-Level Architecture
1.2.2. Component Details
1.2.2.1. tinyml-modelzoo
1.2.2.2. tinyml-modelmaker
1.2.2.3. tinyml-tinyverse
1.2.2.4. tinyml-modeloptimization
1.2.3. Which Repository Do I Need?
1.2.4. Data Flow
1.2.5. Configuration System
1.2.6. Integration Points
1.3. Terminology
1.3.1. General ML Terms
1.3.2. Tiny ML Tensorlab Terms
1.3.3. Device & Hardware Terms
1.3.4. Configuration Terms
1.3.5. Data Terms
1.3.6. Model Size Conventions
1.3.7. Abbreviations
1.4. Overview
2. Installation
2.1. Prerequisites
2.1.1. System Requirements
2.1.2. Software Requirements
2.1.3. For Compilation (Optional)
2.1.4. For Device Deployment (Optional)
2.1.5. CUDA (Optional)
2.1.6. Verification Checklist
2.2. User Installation
2.2.1. Quick Install
2.2.2. Running Your First Example
2.2.3. Verifying Installation
2.2.4. Updating
2.2.5. Uninstalling
2.2.6. Limitations of User Install
2.2.7. Troubleshooting
2.2.8. Next Steps
2.3. Developer Installation
2.3.1. Overview
2.3.2. Step 1: Clone the Repository
2.3.3. Step 2: Set Up Python Environment
2.3.4. Step 3: Verify Installation
2.3.5. Directory Structure After Installation
2.3.6. Updating
2.3.7. Common Developer Tasks
2.3.8. Troubleshooting
2.3.9. Next Steps
2.4. Windows Setup
2.4.1. Option 1: Native Windows Installation
2.4.2. Option 2: WSL2 (Recommended for Full Compatibility)
2.4.2.1. Network and Proxy Configuration in WSL (Optional)
2.4.2.2. Docker in WSL (Optional)
2.4.2.3. USB Device Access from WSL
2.4.2.4. VS Code with WSL
2.4.3. Path Configuration
2.4.4. Common Windows Issues
2.4.5. GPU Support on Windows
2.4.6. WSL2 vs Native Windows Comparison
2.4.7. Next Steps
2.5. Linux Setup
2.5.1. System Preparation
2.5.2. Installing Python 3.10
2.5.3. Installation
2.5.4. Verification
2.5.5. GPU Setup (Optional)
2.5.6. Shell Configuration
2.5.7. Permission Issues
2.5.8. Multiple Python Versions
2.5.9. System Service (Optional)
2.5.10. Troubleshooting
2.5.11. Next Steps
2.6. Environment Variables
2.6.1. Required Variables by Device Family
2.6.1.1. C2000 Devices (F28P55, F28P65, F2837, etc.)
2.6.1.2. MSPM0 Devices (MSPM0G3507, MSPM0G5187, etc.)
2.6.1.3. AM26x Devices (AM263, AM261, etc.)
2.6.2. Setting Environment Variables
2.6.3. Installing TI Tools
2.6.4. Verifying Configuration
2.6.5. Troubleshooting
2.6.6. Next Steps
2.7. Quick Start
3. Getting Started
3.1. Quickstart
3.1.1. Prerequisites
3.1.2. Step 1: Navigate to ModelZoo
3.1.3. Step 2: Run Hello World Example
3.1.4. Step 3: View Results
3.1.5. Step 4: Understand the Config
3.1.6. Step 5: Try a Different Example
3.1.7. Expected Results
3.1.8. Next Steps
3.2. First Example
3.2.1. Overview
3.2.2. Step 1: Examine the Configuration
3.2.3. Step 2: Run Training
3.2.4. Step 3: Understand Training Output
3.2.5. Step 4: Examine Output Files
3.2.6. Step 5: Analyze Results
3.2.7. Step 6: Customize the Example
3.2.8. Next Steps
3.3. Understanding Config
3.3.1. Configuration Overview
3.3.2. Common Section
3.3.3. Dataset Section
3.3.4. Data Processing & Feature Extraction Section
3.3.5. Training Section
3.3.6. Testing Section
3.3.7. Compilation Section
3.3.8. Complete Example
3.3.9. Tips
3.3.10. See Also
3.4. Running Examples
3.4.1. Finding Examples
3.4.2. Running an Example
3.4.3. Example Directory Structure
3.4.4. Understanding Example Configs
3.4.5. Customizing Examples
3.4.6. Output Location
3.4.7. Example Categories
3.4.8. Running Multiple Examples
3.4.9. Troubleshooting Examples
3.4.10. Next Steps
3.5. 5-Minute Quickstart
4. Supported Task Types
4.1. Time Series Classification
4.1.1. Overview
4.1.2. Configuration
4.1.3. Dataset Format
4.1.4. Available Models
4.1.5. Feature Extraction
4.1.6. Metrics
4.1.7. Example: Arc Fault Detection
4.1.8. Tips
4.1.9. See Also
4.2. Time Series Regression
4.2.1. Overview
4.2.2. Configuration
4.2.3. Dataset Format
4.2.4. Available Models
4.2.5. Key Configuration
4.2.6. Metrics
4.2.7. Example: Torque Measurement
4.2.8. Tips
4.2.9. See Also
4.3. Time Series Forecasting
4.3.1. Overview
4.3.2. Configuration
4.3.3. Key Parameters
4.3.4. Dataset Format
4.3.5. Available Models
4.3.6. Windowing Example
4.3.7. Metrics
4.3.8. Example: PMSM Temperature Forecasting
4.3.9. Important Notes
4.3.10. Tips
4.3.11. See Also
4.4. Anomaly Detection
4.4.1. Overview
4.4.2. How It Works
4.4.3. Autoencoder Architecture
4.4.3.1. What is an Autoencoder?
4.4.3.2. Architecture Diagram
4.4.3.3. Dimensionality Flow
4.4.3.4. Key Components
4.4.3.5. Training with MSE Loss
4.4.3.6. Semi-Supervised Learning: Normal Data Only
4.4.3.7. Advantages and Limitations
4.4.4. Configuration
4.4.5. Dataset Format for Anomaly Detection
4.4.5.1. Folder Structure
4.4.5.2. Concrete Example
4.4.5.3. Data Splitting Strategy
4.4.5.4. Datafile Format (CSV)
4.4.6. Dataset Format
4.4.7. Available Models
4.4.8. Training Workflow
4.4.8.1. Overview
4.4.8.2. Pipeline Steps in Detail
4.4.8.3. Running the Pipeline
4.4.8.4. Output Files
4.4.8.5. Understanding Training Logs
4.4.9. Threshold Selection
4.4.9.1. Formula
4.4.9.2. k-Value Impact
4.4.9.3. threshold_performance.csv
4.4.9.4. How to Choose k Based on Application
4.4.10. Evaluation Metrics
4.4.10.1. Confusion Matrix
4.4.10.2. Metric Definitions
4.4.10.3. Metric Summary
4.4.11. What If You Don’t Have Anomaly Data?
4.4.11.1. What You CAN Do
4.4.11.2. What You CANNOT Do
4.4.12. Semi-Supervised vs Supervised
4.4.13. Example: Motor Bearing Anomaly
4.4.14. Tips
4.4.15. See Also
4.5. Image Classification
4.5.1. Overview
4.5.2. Configuration
4.5.3. Dataset Format
4.5.4. Available Models
4.5.5. Current Limitations
4.5.6. Example: MNIST Digit Recognition
4.5.7. Preparing Image Data
4.5.8. Tips
4.5.9. See Also
4.6. Task Overview
4.7. Choosing the Right Task
5. Bring Your Own Data
5.1. Classification Dataset Format
5.1.1. Directory Structure
5.1.2. Data File Format
5.1.3. Supported File Types
5.1.4. Annotations (Optional)
5.1.5. Configuration
5.1.6. Dataset Splitting Modes
5.1.7. Example: 3-Class Vibration Data
5.1.8. Class Balancing
5.1.9. Common Issues
5.1.10. Key Differences: Classification vs Regression vs Forecasting
5.2. Regression Dataset Format
5.2.1. Directory Structure
5.2.2. Data File Format
5.2.3. Time Column Handling
5.2.4. Annotation Files (Required)
5.2.5. Configuration
5.2.6. Target Processing
5.2.7. Complete Example
5.2.8. Troubleshooting
5.2.9. Best Practices
5.2.10. Common Issues
5.3. Forecasting Dataset Format
5.3.1. Directory Structure
5.3.2. Data File Format
5.3.3. Key Difference from Regression
5.3.4. Configuration
5.3.5. Variable Specification Options
5.3.6. Windowing Behavior
5.3.7. Complete Example
5.3.8. Important Notes
5.3.9. Minimum Data Requirements
5.3.10. Common Issues
5.4. Anomaly Detection Dataset Format
5.4.1. Folder Structure
5.4.2. Concrete Example
5.4.3. Data Splitting Strategy
5.4.4. What If You Don’t Have Anomaly Data?
5.4.5. Datafile Format (CSV)
5.4.6. See Also
5.5. Data Splitting
5.5.1. Split Methods
5.5.2. Configuration
5.5.3. Annotation File Format
5.5.4. Split Examples
5.5.5. When to Use Each Method
5.5.6. Best Practices
5.5.7. Creating Annotation Files
5.5.8. Verifying Splits
5.6. Dataset Format Overview
5.7. Supported File Formats
5.8. Data Sources
6. Supported Devices
6.1. Device Overview
6.1.1. Supported Device Families
6.1.2. Complete Device List
6.1.3. Target Device Configuration
6.1.4. NPU vs Non-NPU Devices
6.1.5. Choosing a Device
6.2. NPU Guidelines
6.2.1. NPU-Enabled Devices
6.2.2. Layer Constraints
6.2.3. Using NPU-Compatible Models
6.2.4. Channel Multiples of 4
6.2.5. Kernel Size Restrictions
6.2.6. Compilation Preset
6.2.7. Custom NPU-Compatible Models
6.2.8. Troubleshooting NPU Compilation
6.2.9. Performance Comparison
6.3. C2000 Family
6.3.1. Overview
6.3.2. Supported Devices
6.3.3. F28P55 (Recommended)
6.3.4. F28P65
6.3.5. F2837
6.3.6. C29x Family (F29H85, F29P58, F29P32)
6.3.7. Typical Applications
6.3.8. Development Tools
6.3.9. Memory Considerations
6.3.10. Next Steps
6.4. MSPM0 Family
6.4.1. Overview
6.4.2. Supported Devices
6.4.3. MSPM0G5187 (NPU-Enabled)
6.4.4. MSPM0G3507
6.4.5. MSPM0G3519
6.4.6. AM13 Family
6.4.7. Power Considerations
6.4.8. Memory Constraints
6.4.9. Typical Applications
6.4.10. Development Tools
6.4.11. Getting Started
6.4.12. Next Steps
6.5. Connectivity Devices
6.5.1. Overview
6.5.2. Supported Devices
6.5.3. CC2755
6.5.4. CC1352
6.5.5. CC1354
6.5.6. CC35X1
6.5.7. AM26x Family
6.5.8. Typical Applications
6.5.9. Power Optimization
6.5.10. Memory Constraints
6.5.11. Development Tools
6.5.12. Wireless Protocol Considerations
6.5.13. Getting Started
6.5.14. Next Steps
6.6. Device Families at a Glance
6.7. Complete Device List
6.8. NPU vs Non-NPU Devices
7. Examples & Applications
7.1. Running an Example
7.2. Generic Examples
7.3. Classification Examples
7.4. Regression Examples
7.5. Forecasting Examples
7.6. Anomaly Detection Examples
7.7. Image Classification Examples
7.7.1. Generic Time Series Classification
7.7.1.1. Overview
7.7.1.2. Running the Example
7.7.1.3. Understanding the Dataset
7.7.1.4. Dataset Format
7.7.1.5. Configuration
7.7.1.6. Feature Extraction
7.7.1.7. Evaluation Metrics
7.7.1.8. Expected Results
7.7.1.9. Output Location
7.7.1.10. Variations to Try
7.7.1.11. Next Steps
7.7.2. Generic Time Series Regression
7.7.2.1. Overview
7.7.2.2. Running the Example
7.7.2.3. Understanding the Dataset
7.7.2.4. Dataset Format
7.7.2.5. Configuration
7.7.2.6. Evaluation Metrics
7.7.2.7. Expected Results
7.7.2.8. Output Location
7.7.2.9. Next Steps
7.7.3. Generic Time Series Forecasting
7.7.3.1. Overview
7.7.3.2. Running the Example
7.7.3.3. Understanding the Dataset
7.7.3.4. Dataset Format
7.7.3.5. Configuration
7.7.3.6. Target Variables
7.7.3.7. Evaluation Metrics
7.7.3.8. Expected Results
7.7.3.9. Output Location
7.7.3.10. Next Steps
7.7.4. Generic Time Series Anomaly Detection
7.7.4.1. Overview
7.7.4.2. Running the Example
7.7.4.3. Understanding the Dataset
7.7.4.4. Dataset Format
7.7.4.5. Configuration
7.7.4.6. How Autoencoder Detection Works
7.7.4.7. Why Frame Size Matters
7.7.4.8. Expected Results
7.7.4.9. Output Location
7.7.4.10. Next Steps
7.7.5. Arc Fault Detection
7.7.5.1. Overview
7.7.5.2. Why Arc Fault Detection?
7.7.5.3. Running the Example
7.7.5.4. Configuration
7.7.5.5. Dataset Description
7.7.5.6. Feature Extraction
7.7.5.7. Available Models
7.7.5.8. Expected Results
7.7.5.9. Interpreting Results
7.7.5.10. Deployment Considerations
7.7.5.11. AC Arc Fault Detection
7.7.5.12. Next Steps
7.7.6. AC Arc Fault
7.7.6.1. Overview
7.7.6.2. Configuration
7.7.6.3. Running the Example
7.7.6.4. Dataset Details
7.7.6.5. Recommended Models
7.7.6.6. See Also
7.7.7. Motor Bearing Fault
7.7.7.1. Overview
7.7.7.2. Fault Classes
7.7.7.3. Running the Example
7.7.7.4. Configuration
7.7.7.5. Dataset Description
7.7.7.6. Feature Extraction Presets
7.7.7.7. Available Models
7.7.7.8. Expected Results
7.7.7.9. Multi-Class Evaluation
7.7.7.10. Dataset Quality Analysis
7.7.7.11. Practical Considerations
7.7.7.12. Anomaly Detection Alternative
7.7.7.13. Next Steps
7.7.8. Blower Imbalance
7.7.8.1. Overview
7.7.8.2. Configuration
7.7.8.3. Running the Example
7.7.8.4. Dataset Details
7.7.8.5. Recommended Models
7.7.8.6. See Also
7.7.9. Fan Blade Fault Classification
7.7.9.1. Overview
7.7.9.2. Demo Setup
7.7.9.3. Fault Types
7.7.9.4. Configuration
7.7.9.5. Running the Example
7.7.9.6. Dataset Details
7.7.9.7. Results and Analysis
7.7.9.8. Anomaly Detection Variant
7.7.9.9. See Also
7.7.10. Electrical Fault
7.7.10.1. Overview
7.7.10.2. Configuration
7.7.10.3. Running the Example
7.7.10.4. Dataset Details
7.7.10.5. See Also
7.7.11. Grid Stability
7.7.11.1. Overview
7.7.11.2. Configuration
7.7.11.3. Running the Example
7.7.11.4. Dataset Details
7.7.11.5. See Also
7.7.12. Gas Sensor
7.7.12.1. Overview
7.7.12.2. Configuration
7.7.12.3. Running the Example
7.7.12.4. Dataset Details
7.7.12.5. Quantization Analysis
7.7.12.6. See Also
7.7.13. Human Activity Recognition
7.7.13.1. Overview
7.7.13.2. Configuration
7.7.13.3. Running the Example
7.7.13.4. Dataset Details
7.7.13.5. See Also
7.7.14. ECG Classification
7.7.14.1. Overview
7.7.14.2. Key Performance Targets
7.7.14.3. System Components
7.7.14.4. Dataset Details
7.7.14.5. Feature Extraction
7.7.14.6. Model
7.7.14.7. Training Configuration
7.7.14.8. Configuration (MSPM0)
7.7.14.9. Running the Example
7.7.14.10. Available Configurations
7.7.14.11. Anomaly Detection Mode
7.7.14.12. References
7.7.14.13. See Also
7.7.15. NILM Appliance Usage Classification
7.7.15.1. Overview
7.7.15.2. Configuration
7.7.15.3. Running the Example
7.7.15.4. Dataset Details
7.7.15.5. On-Device Results
7.7.15.6. PLAID Dataset Variant
7.7.15.7. See Also
7.7.16. PIR Detection
7.7.16.1. Overview
7.7.16.2. Device Support
7.7.16.3. System Components
7.7.16.4. Running the Example
7.7.16.5. Dataset Description
7.7.16.6. Feature Extraction Pipeline
7.7.16.7. Model
7.7.16.8. Expected Results
7.7.16.9. Training Configuration
7.7.16.10. References
7.7.17. Grid Fault Detection
7.7.17.1. Overview
7.7.17.2. Configuration
7.7.17.3. Running the Example
7.7.17.4. Dataset Details
7.7.17.5. See Also
7.7.18. Torque Measurement Regression
7.7.18.1. Overview
7.7.18.2. Configuration
7.7.18.3. Running the Example
7.7.18.4. Dataset Details
7.7.18.5. Recommended Models
7.7.18.6. See Also
7.7.19. Induction Motor Speed Prediction
7.7.19.1. Overview
7.7.19.2. Configuration
7.7.19.3. Running the Example
7.7.19.4. Dataset Details
7.7.19.5. See Also
7.7.20. Washing Machine Regression
7.7.20.1. Overview
7.7.20.2. Configuration
7.7.20.3. Running the Example
7.7.20.4. Dataset Details
7.7.20.5. Results
7.7.20.6. See Also
7.7.21. MOSFET Junction Temperature Prediction
7.7.21.1. Overview
7.7.21.2. Configuration
7.7.21.3. Running the Example
7.7.21.4. Dataset Details
7.7.21.5. On-Device Deployment
7.7.21.6. See Also
7.7.22. PMSM Rotor Forecasting
7.7.22.1. Overview
7.7.22.2. Configuration
7.7.22.3. Running the Example
7.7.22.4. Dataset Details
7.7.22.5. Recommended Models
7.7.22.6. Results
7.7.22.7. See Also
7.7.23. HVAC Indoor Temp Forecast
7.7.23.1. Overview
7.7.23.2. Configuration
7.7.23.3. Running the Example
7.7.23.4. Dataset Details
7.7.23.5. Results
7.7.23.6. See Also
7.7.24. Arc Fault Anomaly Detection Example
7.7.24.1. Overview
7.7.24.2. Running the Example
7.7.24.3. Configuration
7.7.24.4. Dataset Format
7.7.24.5. Available Models
7.7.24.6. Expected Results
7.7.24.7. Threshold Selection
7.7.24.8. Interpreting Outputs
7.7.24.9. Advanced Configuration
7.7.24.10. Practical Applications
7.7.24.11. Comparison with Classification
7.7.24.12. Troubleshooting
7.7.24.13. Next Steps
7.7.25. Forecasting Example
7.7.25.1. Overview
7.7.25.2. When to Use Forecasting
7.7.25.3. Running the Example
7.7.25.4. Configuration
7.7.25.5. How Forecasting Works
7.7.25.6. Dataset Format
7.7.25.7. Available Models
7.7.25.8. Expected Results
7.7.25.9. Key Metrics
7.7.25.10. Forecast Horizon Trade-offs
7.7.25.11. Multi-Variable Forecasting
7.7.25.12. Feature Extraction Options
7.7.25.13. Practical Applications
7.7.25.14. Deployment Considerations
7.7.25.15. Troubleshooting
7.7.25.16. Comparison with Regression
7.7.25.17. Next Steps
7.7.26. Image Classification Example
7.7.26.1. Overview
7.7.26.2. When to Use Image Classification
7.7.26.3. Running the Example
7.7.26.4. Configuration
7.7.26.5. Dataset Format
7.7.26.6. Image Size Considerations
7.7.26.7. Available Models
7.7.26.8. Expected Results
7.7.26.9. Grayscale vs RGB
7.7.26.10. Data Augmentation
7.7.26.11. Practical Applications
7.7.26.12. Memory Constraints
7.7.26.13. Inference Performance
7.7.26.14. Transfer Learning
7.7.26.15. Camera Integration
7.7.26.16. Troubleshooting
7.7.26.17. Limitations
7.7.26.18. Next Steps
7.7.27. MNIST Image Classification
7.7.27.1. Overview
7.7.27.2. Key Targets
7.7.27.3. System Components
7.7.27.4. Running the Example
7.7.27.5. Dataset
7.7.27.6. Feature Extraction
7.7.27.7. Model Architecture
7.7.27.8. Training Configuration
7.7.27.9. Expected Results
7.7.27.10. Supported Devices
7.7.27.11. References
8. Advanced Features
8.1. Neural Architecture Search
8.1.1. Overview
8.1.2. When to Use NAS
8.1.3. Code Flow
8.1.4. Configuration
8.1.5. Model Size Presets
8.1.6. Usage
8.1.7. Running NAS
8.1.8. Example: Full NAS Configuration
8.1.9. Tips
8.1.10. Best Practices
8.1.11. Search Algorithm
8.1.12. Search Space
8.1.13. NAS Framework Internals
8.1.14. References
8.1.15. Next Steps
8.2. Quantization
8.2.1. Overview
8.2.2. Configuration Parameters
8.2.3. Quantization Modes
8.2.4. Quantization Methods
8.2.5. Bit Widths
8.2.6. NPU Quantization Requirements
8.2.7. Output Files
8.2.8. Accuracy Comparison
8.2.9. Troubleshooting Accuracy Loss
8.2.10. Best Practices
8.2.11. Example: Full Quantization Workflow
8.2.12. Memory Savings
8.2.13. Performance Impact
8.2.14. Quantization Wrapper Architecture
8.2.15. NPU Hardware Constraints
8.2.16. Using Quantization Wrappers Directly
8.2.17. Wrapper API Reference
8.2.18. Model Surgery
8.2.19. Next Steps
8.3. Standalone Quantization Examples
8.3.1. Overview
8.3.2. FMNIST Image Classification
8.3.3. Audio Keyword Spotting
8.3.4. Motor Fault Time Series Classification
8.3.5. MNIST Digit Classification
8.3.6. Torque Time Series Regression
8.3.7. Quantization Guidance
8.3.8. Next Steps
8.4. Feature Extraction
8.4.1. Overview
8.4.2. Feature Extraction Pipeline
8.4.3. Configuration Parameters
8.4.4. Preset System
8.4.5. Available Presets
8.4.6. Data Processing Transforms
8.4.7. Feature Extraction Transforms
8.4.8. Custom Feature Extraction
8.4.9. Multi-Channel Data
8.4.10. Forecasting Configuration
8.4.11. Data Augmentation
8.4.12. Choosing the Right Preset
8.4.13. Performance Impact
8.4.14. On-Device Feature Extraction
8.4.15. Example Configurations
8.4.16. Stacking Modes
8.4.17. Gain Variation Augmentation
8.4.18. Q15 Fixed-Point Transforms
8.4.19. Frame Offset (Overlap Control)
8.4.20. Analysis Bandwidth
8.4.21. Feature Extraction Only Mode
8.4.22. Evaluating Feature Extraction Quality
8.4.23. Best Practices
8.4.24. Next Steps
8.5. Goodness of Fit
8.5.1. Overview
8.5.2. Enabling GoF Test
8.5.3. Running the Test
8.5.4. Output Files
8.5.5. Understanding the Visualizations
8.5.6. Interpreting Results
8.5.7. 8-Plot Analysis
8.5.8. Common Patterns
8.5.9. Actionable Insights
8.5.10. Frame Size Sweeping
8.5.11. Multi-Cluster Analysis
8.5.12. Example: Motor Fault GoF Analysis
8.5.13. GoF Without Training
8.5.14. Comparing Feature Extraction
8.5.15. Best Practices
8.5.16. Limitations
8.5.17. Next Steps
8.6. Post-Training Analysis
8.6.1. Overview
8.6.2. Enabling Analysis
8.6.3. Output Files
8.6.4. Confusion Matrix
8.6.5. ROC Curves
8.6.6. Class Score Histograms
8.6.7. FPR/TPR Thresholds
8.6.8. Classification Report
8.6.9. Error Analysis
8.6.10. Quantized vs Float Comparison
8.6.11. File-Level Classification Summary
8.6.12. Regression Analysis
8.6.13. Anomaly Detection Analysis
8.6.14. Custom Analysis Scripts
8.6.15. Generating Reports
8.6.16. Example: Complete Analysis Configuration
8.6.17. Best Practices
8.6.18. Troubleshooting Low Accuracy
8.6.19. Next Steps
8.7. Feature Overview
9. Device Deployment
9.1. CCS Integration Guide
9.1.1. Prerequisites
9.1.2. Compilation Output
9.1.3. Creating a CCS Project
9.1.4. Integration Code
9.1.5. Memory Placement
9.1.6. Linker Command File
9.1.7. Interrupt-Based Inference
9.1.8. Timing and Profiling
9.1.9. Debugging
9.1.10. Build Configurations
9.1.11. Example Project Structure
9.1.12. Common Issues
9.1.13. Testing on Hardware
9.1.14. Next Steps
9.2. NPU Device Deployment
9.2.1. NPU-Enabled Devices
9.2.2. NPU Compilation
9.2.3. NPU Model Requirements
9.2.4. NPU Compilation Artifacts
9.2.5. NPU Initialization
9.2.6. NPU Inference Code
9.2.7. NPU Memory Management
9.2.8. NPU Performance
9.2.9. NPU Power Considerations
9.2.10. NPU Debugging
9.2.11. NPU Error Handling
9.2.12. CCS Project Setup for NPU
9.2.13. Example: Arc Fault on F28P55 NPU
9.2.14. Troubleshooting NPU Issues
9.2.15. CCS Studio Walkthrough: F28P55x
9.2.15.1. Step 1 – Load the Example from Resource Explorer
9.2.15.2. Step 2 – Build the Project
9.2.15.3. Step 3 – Set Target Configuration
9.2.15.4. Step 4 – Flash the Device
9.2.15.5. Step 5 – Debug and Verify
9.2.16. Required Files from ModelMaker
9.2.17. Model Performance Profiling
9.2.18. Next Steps
9.3. Non-NPU Deployment
9.3.1. Non-NPU Devices
9.3.2. Configuration
9.3.3. Model Selection
9.3.4. CPU Inference Performance
9.3.5. Compilation Artifacts
9.3.6. CCS Project Setup
9.3.7. Basic Integration
9.3.8. Optimizing CPU Inference
9.3.9. Memory Optimization
9.3.10. Power Optimization
9.3.11. Real-Time Considerations
9.3.12. Device-Specific Notes
9.3.13. Example: Vibration Monitoring on MSPM0G3507
9.3.14. CCS Studio Walkthrough: F28004x
9.3.14.1. Step 1 – Import the Project Manually
9.3.14.2. Step 2 – Build the Project
9.3.14.3. Step 3 – Set Target Configuration
9.3.14.4. Step 4 – Flash the Device
9.3.14.5. Step 5 – Debug and Verify
9.3.15. Required Files from ModelMaker
9.3.16. Comparison: NPU vs Non-NPU
9.3.17. Next Steps
9.4. Supported Devices
9.5. Deployment Overview
9.6. Prerequisites
9.7. Output File Locations
9.8. CCS Example Project Locations
9.9. Task-Type-Specific Deployment Notes
9.10. Testing Multiple Cases
9.11. Model Compilation Details
10. Edge AI Studio Model Composer
10.1. Model Composer Overview
10.1.1. What is Model Composer?
10.1.2. Model Composer vs CLI
10.1.3. Accessing Model Composer
10.1.4. Key Features
10.1.5. Supported Workflows
10.1.6. System Requirements
10.1.7. Limitations
10.1.8. Integration with CLI
10.1.9. Getting Started
10.2. Getting Started (GUI)
10.2.1. Prerequisites
10.2.2. Step 1: Access Model Composer
10.2.3. Step 2: Create a Project
10.2.4. Step 3: Upload Dataset
10.2.5. Step 4: Configure Feature Extraction
10.2.6. Step 5: Configure Training
10.2.7. Step 6: Start Training
10.2.8. Step 7: Analyze Results
10.2.9. Step 8: Evaluate on Test Set
10.2.10. Step 9: Export Model
10.2.11. Using Exported Model
10.2.12. GUI Tips and Tricks
10.2.13. Troubleshooting
10.2.14. What’s Next?
10.2.15. Additional Resources
10.3. Exporting Models
10.3.1. Export Options
10.3.2. Exporting as CCS Project
10.3.3. Exporting Artifacts Only
10.3.4. Exporting ONNX Model
10.3.5. Exporting Configuration
10.3.6. Batch Export
10.3.7. Export History
10.3.8. Validating Exports
10.3.9. Export Troubleshooting
10.3.10. Best Practices
10.3.11. Next Steps
10.4. What is Model Composer?
10.5. GUI vs CLI Comparison
11. Bring Your Own Model
11.1. Adding Custom Models
11.1.1. Overview
11.1.2. Quick Summary
11.1.3. Step 1: Choose the Right Model File
11.1.4. Step 2: Create Your Model Class
11.1.4.1. Option A: Spec-Based Model (Recommended)
11.1.4.2. Option B: Custom PyTorch Model
11.1.4.3. NPU-Compatible Model
11.1.5. Available Layer Types
11.1.6. Step 3: Add to __all__
11.1.7. Step 4: Verify Your Model
11.1.8. Step 5 (Optional): Add Device Performance Info
11.1.9. Step 6 (Optional): Add GUI Model Description
11.1.10. Using Your Custom Model
11.1.11. Naming Conventions
11.1.12. Configuring Model Layer Parameters
11.1.13. Troubleshooting
11.1.14. Summary Checklist
11.1.15. Next Steps
11.2. Compilation Only
11.2.1. Overview
11.2.2. ONNX Model Requirements
11.2.3. Compilation Configuration
11.2.4. Running Compilation
11.2.5. Model Formats
11.2.6. Output Artifacts
11.2.7. NPU Compilation
11.2.8. Example: External PyTorch Model
11.2.9. Example: TensorFlow Model
11.2.10. Troubleshooting
11.2.11. Best Practices
11.2.12. Next Steps
11.3. Two Approaches
11.4. Model Requirements
12. Troubleshooting
12.1. Common Errors
12.1.1. Installation Errors
12.1.2. Dataset Errors
12.1.3. Training Errors
12.1.4. Compilation Errors
12.1.5. Quantization Errors
12.1.6. Deployment Errors
12.1.7. Configuration Errors
12.1.8. Getting Help
12.2. FAQ
12.2.1. General Questions
12.2.2. Installation Questions
12.2.3. Dataset Questions
12.2.4. Model Questions
12.2.5. Training Questions
12.2.6. Deployment Questions
12.2.7. Advanced Questions
12.2.8. Support Questions
12.2.9. Still Have Questions?
12.3. Quick Fixes
12.4. Getting Help
13. Appendix
13.1. Configuration Reference
13.1.1. Configuration File Structure
13.1.2. Common Section
13.1.3. Dataset Section
13.1.4. Feature Extraction Section
13.1.5. Training Section
13.1.6. Testing Section
13.1.7. NAS Options (under training section)
13.1.8. Compilation Section
13.1.9. BYOM Section
13.1.10. Complete Example
13.2. Model Zoo Reference
13.2.1. Classification Models
13.2.1.1. Standard Classification
13.2.1.2. NPU Classification
13.2.1.3. Application-Specific Classification (Edge AI Studio Only)
13.2.2. Regression Models
13.2.2.1. Standard Regression
13.2.2.2. NPU Regression
13.2.3. Anomaly Detection Models
13.2.3.1. Standard AD
13.2.3.2. NPU AD
13.2.4. Forecasting Models
13.2.4.1. Standard Forecasting
13.2.4.2. NPU Forecasting
13.2.5. Image Classification Models
13.2.6. Model Selection Guide
13.2.7. Model Architecture Details
13.2.8. Using Models
13.3. Changelog
13.3.1. Version 1.3.0
13.3.2. Version 1.1.0
13.3.3. Version 1.0.0
13.3.4. Migration Guides
13.3.4.1. Migrating from 1.1.x to 1.2.x
13.3.5. Deprecation Notices
13.3.6. Known Issues
13.3.7. Roadmap
13.3.8. Contributing
13.4. Quick Reference
13.5. External Links
Index
A
AM13
Autoencoder
B
BYOD
BYOM
C
C2000
C2000Ware
CCS
F
Feature Extraction
feature_extraction_name
frame_size
G
GoF Test
I
Inference
L
LaunchPad
M
MACs
model_name
ModelMaker
ModelZoo
MSPM0
MSPM33C
N
NAS
NNC
NPU
O
ONNX
P
PTQ
Q
QAT
Quantization
S
stride_size
T
target_device
task_type
TINPU
TinyVerse
V
variables