3.15. Machine Learning
The Sitara machine learning toolkit brings machine learning to the edge by enabling inference on all Sitara devices, whether Arm-only or Arm plus specialized hardware accelerators. It is provided as part of TI’s Processor SDK Linux and is free to download and use. Sitara machine learning today consists of TI Deep Learning (TIDL), Neo-AI-DLR, the TVM runtime, TensorFlow Lite, Arm NN, and an RNN library.
TI Deep Learning (TIDL)
- Accelerates deep learning inference on C66x DSP cores and/or on Embedded Vision Engine (EVE) subsystems.
- Available on AM57x devices only.
- Currently supports convolutional neural networks (CNNs); imports Caffe, ONNX, and TensorFlow models.
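Trained models are converted for TIDL on a host PC with the TIDL import tool, which is driven by a plain-text key/value configuration file. The fragment below is an illustrative sketch for a Caffe model: the key names follow the SDK's import tool, but the file paths are placeholders, and the full key set is documented in the Import Process subsection of the TIDL chapter.

```text
# Illustrative TIDL import configuration (Caffe model).
modelType          = 0                       # 0: Caffe, 1: TensorFlow
inputNetFile       = "deploy.prototxt"       # network definition (placeholder path)
inputParamsFile    = "weights.caffemodel"    # trained weights (placeholder path)
outputNetFile      = "tidl_net_model.bin"    # converted TIDL network
outputParamsFile   = "tidl_param_model.bin"  # converted TIDL parameters
```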
Neo-AI Deep Learning Runtime (DLR)
- Neo-AI-DLR is a new open source machine learning runtime for on-device inference.
- Supports Keras, TensorFlow, TFLite, GluonCV, MXNet, PyTorch, ONNX, and XGBoost models optimized automatically by Amazon SageMaker Neo or the TVM compiler.
- Supports all Cortex-A ARM cores (AM3x, AM4x, AM5x, AM6x Sitara devices).
- On AM5729 and AM5749 devices, uses TIDL to accelerate supported models automatically.
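On the device, running a Neo- or TVM-compiled artifact through DLR takes only a few lines of Python using the `dlr.DLRModel` API. In this sketch the model directory, the input name `data`, and the 224x224 NCHW input shape are assumptions that depend on the compiled model.

```python
import numpy as np

def make_input(batch=1, channels=3, height=224, width=224):
    """Dummy NCHW float32 tensor; real code would load and normalize an image."""
    return np.zeros((batch, channels, height, width), dtype=np.float32)

def run_dlr(model_dir, input_name="data"):
    # Import locally so make_input() stays usable on hosts without the runtime.
    from dlr import DLRModel
    model = DLRModel(model_dir, dev_type="cpu")  # inference on the Cortex-A cores
    outputs = model.run({input_name: make_input()})
    return outputs[0]  # e.g. class scores for a classification model
```

On AM5729/AM5749, supported subgraphs are dispatched to TIDL automatically; the calling code stays the same.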
TVM Runtime
- Open source deep learning runtime for on-device inference, supporting models compiled by the TVM compiler.
- Available on all Cortex-A ARM cores (AM3x, AM4x, AM5x, AM6x Sitara devices).
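For models compiled directly with TVM, the generated library is driven from TVM's graph executor on the target. The API below matches recent TVM releases (`tvm.contrib.graph_executor`); older releases shipped the equivalent `graph_runtime` module, and the library file name and input name here are placeholders. The softmax helper for post-processing is illustrative.

```python
import numpy as np

def softmax(logits):
    """Turn raw model outputs into probabilities (numerically stable)."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def run_tvm(lib_path, input_name, data):
    # Local import: the TVM runtime is only present on the target file system.
    import tvm
    from tvm.contrib import graph_executor
    lib = tvm.runtime.load_module(lib_path)  # e.g. "deploy_lib.so"
    module = graph_executor.GraphModule(lib["default"](tvm.cpu(0)))
    module.set_input(input_name, data)
    module.run()
    return module.get_output(0).numpy()
```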
TensorFlow Lite
- Open source deep learning runtime for on-device inference.
- Runs on all Cortex-A ARM cores (AM3x, AM4x, AM5x, AM6x Sitara devices).
- Imports TensorFlow Lite models.
- Uses the TIDL import tool to create TIDL-offloadable TensorFlow Lite models, which can be executed via the TensorFlow Lite runtime with TIDL acceleration on AM5729 and AM5749 devices.
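A minimal TensorFlow Lite inference loop on the Cortex-A cores looks like the sketch below. The `tflite_runtime` interpreter API is the standard one; the quantization helper and the assumption of a single quantized uint8 input tensor are illustrative.

```python
import numpy as np

def quantize(x, scale, zero_point):
    """Map float data into the uint8 domain of a quantized TFLite input tensor."""
    return np.clip(np.round(x / scale + zero_point), 0, 255).astype(np.uint8)

def run_tflite(model_path, image):
    # The standalone tflite_runtime package ships the same Interpreter API as
    # TensorFlow's tf.lite, without pulling in the full TensorFlow dependency.
    from tflite_runtime.interpreter import Interpreter
    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    scale, zero_point = inp["quantization"]
    interpreter.set_tensor(inp["index"], quantize(image, scale, zero_point))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])
```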
Arm NN
- Open source inference engine available from Arm.
- Runs on all Cortex-A ARM cores (AM3x, AM4x, AM5x, AM6x Sitara devices).
- Imports Caffe, ONNX, TensorFlow, and TensorFlow Lite models.
RNN Library
- Provides Long Short-Term Memory (LSTM) and fully connected layers in a standalone library to allow rapid prototyping of inference applications that require recurrent neural networks.
- Runs on all Cortex-A ARM cores (AM3x, AM4x, AM5x, AM6x Sitara devices).
- Integrated into TI’s Processor SDK Linux in an out-of-box demo for Predictive Maintenance.
- 3.15.1. TI Deep Learning (TIDL)
- 3.15.1.1. Introduction
- 3.15.1.2. Verified networks topologies
- 3.15.1.3. Examples and Demos
- 3.15.1.4. Developer’s guide
- 3.15.1.4.1. Software Stack
- 3.15.1.4.2. Additional public TI resources
- 3.15.1.4.3. Introduction to Programming Model
- 3.15.1.4.4. Target file-system
- 3.15.1.4.5. Input data format
- 3.15.1.4.6. Output data format
- 3.15.1.4.7. Import Process
- 3.15.1.4.8. Verifying TIDL inference result
- 3.15.1.4.9. Parameters controlling dynamic quantization
- 3.15.1.4.10. Importing Tensorflow Models
- 3.15.1.4.11. Importing Caffe Models
- 3.15.1.4.12. Viewer tool
- 3.15.1.4.13. Simulation Tool
- 3.15.1.4.14. Summary of model porting steps
- 3.15.1.5. Compatibility of trained model formats
- 3.15.1.6. Training
- 3.15.1.7. Performance data
- 3.15.1.8. Multi core performance (EVE and DSP cores only)
- 3.15.1.9. Troubleshooting
- 3.15.2. Neo-AI Deep Learning Runtime
- 3.15.3. TVM Runtime
- 3.15.4. TensorFlow Lite
- 3.15.5. Arm NN and Arm Compute Library