3.12. Machine Learning
The Sitara Machine Learning toolkit brings machine learning to the edge by enabling machine learning inference on all Sitara devices (Arm-only, Arm + GPU, and Arm + specialized hardware accelerators). It is provided as part of TI's Processor SDK Linux and is free to download and use. Sitara machine learning today consists of TensorFlow Lite, ONNX Runtime, Arm NN, the Arm Compute Library, and NNStreamer.
Fig. 3.5 Sitara Machine Learning Offering
**TensorFlow Lite**

- Open source deep learning runtime for on-device inference.
- Runs on all Arm Cortex-A cores (AM3x, AM4x, AM6x Sitara devices).
- Imports TensorFlow Lite models.
- Uses the TIDL import tool to create TIDL-offloadable TensorFlow Lite models, which can be executed via the TensorFlow Lite runtime with TIDL acceleration.
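A minimal sketch of running a TensorFlow Lite model with the TensorFlow Lite Python API, optionally offloading supported operators to an external delegate shared library. The model and delegate paths are placeholders, and `tflite_runtime` is assumed to be installed on the target; consult the SDK for the actual TIDL delegate library name.

```python
def run_tflite(model_path, input_data, delegate_lib=None):
    """Run one inference with the TensorFlow Lite interpreter.

    If delegate_lib (a .so path) is given, supported ops are offloaded to
    that delegate; otherwise everything runs on the Cortex-A CPU.
    input_data must already match the model's input dtype and shape.
    """
    from tflite_runtime.interpreter import Interpreter, load_delegate

    delegates = [load_delegate(delegate_lib)] if delegate_lib else []
    interpreter = Interpreter(model_path=model_path,
                              experimental_delegates=delegates)
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], input_data)
    interpreter.invoke()

    out = interpreter.get_output_details()[0]
    return interpreter.get_tensor(out["index"])
```

On images built with full TensorFlow instead of the slim `tflite_runtime` package, the same classes are available as `tensorflow.lite.Interpreter` and `tensorflow.lite.experimental.load_delegate`.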
**ONNX Runtime**

- Open source, cross-platform inference engine for ONNX models.
- Runs on all Arm Cortex-A cores (AM3x, AM4x, AM6x Sitara devices).
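A minimal ONNX Runtime sketch, assuming the `onnxruntime` Python package is installed on the target. The model path and input dictionary are placeholders; the Arm Compute Library execution provider (`ACLExecutionProvider`) is only selectable when the runtime was built with ACL support.

```python
def run_onnx(model_path, input_dict):
    """Run one inference, preferring the ACL execution provider if built in."""
    import onnxruntime as ort

    # Filter the preferred providers against what this build actually offers.
    available = ort.get_available_providers()
    providers = [p for p in ("ACLExecutionProvider", "CPUExecutionProvider")
                 if p in available]

    session = ort.InferenceSession(model_path, providers=providers)
    # input_dict maps input names to numpy arrays; None returns all outputs.
    return session.run(None, input_dict)
```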
**Arm NN**

- Open source inference engine available from Arm.
- Runs on all Arm Cortex-A cores (AM3x, AM4x, AM6x Sitara devices).
- Imports ONNX and TensorFlow Lite models.
- Provides a TensorFlow Lite delegate.
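The Arm NN TensorFlow Lite delegate is distributed as a shared library that can be loaded from the TensorFlow Lite Python API. A hypothetical sketch follows; the library name and the `backends` option key are assumptions to verify against the SDK's installed files and the Arm NN delegate documentation.

```python
def load_armnn_delegate(delegate_path="libarmnnDelegate.so"):
    """Load the Arm NN external delegate for use with a TFLite interpreter.

    "backends" lists Arm NN backends in priority order: CpuAcc (NEON-
    accelerated via the Arm Compute Library), then CpuRef as a fallback.
    """
    from tflite_runtime.interpreter import load_delegate
    return load_delegate(delegate_path, options={"backends": "CpuAcc,CpuRef"})
```

The returned delegate would then be passed in the `experimental_delegates` list when constructing a TFLite `Interpreter`.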
**Arm Compute Library**

- Open source library of optimized compute functions available from Arm.
- Runs on all Arm Cortex-A cores (AM3x, AM4x, AM6x Sitara devices).
- Provides highly optimized kernels for NEON (Advanced SIMD) CPU acceleration.
- Used as a backend to accelerate ML frameworks such as Arm NN.
**NNStreamer**

- Open source framework, based on GStreamer, for building neural network pipelines.
- Runs on all Arm Cortex-A cores (AM3x, AM4x, AM6x Sitara devices).
- Supports multiple backends, such as TensorFlow Lite and Arm NN.
- Enables easy integration of ML inference into streaming pipelines.
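As a sketch of how NNStreamer drops inference into a GStreamer pipeline, the snippet below assembles a pipeline description that feeds camera frames through `tensor_converter` into a `tensor_filter` running a TensorFlow Lite model. The model path and frame size are placeholders; the string could be launched with `gst-launch-1.0` or `Gst.parse_launch()`.

```python
def build_pipeline(model_path, width=224, height=224):
    """Compose an NNStreamer pipeline description string (camera -> model)."""
    return (
        "v4l2src ! videoconvert ! videoscale ! "
        f"video/x-raw,width={width},height={height},format=RGB ! "
        "tensor_converter ! "
        f"tensor_filter framework=tensorflow-lite model={model_path} ! "
        "tensor_sink"
    )

# Placeholder model path for illustration only.
desc = build_pipeline("/usr/share/models/mobilenet.tflite")
```

Swapping the backend is a matter of changing the `framework=` property of `tensor_filter`, which is what makes NNStreamer convenient for comparing runtimes.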
| ML inference library | Version | Delegate / Execution provider | Python API | C/C++ API |
|---|---|---|---|---|
| TensorFlow Lite | 2.20.0 | CPU, XNNPACK, ARMNN | Yes | Yes |
| ONNX Runtime | 1.23.2 | CPU, ACL | Yes | Yes |
| Arm NN | 26.01 | ACL | Yes | Yes |
| Arm Compute Library | 52.7.0 | NA (backend library) | Yes | Yes |
| NNStreamer | 2.6.0 | NA (pipeline framework) | Yes | Yes |