TI Deep Learning Product User Guide
TIDL - TI Deep Learning Product

TIDL is a comprehensive software product for acceleration of Deep Neural Networks (DNNs) on TI's embedded devices. It supports heterogeneous execution of DNNs across Cortex-A based MPUs, TI's latest generation C7x DSP, and TI's DNN accelerator (MMA). TIDL is released as part of TI's Software Development Kit (SDK), along with additional computer vision functions and optimized libraries including OpenCV. TIDL is available on a variety of embedded devices from Texas Instruments, as shown below:

Device    SDK
TDA4VM    Processor SDK RTOS
          Processor SDK Linux for Edge AI

TIDL is a fundamental software component of TI's Edge AI solution. TI's Edge AI solution simplifies the whole product life cycle of DNN development and deployment by providing a rich set of tools and optimized libraries. DNN-based product development requires two main streams of expertise:

  • Data Scientists, who design and train DNNs for targeted applications
  • Embedded System Engineers, who design and develop inference solutions for real-time execution of DNNs on low-power embedded devices

TI's Edge AI solution provides the right set of tools for both of these categories:

  • Model zoo: A large collection of pre-trained models for data scientists, which, along with TI's Model Selection Tool, enables picking the ideal model for TI's embedded devices
  • Training and quantization tools for popular frameworks, allowing data scientists to make DNNs more suitable for TI devices
  • Edge AI Cloud: A free online service that lets you evaluate accelerated deep learning inference on TI devices from your browser in minutes
  • Edge AI Benchmark: A Python-based framework for benchmarking both accuracy and performance. Accuracy benchmarks can be run without a development board, but performance benchmarks require one.
  • TIDL: An optimized inference solution primarily targeted at compilation and deployment of pre-trained models. Model compilation happens on an x86 machine, and the associated tools and examples are provided in Edge AI TIDL Tools. Model inference can run either on an x86 machine (host emulation mode) or on a development board with a TI SOC. Edge AI TIDL Tools also provides examples that run directly on the x86 target, and the same examples can be used on a development board with a TI SOC. For deployment and execution on the development board, this package must be used; a minimal compile-and-run sketch is shown after this list.
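
To make the compile-on-x86, run-on-target flow concrete, the sketch below follows the ONNX Runtime path from Edge AI TIDL Tools. It is a minimal sketch, assuming TI's build of ONNX Runtime (which registers the TIDL providers); the model path, input shape, tools path, artifacts folder, and option keys are illustrative placeholders, and the exact set of provider options depends on the tools version.

```python
import numpy as np
import onnxruntime as rt  # assumes TI's ONNX Runtime build with TIDL providers

# Options consumed by the TIDL providers; keys and values here are placeholders.
# See the Edge AI TIDL Tools examples for the full, version-specific list.
tidl_options = {
    "tidl_tools_path": "/opt/tidl_tools",     # hypothetical tools install location
    "artifacts_folder": "./model-artifacts",  # compiled sub-graph artifacts land here
}

# Step 1 (x86 only): compile the model; supported sub-graphs are partitioned
# onto C7x-MMA and the rest stays on the CPU execution provider.
compile_sess = rt.InferenceSession(
    "model.onnx",
    providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
    provider_options=[tidl_options, {}],
)
# The real flow runs several representative calibration images here;
# a zero tensor is used only to keep the sketch self-contained.
dummy = {compile_sess.get_inputs()[0].name: np.zeros((1, 3, 224, 224), np.float32)}
compile_sess.run(None, dummy)

# Step 2 (x86 host emulation or a TI SOC): run inference with the artifacts.
infer_sess = rt.InferenceSession(
    "model.onnx",
    providers=["TIDLExecutionProvider", "CPUExecutionProvider"],
    provider_options=[tidl_options, {}],
)
outputs = infer_sess.run(None, dummy)
```

The same inference script runs unchanged in host emulation mode and on the development board, which is what makes the x86-first workflow practical.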

The figure below illustrates the workflow of DNN development and deployment on TI devices:

[Figure: dnn-workflow.png (DNN development and deployment workflow)]

TIDL provides multiple deployment options with industry-defined inference engines, as listed below. These inference engines are referred to as Open Source Runtimes (OSRT) in this document.

  • TFLite Runtime: TensorFlow Lite based inference with heterogeneous execution on Cortex-A** + C7x-MMA; refer to TensorFlow Lite runtime in Edge AI TIDL Tools for more details. A minimal delegate usage sketch is shown after this list.
  • ONNX Runtime: ONNX Runtime based inference with heterogeneous execution on Cortex-A** + C7x-MMA; refer to ONNX Runtime in Edge AI TIDL Tools for more details.
  • TVM/Neo-AI Runtime: TVM/Neo-AI-DLR based inference with heterogeneous execution on Cortex-A** + C7x-MMA; refer to TVM/Neo-AI-DLR in Edge AI TIDL Tools for more details.
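
As an example of the TFLite Runtime option above, here is a minimal delegate-loading sketch. The delegate library name (libtidl_tfl_delegate.so) and the option key follow the style of the Edge AI TIDL Tools examples and should be treated as assumptions; paths are placeholders.

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Options passed to the TIDL delegate; the key below is an illustrative
# placeholder pointing at artifacts produced during model compilation.
delegate_options = {
    "artifacts_folder": "./model-artifacts",
}

# Loading the TIDL delegate routes supported sub-graphs to C7x-MMA;
# unsupported operators fall back to the Cortex-A CPU.
tidl_delegate = tflite.load_delegate("libtidl_tfl_delegate.so", delegate_options)

interpreter = tflite.Interpreter(
    model_path="model.tflite",
    experimental_delegates=[tidl_delegate],
)
interpreter.allocate_tensors()

# Feed a zero tensor matching the model's input spec and read back the output.
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
result = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
```

Without the delegate, the same script still runs entirely on the CPU, which makes it easy to compare offloaded and non-offloaded execution.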

TIDL also provides an OpenVX based inference solution, referred to as TIDL-RT in this document. It supports execution of DNNs only on C7x-MMA, and the DNNs have to be constructed using the operators supported by TIDL-RT. OSRT uses TIDL-RT in its backend to offload sub-graph(s) to C7x-MMA.

We recommend using OSRT for a better user experience and richer neural network coverage. A comparison across additional criteria is provided below:

Criteria               TIDL-RT                                                                       OSRT
Operator coverage      ~40 accelerated operators                                                     All operators supported by TFLite and ONNX
Inference speed        Best                                                                          Similar to TIDL-RT if the entire DNN is offloaded to C7x-MMA
Application interface  C/C++                                                                         C/C++ and Python
Ease of use            Good                                                                          Better than TIDL-RT due to (A) higher operator coverage, (B) industry standard APIs, (C) Python support
Portability            Portable to any TI SOC with TIDL product support                              TI SOCs, as well as non-TI SOCs with OSRT support enabled
Operating system       Linux (OOB offering); minimal dependency on other HLOS, so easy to migrate    Linux (OOB offering); requires porting of the open source runtime engines (e.g. TFLite RT, ONNX RT) to the 3P OS from the OS vendor
Safety                 Suitable                                                                      Depends on (A) proven-in-use status of the OSRT components and (B) availability of these components from 3P OS vendors for a safety OS

** TDA4VM has a Cortex-A72 as its MPU; refer to the device TRM to determine which Cortex-A MPU a given device contains.