TI Deep Learning Product User Guide
TIDL - TI Deep Learning Product

TIDL is a comprehensive software product for accelerating Deep Neural Networks (DNNs) on TI's embedded devices. It supports heterogeneous execution of DNNs across Cortex-A based MPUs, TI's latest generation C7x DSP, and TI's DNN accelerator (MMA). TIDL is released as part of TI's Software Development Kit (SDK), along with additional computer vision functions and optimized libraries, including OpenCV. TIDL is available on a variety of embedded devices from Texas Instruments, as shown below:

Device      SDK
TDA4VM**    Processor SDK RTOS, EdgeAI Development Kit

TIDL is a fundamental software component of TI's Edge AI solution, which simplifies the whole product life cycle of DNN development and deployment by providing a rich set of tools and optimized libraries. DNN-based product development requires two main streams of expertise:

  • Data Scientists, who can design and train DNNs for targeted applications
  • Embedded System Engineers, who can design and develop inference solutions for real-time execution of DNNs on low-power embedded devices

TI's Edge AI solution provides the right set of tools for both of these categories:

  • Model zoo: A large collection of pre-trained models which, along with TI's Model Selection Tool, enables data scientists to pick the ideal model for TI's embedded devices
  • Training and quantization tools for popular frameworks, allowing data scientists to make DNNs more suitable for TI devices (a generic quantization sketch follows this list)
  • EdgeAI Cloud: A free online service that lets you evaluate accelerated deep learning inference on TI devices from your browser in minutes
  • EdgeAI Benchmark: A Python based framework for running accuracy and performance benchmarks. Accuracy benchmarks can be run without a development board, but performance benchmarks require one.
  • TIDL: Optimized inference solutions primarily targeted at deployment of pre-trained models; this user guide focuses on TIDL. Both EdgeAI Cloud and EdgeAI Benchmark depend on TIDL.
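
As an illustration of the training and quantization step mentioned above, below is a minimal quantization-aware training (QAT) sketch in plain PyTorch. This is generic PyTorch tooling, not TI's training or quantization tools; it only shows the kind of fake-quantization fine-tuning that makes a DNN friendlier to fixed-point inference on embedded devices.

    # Generic PyTorch QAT sketch; NOT TI's tooling, shown only for illustration.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        torch.ao.quantization.QuantStub(),   # marks where tensors get quantized
        nn.Conv2d(3, 8, 3), nn.ReLU(),
        torch.ao.quantization.DeQuantStub(), # marks where tensors get dequantized
    )
    model.train()
    model.qconfig = torch.ao.quantization.get_default_qat_qconfig("qnnpack")
    torch.ao.quantization.prepare_qat(model, inplace=True)

    # ... run a few fine-tuning epochs here so the fake-quant observers settle ...

    model.eval()
    quantized = torch.ao.quantization.convert(model)  # int8 model for deployment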

The figure below illustrates the work flow of DNN development and deployment on TI devices:

[Figure: dnn-workflow.png]

TIDL provides multiple deployment options through industry-defined inference engines. These inference engines are referred to as Open Source Run Times (OSRT) in this document.
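
As a concrete illustration of an OSRT deployment flow, below is a minimal sketch of running a TFLite model through the TFLite runtime with a TIDL delegate. The delegate library name (libtidl_tfl_delegate.so), the option key (artifacts_folder), and the model path are assumptions based on typical TIDL OSRT examples; consult the SDK documentation for the exact names and for the artifact compilation step that must run first.

    import numpy as np
    import tflite_runtime.interpreter as tflite

    # Assumed delegate library and option key; check the SDK for exact values.
    delegate_options = {"artifacts_folder": "./model-artifacts"}
    tidl_delegate = tflite.load_delegate("libtidl_tfl_delegate.so", delegate_options)

    # Subgraphs supported by TIDL run on the C7x-MMA; the rest stay on the MPU.
    interpreter = tflite.Interpreter(
        model_path="model.tflite",
        experimental_delegates=[tidl_delegate],
    )
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
    interpreter.invoke()
    out = interpreter.get_output_details()[0]
    result = interpreter.get_tensor(out["index"])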

TIDL also provides an OpenVX based inference solution, referred to as TIDL-RT in this document. It supports execution of DNNs only on the C7x-MMA, and the DNNs must be constructed using operators supported by TIDL-RT. OSRT uses TIDL-RT as part of its backend to offload supported sub graph(s) to the C7x-MMA.
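
The same offload pattern applies with ONNX Runtime: a hedged sketch follows, assuming a TIDLExecutionProvider registered by the SDK's ONNX Runtime build and an artifacts_folder option key taken from typical TIDL OSRT examples. Sub graphs accepted by the TIDL provider run on the C7x-MMA; everything else falls back to the CPUExecutionProvider on the Cortex-A MPU.

    import numpy as np
    import onnxruntime as ort

    # Assumed provider name and option key; check the SDK for exact values.
    tidl_options = {"artifacts_folder": "./model-artifacts"}
    session = ort.InferenceSession(
        "model.onnx",
        providers=["TIDLExecutionProvider", "CPUExecutionProvider"],
        provider_options=[tidl_options, {}],
    )

    # Example input shape; a real model defines its own input dimensions.
    name = session.get_inputs()[0].name
    outputs = session.run(None, {name: np.zeros((1, 3, 224, 224), dtype=np.float32)})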

We recommend using OSRT for a better user experience and richer coverage of neural networks.

** TDA4VM has a Cortex-A72 as its MPU; refer to the device TRM to determine which Cortex-A MPU a given device contains.