TI Deep Learning Product User Guide
Getting Started with TIDL-RT

Introduction

Important Note: Readers are advised to read the TIDL Product Summary and TIDL-RT Overview before reading this page.

This page is targeted at readers who have already downloaded TIDL and want to work with TIDL-RT, the low-level user interface of TIDL. Please note that TIDL also provides a higher-level software interface via open source runtimes, as described in Open Source Runtime. Users are strongly encouraged to work with the open source runtime interface.

Note
Before proceeding, please make sure that

  • You can see the software package in your system as described in TIDL package contents.
  • You have downloaded all the dependencies as described.
  • If you are not familiar with the concepts of deep learning or machine learning, and this is your first experience with convolutional neural networks (CNNs), it is recommended that you get started here.

Using TIDL-RT, you are expected to be able to:

  • Import trained network models into .bin files that can be used by TIDL-RT Inference.
    • tidlModelImport converts networks trained via open source frameworks (like Caffe or TensorFlow) into a format that TIDL-RT Inference can use to execute these networks for inference.
    • tidlModelImport uses the quantization statistics tool internally to measure any deviation in inference accuracies and layer level outputs arising due to quantization.
    • tidlModelImport uses the graph compiler tool internally to generate optimized execution order and dataflow sequences to maximize inference performance.
  • Execute the network on a PC using the imported .bin files and validate the results.
  • Execute the network on a development board using the imported .bin files and validate the results.

Note
The following model formats are currently supported:

  • Caffe models (using .caffemodel and .prototxt files)
  • TensorFlow models (using .pb or .tflite files)
  • ONNX models (.onnx files)

This guide demonstrates the above features and documents the steps to help you get started with TIDL-RT inference.

Setting up the environment

Follow the steps defined in the Dependent Software Components section.

Importing models

This section elaborates on the import process with examples of importing models trained with Caffe and TensorFlow into TIDL-RT. The following models are used in this guide:

  • MobileNetV2 TensorFlow model for image classification
  • PeleeNet Caffe model for object detection
  • JSegNet21V2 Caffe model for semantic segmentation

Note
The installation does not create the directories required for storing the downloaded models mentioned in this guide. Create these directories as required if they are not already present.

Note
The paths used in this guide are for the purpose of demonstration. You can create your own configuration file (containing the appropriate paths for collecting input models and storing output files), and pass that file's location as an argument to the tidlModelImport tool.

Importing MobileNetV2 model for image classification

Downloading the model

Download the tarball containing the trained model from here.

You need to extract mobilenet_v2_1.0_224_frozen.pb from the tarball and place it in the ti_dl/test/testvecs/models/public/tensorflow/mobilenet_v2 directory.

The downloaded TensorFlow models must be optimized for inference before they can be imported. Execute optimize_for_inference.py (distributed with the TensorFlow installation) to create an optimized model file.

user@ubuntu-pc$ python optimize_for_inference.py \
--input=${TIDL_INSTALL_PATH}/ti_dl/test/testvecs/models/public/tensorflow/mobilenet_v2/mobilenet_v2_1.0_224_frozen.pb \
--output=${TIDL_INSTALL_PATH}/ti_dl/test/testvecs/models/public/tensorflow/mobilenet_v2/mobilenet_v2_1.0_224_final.pb \
--input_names="input" \
--output_names="MobilenetV2/Predictions/Softmax"
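
If you adapt this step to your own frozen graph, the tensor names passed to --input_names and --output_names must match the nodes in that graph. The following minimal Python sketch (an illustration assuming a TensorFlow 1.x-style frozen GraphDef; adjust the file path to your model) lists the node names so you can identify suitable values:

# List the node names in a frozen GraphDef to find candidate values for
# the --input_names and --output_names arguments of optimize_for_inference.py.
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("mobilenet_v2_1.0_224_frozen.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.name, node.op)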

Importing the model

To import models using the tidlModelImport tool, you need a configuration file that provides the import parameters to the tool. The various parameters and the supported values for each parameter are documented here.

You can use the configuration file ti_dl/test/testvecs/config/import/public/tensorflow/tidl_import_mobileNetv2.txt (distributed with the installation) to import this model.
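
For reference, an import configuration is a plain text file of key = value pairs. The excerpt below is an illustrative sketch only: the parameter names shown (modelType, inputNetFile, outputNetFile, outputParamsFile, and the input dimensions) are modeled on the shipped MobileNetV2 configuration, and the authoritative list of parameters and supported values is in the parameter documentation referenced above.

modelType        = 1
inputNetFile     = "../../test/testvecs/models/public/tensorflow/mobilenet_v2/mobilenet_v2_1.0_224_final.pb"
outputNetFile    = "../../test/testvecs/config/tidl_models/tensorflow/tidl_net_mobilenet_v2_1.0_224.bin"
outputParamsFile = "../../test/testvecs/config/tidl_models/tensorflow/tidl_io_mobilenet_v2_1.0_224_"
inWidth          = 224
inHeight         = 224
inNumChannels    = 3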

Execute the import tool to import the model.

For Linux Users

user@ubuntu-pc$ cd ${TIDL_INSTALL_PATH}/ti_dl/utils/tidlModelImport
user@ubuntu-pc$ ./out/tidl_model_import.out ${TIDL_INSTALL_PATH}/ti_dl/test/testvecs/config/import/public/tensorflow/tidl_import_mobileNetv2.txt --numParamBits 15

For Windows Users

C:\> cd %TIDL_INSTALL_PATH%\ti_dl\utils\tidlModelImport
C:\> out\tidl_model_import.out.exe %TIDL_INSTALL_PATH%\ti_dl\test\testvecs\config\import\public\tensorflow\tidl_import_mobileNetv2.txt --numParamBits 15

Note
MobileNetV2 trained on TensorFlow needs 16-bit precision for better accuracy. Therefore, the default value of numParamBits must be overridden while importing.

The import tool performs quantization, carries out graph compilation, and generates the following files:

  • Compiled network and I/O .bin files used for inference
    • Compiled network file in ti_dl/test/testvecs/config/tidl_models/tensorflow/tidl_net_mobilenet_v2_1.0_224.bin, containing the layers in the order of execution and the layer parameters (weights, biases, etc.).
    • Compiled I/O file in ti_dl/test/testvecs/config/tidl_models/tensorflow/tidl_io_mobilenet_v2_1.0_224_1.bin, containing the dataflow sequences.

If you built the tidlModelGraphviz tool as described in the Dependent Software Components section, the network graph representation is also generated in ti_dl/test/testvecs/config/tidl_models/tensorflow/tidl_net_mobilenet_v2_1.0_224.bin.svg.

Importing PeleeNet model for object detection

Downloading the model

Download the tarball containing the trained model from here.

You need to extract pelee_304x304_acc7094.caffemodel and deploy.prototxt from the tarball and place them in the ti_dl/test/testvecs/models/public/caffe/peele/pelee_voc/ directory.

The downloaded PeleeNet model should be imported with a higher confidence_threshold value for better accuracy. Modify the file ti_dl/test/testvecs/models/public/caffe/peele/pelee_voc/deploy.prototxt to use 0.4 as the confidence_threshold:

...
keep_top_k: 200
confidence_threshold: 0.4
}
...

Importing the model

After the above changes are made to the model, it can be imported by following steps similar to those described in Importing MobileNetV2 model for image classification. You can use ti_dl/test/testvecs/config/import/public/caffe/tidl_import_peeleNet.txt (distributed with the installation) to import the model.

For Linux Users

user@ubuntu-pc$ cd ${TIDL_INSTALL_PATH}/ti_dl/utils/tidlModelImport
user@ubuntu-pc$ ./out/tidl_model_import.out ${TIDL_INSTALL_PATH}/ti_dl/test/testvecs/config/import/public/caffe/tidl_import_peeleNet.txt

For Windows Users

C:\> cd %TIDL_INSTALL_PATH%\ti_dl\utils\tidlModelImport
C:\> out\tidl_model_import.out.exe %TIDL_INSTALL_PATH%\ti_dl\test\testvecs\config\import\public\caffe\tidl_import_peeleNet.txt

The following files are generated by the import:

  • Compiled network and I/O .bin files used for inference
    • Compiled network file in ti_dl/test/testvecs/config/tidl_models/caffe/tidl_net_peele_300.bin
    • Compiled I/O file in ti_dl/test/testvecs/config/tidl_models/caffe/tidl_io_peele_300_1.bin

If you built the tidlModelGraphviz tool as described in the Dependent Software Components section, the network graph representation is also generated in ti_dl/test/testvecs/config/tidl_models/caffe/tidl_net_peele_300.bin.svg.

Importing JSegNet21V2 model for semantic segmentation

Downloading the model

Download the trained model from here and place it in the ti_dl/test/testvecs/models/public/caffe/jsegNet21 directory.

Note
Do not use the "Save link as" or "Copy link location" options to download binary files from github.com; use the Download button instead.

Download the deploy.prototxt file from here and place it in the ti_dl/test/testvecs/models/public/caffe/jsegNet21 directory.

Note
Do not use the "Save link as" or "Copy link location" options to download text files from github.com; use the Raw button to open the file and then use the "Save as..." option.

Importing the model

The model can be imported by following steps similar to those described in Importing MobileNetV2 model for image classification. You can use ti_dl/test/testvecs/config/import/public/caffe/tidl_import_jSegNet.txt (distributed with the installation) to import the model.

For Linux Users

user@ubuntu-pc$ cd ${TIDL_INSTALL_PATH}/ti_dl/utils/tidlModelImport
user@ubuntu-pc$ ./out/tidl_model_import.out ${TIDL_INSTALL_PATH}/ti_dl/test/testvecs/config/import/public/caffe/tidl_import_jSegNet.txt

For Windows Users

C:\> cd %TIDL_INSTALL_PATH%\ti_dl\utils\tidlModelImport
C:\> out\tidl_model_import.out.exe %TIDL_INSTALL_PATH%\ti_dl\test\testvecs\config\import\public\caffe\tidl_import_jSegNet.txt

The following files are generated by the import:

  • Compiled network and I/O .bin files used for inference
    • Compiled network file in ti_dl/test/testvecs/config/tidl_models/caffe/tidl_net_jSegNet_1024x512.bin
    • Compiled I/O file in ti_dl/test/testvecs/config/tidl_models/caffe/tidl_io_jSegNet_1024x512_1.bin

If you built the tidlModelGraphviz tool as described in the Dependent Software Components section, the network graph representation is also generated at ti_dl/test/testvecs/config/tidl_models/caffe/tidl_net_jSegNet_1024x512.bin.svg.

Executing imported models on an x86 PC

The installation comes with a PC simulation tool, ti_dl/test/PC_dsp_test_dl_algo.out, that can be used to execute imported .bin files and verify the inference results before running them on a development board. This makes it easy to identify and debug issues with the model.

The PC simulation tool uses the file ti_dl/test/testvecs/config/config_list.txt to read the list of inference tests to run. The format of the file is as follows:

1 /path/to/inference/parameter/file/to/be/executed/1
1 /path/to/inference/parameter/file/to/be/executed/2
1 /path/to/inference/parameter/file/to/be/executed/3
2 /comment/line/./continue/to/the/next/line
2 /comment/line/./continue/to/the/next/line
1 /path/to/inference/parameter/file/to/be/executed/4
1 /path/to/inference/parameter/file/to/be/executed/5
0 /stop/processing/after/this/line
...

Each line that starts with 1 must point to a file containing the inference parameters to run an imported model. The various parameters and the supported values for each parameter are documented here.

Note
The lines that start with 2 are ignored.
The test sequence stops when it hits a line that starts with 0.

Note
The paths in the file ti_dl/test/testvecs/config/config_list.txt (e.g. /path/to/inference/parameter/file/to/be/executed/1) must be relative to ti_dl/test.
For example, to add the inference parameter file ti_dl/test/testvecs/config/infer/public/caffe/tidl_infer_pelee.txt to the list, you must add the following:

1 testvecs/config/infer/public/caffe/tidl_infer_pelee.txt

In this section, we will test the models imported in Importing models using the inference parameter files distributed with the installation.

Executing MobileNetV2 model for image classification

Add the following lines at the beginning of ti_dl/test/testvecs/config/config_list.txt:

1 testvecs/config/infer/public/tensorflow/tidl_infer_mobileNetv2.txt
0

Execute the PC_dsp_test_dl_algo.out command from the ti_dl/test directory.

user@ubuntu-pc$ cd ${TIDL_INSTALL_PATH}/ti_dl/test
user@ubuntu-pc$ ./PC_dsp_test_dl_algo.out
Processing config file #0 : testvecs/config/infer/public/tensorflow/tidl_infer_mobileNetv2.txt
----------------------- TIDL Process with REF_ONLY FLOW ------------------------
# 0 . .. T 655.09 ... A : 896, 1.0000, 1.0000, 896 .... .....
user@ubuntu-pc$

Decoding the output

The test uses ti_dl/test/testvecs/input/airshow.bmp as the input image and 896 as the test label.

(Figure: airshow.bmp, the input image for classification)

Note
This information is obtained from the input configuration file ti_dl/test/testvecs/config/classification_list_1.txt.
The input configuration file is provided via the inData parameter in ti_dl/test/testvecs/config/infer/public/tensorflow/tidl_infer_mobileNetv2.txt.

The numbers printed by the classification test represent the following information:

  • Time taken by PC simulation
  • Classification Results
    • Test image label
    • TOP-1 accuracy
    • TOP-5 accuracy
    • Inferred label

For example, the above output indicates that the simulation took 655.09 milliseconds, the test input label was 896, the inferred label was 896, and the TOP-1 and TOP-5 accuracies were both 1.0.
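
If you want to extract these fields from the simulator log programmatically, the minimal Python sketch below (an illustration that assumes only the log line format shown above: the simulation time after the T marker and the classification fields after the "A :" marker) pulls them out with a regular expression:

import re

# Sample log line from the classification test above.
line = "# 0 . .. T 655.09 ... A : 896, 1.0000, 1.0000, 896 .... ....."

m = re.search(r"T\s+([\d.]+).*?A\s*:\s*(\d+),\s*([\d.]+),\s*([\d.]+),\s*(\d+)", line)
if m:
    time_ms, label, top1, top5, inferred = m.groups()
    print(f"time={time_ms} ms, label={label}, top1={top1}, "
          f"top5={top5}, inferred={inferred}")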

Executing PeleeNet model for object detection

Add the following lines at the beginning of ti_dl/test/testvecs/config/config_list.txt:

1 testvecs/config/infer/public/caffe/tidl_infer_pelee.txt
0

Execute the PC_dsp_test_dl_algo.out command from the ti_dl/test directory.

user@ubuntu-pc$ cd ${TIDL_INSTALL_PATH}/ti_dl/test
user@ubuntu-pc$ ./PC_dsp_test_dl_algo.out
Processing config file #0 : testvecs/config/infer/public/caffe/tidl_infer_pelee.txt
----------------------- TIDL Process with REF_ONLY FLOW ------------------------
# 0 . .. T 8213.45 ... .... .....
user@ubuntu-pc$

Decoding the output

The test uses ti_dl/test/testvecs/input/ti_lindau_000020.bmp as the input image for object detection.

The test prints the time taken by PC simulation and stores the list of detected objects and the coordinates for each detected object in ti_dl/test/testvecs/output/pelee.bin_ti_lindau_000020.bmp_000000.txt. It also generates a post-processed output image at ti_dl/test/testvecs/output/pelee.bin_ti_lindau_000020.bmp_000000_tidl_post_proc2.png that shows the detected objects and the bounding boxes.

(Figures: in_ti_lindau_000020.bmp, the input image to the object detection application; out_ti_lindau_000020.png, the output of the object detection application)

Executing JSegNet21V2 model for semantic segmentation

Add the following lines at the beginning of ti_dl/test/testvecs/config/config_list.txt:

1 testvecs/config/infer/public/caffe/tidl_infer_jSegNet.txt
0

Execute the PC_dsp_test_dl_algo.out command from the ti_dl/test directory.

user@ubuntu-pc$ cd ${TIDL_INSTALL_PATH}/ti_dl/test
user@ubuntu-pc$ ./PC_dsp_test_dl_algo.out
Processing config file #0 : testvecs/config/infer/public/caffe/tidl_infer_jSegNet.txt
----------------------- TIDL Process with REF_ONLY FLOW ------------------------
# 0 . .. T 5236.66 ... .... .....
user@ubuntu-pc$

The test uses ti_dl/test/testvecs/input/ti_lindau_I00000.bmp as the input image for semantic segmentation.

The test prints the time taken by PC simulation and generates a post-processed output image at ti_dl/test/testvecs/output/jsegNet1024x512.bin_ti_lindau_I00000.bmp_000000_tidl_post_proc3.png that shows the generated segmentation masks for the detected objects.

(Figures: in_ti_lindau_I00000.bmp, the input image to the semantic segmentation application; out_ti_lindau_I00000.png, the output of the semantic segmentation application)

Note
You can also execute all 3 of the above tests with a single run of PC_dsp_test_dl_algo.out by adding the following lines at the beginning of ti_dl/test/testvecs/config/config_list.txt:

1 testvecs/config/infer/public/tensorflow/tidl_infer_mobileNetv2.txt
1 testvecs/config/infer/public/caffe/tidl_infer_pelee.txt
1 testvecs/config/infer/public/caffe/tidl_infer_jSegNet.txt
0

Executing imported models on a Development Board

The .bin files generated by tidlModelImport can be tested on a development board. This section describes the steps required to execute the imported models on Jacinto7 SoC based development boards using the TI_DEVICE_a72_test_dl_algo_host_rt.out binary (distributed with the installation). We will use the same three models imported in the Importing models section.

H/W requirements

  • TI Jacinto7 EVM
    • The EVM should be programmed to SD-boot mode as described in the SDK user guide

Preparing the SD card

  • Run the commands below on your Linux machine to copy the imported models, input files, and binaries required by the TIDL application to the target:
user@ubuntu-pc$ cd ${PSDKRA_PATH}/vision_apps
user@ubuntu-pc$ make linux_fs_install_sd

Booting up the EVM

Insert the SD card into the EVM, power on the EVM, and wait for Linux to finish booting. Log in as root and run the commands below to execute the TIDL application:

root@j7-evm:~# cd /opt/tidl_test
root@j7-evm:/opt/tidl_test# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib
root@j7-evm:/opt/tidl_test# ./TI_DEVICE_a72_test_dl_algo_host_rt.out

Decoding the output

Processing config file #0 : testvecs/config/infer/public/caffe/tidl_infer_jSegNet.txt
Syncd
----------------------- TIDL Process with TARGET DATA FLOW ------------------------
# 0 . .. TSC Mega Cycles = 8.58 ... .... .....
Processing config file #0 : testvecs/config/infer/public/caffe/tidl_infer_pelee.txt
----------------------- TIDL Process with TARGET DATA FLOW ------------------------
# 0 . .. TSC Mega Cycles = 14.04 ... .... .....
Processing config file #0 : testvecs/config/infer/public/tensorflow/tidl_infer_mobileNetv2.txt
----------------------- TIDL Process with TARGET DATA FLOW ------------------------
# 0 . .. TSC Mega Cycles = 6.33 ... A : 896, 1.0000, 1.0000, 896 .... .....

The test output printed on the console shows the number of mega cycles taken for each test case. Assuming the C7x is running at 1 GHz (1000 MHz), the time taken per frame and the FPS for each test can be calculated as:

Time taken per frame in milliseconds = (1000 / C7x CPU clock in MHz) x Number of mega cycles
FPS = 1000 / Time taken per frame in milliseconds
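
The same arithmetic as a minimal Python sketch (the 1 GHz C7x clock is the assumption stated above, and the mega cycle counts are taken from the sample output):

# Convert reported TSC mega cycle counts into per-frame latency and FPS.
C7X_CLOCK_MHZ = 1000.0  # assumed C7x clock: 1 GHz

def frame_stats(mega_cycles):
    time_ms = (1000.0 / C7X_CLOCK_MHZ) * mega_cycles  # time taken per frame
    fps = 1000.0 / time_ms                            # frames per second
    return time_ms, fps

for name, cycles in [("JSegNet21V2", 8.58), ("PeleeNet", 14.04), ("MobileNetV2", 6.33)]:
    t, f = frame_stats(cycles)
    print(f"{name}: {t:.2f} ms/frame, {f:.2f} FPS")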

For example, from the output above:

Test          Mega cycles count    Time taken per frame (ms)    FPS
JSegNet21V2   8.58                 8.58                         116.55
PeleeNet      14.04                14.04                        71.23
MobileNetV2   6.33                 6.33                         157.98

For image classification tests, the input class and the inferred class are also printed (e.g. in the MobileNetV2 test).

After all the tests are complete, the post-processed images for object detection and semantic segmentation are stored in testvecs/output.

Validating test output

Take the SD card out of the EVM and plug it into a PC. After the SD card is mounted at ${SDCARD_MOUNT_DIR}, you can check the contents of ${SDCARD_MOUNT_DIR}/opt/tidl_test/testvecs/output.

The post-processed output files should be present at the following locations:

  • Object detection output in ${SDCARD_MOUNT_DIR}/opt/tidl_test/testvecs/output/pelee.bin_ti_lindau_000020.bmp_000000_tidl_post_proc2.bmp
  • Semantic segmentation output in ${SDCARD_MOUNT_DIR}/opt/tidl_test/testvecs/output/jsegNet1024x512.bin_ti_lindau_I00000.bmp_000000_tidl_post_proc3.bmp

Summary

This document provides a step-by-step approach to getting started with TIDL-RT.

  1. You can refer to the TIDL-RT section here
    1. The list of all supported layers and their configurations can be found here
    2. The list of all CNN models validated by TI can be found here
      • If you choose one of the networks that TI has already validated as your backbone network, you can expect rapid deployment of the model
  2. Import your trained network to TIDL-RT .bin files using the tidlModelImport tool on a PC.
    • You can develop / train your deep learning application / network on a PC using open source frameworks (TensorFlow, Caffe, or PyTorch)
    • Further details of tidlModelImport can be found here
  3. Validate the inference result from the import process in host emulation, and make sure it produces the expected output. Enable post-processing (if applicable) to validate the output quickly.
    • The details of post-processing can be found here
    • Further details of TIDL-RT sample application can be found here
  4. If you want to validate your model in the SDK, please refer to the SDK's demos / docs.
    • The SDK also supports host emulation, and we recommend host emulation mode for debugging any integration issues.
  5. Execute your network on the Jacinto7 EVM to measure performance using the TIDL-RT target test application.
  6. You can refer to the troubleshooting guide here for debugging issues.