TI Deep Learning Product User Guide
TIDL-RT Inference with Sample Application


This section provides the steps to run an imported model using the TIDL-RT inference library and a sample test application. Please refer to the TIDL Model Import section for details on importing pre-trained models. Users should attempt this step only after validating the output of the import step against the expected results.

The sample application binaries available in "ti_dl/test/" can be used to run inference on imported models.

Sample applications are built for the Host PC (emulation) and the target (J7ES EVM). The Host PC (emulation) and the target shall produce the same results.

Users should first run the host emulation on a PC (Windows or Linux) by following the steps mentioned here, and validate the output against the expected results. After validating in host emulation mode, inference for the model can be run on the EVM and its output compared against the output obtained from host emulation.

Steps to run host emulation mode on PC

cd ti_dl/test
PC_dsp_test_dl_algo.out

By default, this binary reads the configuration list file "ti_dl/test/testvecs/config/config_list.txt" and runs all the configuration files listed in it.

It is possible to run a different configuration list by running it as:

PC_dsp_test_dl_algo.out relative/path/to/some/other/config/list/txt

It is also possible to run a single inference configuration file by running it as:

PC_dsp_test_dl_algo.out s:relative/path/to/single/infer/config/txt

Inference Configuration Parameters

Parameter Description
netBinFile Input TIDL model with Net and Parameters - generated by Import tools
ioConfigFile Input and output buffer descriptor file for TIDL ivision interface - generated by Import tools
inData File name for reading the Input tensors. inData is further interpreted based on inFileFormat as follows:
inFileFormat = 0 : inData is directly used for providing compressed image (JPEG/PNG/BMP). This is only supported for single input networks.
inFileFormat = 1 : inData is read as a raw binary file. Multiple inputs should be concatenated into a single binary file. Multiple frames should also be concatenated in a single binary file.
inFileFormat = 2 : inData is read as a text file containing a list of inputs, with each row holding all the inputs to the network, separated by spaces.
Note: Only BMP image format is supported for execution on EVM (target).
outData File to which the output tensors are written
inFileFormat 0: inData is read as Compressed Image (JPEG/PNG/BMP)
1: inData is read as a raw binary file.
2: inData is read from a text file containing the list of compressed images
When inFileFormat = 1, the type of the raw data can be defined using rawDataInElementType. If rawDataInElementType is not set, the raw data type is assumed to be the same as inElementType.
rawDataInElementType This parameter is only applicable when inFileFormat is 1 and indicates the raw file's element type. The TIDL-RT test bench performs a conversion from rawDataInElementType to inElementType when the two are not the same.
Supported values are:
0 : unsigned char
1 : signed char
2 : unsigned short
3 : signed short
6 : float
numFrames Number of input tensors to be processed from the input file
postProcType Post processing applied to the output tensor. 0 - Disabled, 1 - Classification top-1 and top-5 accuracy, 2 - Draw bounding boxes for object detection, 3 - Pixel-level color blending
postProcDataId Output tensor ID on which the post processing needs to be performed. Default: 0
quantRangeExpansionFactor Margin applied to the feature map range. For example, 1.2 adds a 20% margin to the range values.
quantRangeUpdateFactor Rate at which the range values are updated after each process call. For example, 0.1 weights the current process range by 10% and the running range by 90%. This parameter is only applicable for statistics collection during import and is not used during inference.
debugTraceLevel Controls the amount of inference tracing for debugging; supports three levels, 0, 1 and 2, with increasing amounts of trace output. Default: 0
writeTraceLevel Layer-level tensor trace level. 0 - No tensor trace, 1 - Fixed point, 2 - Padded fixed point, 3 - Floating point. Default: 0
writeOutput Write output tensors to file. 1 - All output tensors written to file in integer format, 2 - All output tensors written to file in float format, 0 - Each output tensor is compared against the file pointed to by outData. Default: 1
numItrPerf Number of iterations to be averaged when reporting performance. Default: 1
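As a concrete example of the inFileFormat = 1 case above, a raw binary input file is simply all input tensors of all frames concatenated back to back. The sketch below shows one way to prepare such a file in Python; the tensor shapes, uint8 element type (rawDataInElementType = 0) and file name are illustrative assumptions, not values from this guide.

```python
def write_raw_input(path, frames):
    """Concatenate every input tensor of every frame into one raw binary
    file, as expected when inFileFormat = 1.

    `frames` is a list of frames; each frame is a list of input tensors,
    and each tensor is a flat list of uint8 values (one byte per element).
    """
    with open(path, "wb") as f:
        for frame in frames:
            for tensor in frame:
                f.write(bytes(tensor))  # uint8: each value becomes one byte

# Two frames, each carrying a single (illustrative) 2x2 uint8 input tensor
frames = [[[0, 1, 2, 3]], [[10, 11, 12, 13]]]
write_raw_input("in.bin", frames)
```

The resulting "in.bin" would then be referenced by the inData parameter with inFileFormat = 1.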

It is possible to override a parameter in all the inference configuration files listed in "ti_dl/test/testvecs/config/config_list.txt" by
providing the parameter as an argument to the application. For example, to run all inferences with "writeTraceLevel" = 3,
the application can be run as

PC_dsp_test_dl_algo.out --writeTraceLevel 3

This feature is also available when using a configuration list file other than the default "ti_dl/test/testvecs/config/config_list.txt":

PC_dsp_test_dl_algo.out relative/path/to/some/other/config/list/txt --writeTraceLevel 3

The global override can then be further tuned for a particular inference by editing its entry in the configuration list file.
For example, to run jacintoNet11v2 with "writeTraceLevel" = 1 (after the global override of "writeTraceLevel" = 3), the entry can be:

1 testvecs/config/infer/public/caffe/tidl_infer_jacintonet11v2.txt --writeTraceLevel 1

It is possible to override multiple parameters (both global overrides and test case specific overrides), for example:

PC_dsp_test_dl_algo.out --writeTraceLevel 3 --debugTraceLevel 3
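The override precedence described above (values from the configuration file are overridden by the global command-line argument, which in turn is overridden by a per-entry argument in the configuration list) can be sketched as a simple merge. This is an illustrative reconstruction of the behavior, not the test bench's actual implementation:

```python
def effective_params(config_params, global_overrides, entry_overrides):
    """Merge parameter sources in increasing order of precedence:
    config file < global command-line override < per-entry override."""
    merged = dict(config_params)      # values from the .txt config file
    merged.update(global_overrides)   # e.g. --writeTraceLevel 3 --debugTraceLevel 3
    merged.update(entry_overrides)    # per-entry override in config_list.txt
    return merged

params = effective_params(
    {"writeTraceLevel": 0, "debugTraceLevel": 0},
    {"writeTraceLevel": 3, "debugTraceLevel": 3},
    {"writeTraceLevel": 1},
)
# params: writeTraceLevel = 1 (per-entry wins), debugTraceLevel = 3 (global wins)
```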

Configuration files used for validation of models are provided in the "ti_dl/test/testvecs/config/infer/" folder for reference.

Below is an example configuration file for inference:

inFileFormat = 2
postProcType = 1
numFrames = 1
netBinFile = "../testvecs/config/tidl_models/tidl_net_jacintonet11v2_np2quant.bin"
ioConfigFile = "../testvecs/config/tidl_models/tidl_io_jacintonet11v2_np2quant_1.txt"
inData = "../testvecs/config/classification_list.txt"
outData = "../testvecs/output/airshow_j11.bin"
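The configuration files follow a simple `key = value` layout, with string values quoted. A minimal parser for this format (a hypothetical helper for illustration, not part of the TIDL tools) could look like:

```python
def parse_infer_config(text):
    """Parse a TIDL-style 'key = value' inference configuration.
    Quoted values are unquoted; integer-looking values become ints."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        key, _, value = line.partition("=")
        value = value.strip().strip('"')
        if value.lstrip("-").isdigit():
            value = int(value)
        params[key.strip()] = value
    return params

example = '''
inFileFormat = 2
postProcType = 1
numFrames = 1
netBinFile = "../testvecs/config/tidl_models/tidl_net_jacintonet11v2_np2quant.bin"
'''
cfg = parse_infer_config(example)
# cfg["inFileFormat"] is the int 2; cfg["netBinFile"] is the unquoted path
```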

On successful execution, the output tensor is generated as a raw binary file at the path specified by "outData".

Steps to run Sample Application on EVM

The above inference tests can be executed on the Jacinto7 SoC using the TI_DEVICE_a72_test_dl_algo_host_rt.out binary (distributed with the installation). This section describes the steps required to run the imported models on the target.

H/W requirements

  • TI Jacinto7 EVM
    • The EVM should be programmed to SD-boot mode as described in SDK user guide

Preparing the SD card

  • Run the commands below on your Linux machine to copy the imported models, input files and binaries required by the TIDL application to the target
user@ubuntu-pc$ cd ${PSDKRA_PATH}/vision_apps
user@ubuntu-pc$ make linux_fs_install_sd

Booting up the EVM

Insert the SD card into the EVM, power it on, and wait for Linux to complete booting. Log in as root and run the commands below to execute the TIDL application:

root@j7-evm:~# cd /opt/vision_apps
root@j7-evm:~# source ./vision_apps_init.sh
root@j7-evm:~# cd /opt/tidl_test
root@j7-evm:/opt/tidl_test# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib
root@j7-evm:/opt/tidl_test# ./TI_DEVICE_a72_test_dl_algo_host_rt.out

Decoding the output

Processing config file #0 : testvecs/config/infer/public/caffe/tidl_infer_jSegNet.txt
----------------------- TIDL Process with TARGET DATA FLOW ------------------------
# 0 . .. TSC Mega Cycles = 8.58 ... .... .....
Processing config file #0 : testvecs/config/infer/public/caffe/tidl_infer_pelee.txt
----------------------- TIDL Process with TARGET DATA FLOW ------------------------
# 0 . .. TSC Mega Cycles = 9.77 ... .... .....
Processing config file #0 : testvecs/config/infer/public/tensorflow/tidl_infer_mobileNetv2.txt
----------------------- TIDL Process with TARGET DATA FLOW ------------------------
# 0 . .. TSC Mega Cycles = 6.33 ... A : 896, 1.0000, 1.0000, 896 .... .....

The test output printed on the console shows the number of mega cycles taken by each test case. Assuming the C7x is running at 1 GHz, the time taken per frame and the FPS for each test can be calculated as:

Time-taken-per-frame in milliseconds = (1000 / C7x CPU clock in MHz) x Number of mega cycles
FPS = 1000 / Time-taken-per-frame in milliseconds
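This conversion can be written as a small helper; the sketch below uses the JSegNet21V2 number from the console output as a check (the function name and the default 1 GHz clock are assumptions for illustration):

```python
def perf_from_mega_cycles(mega_cycles, clock_mhz=1000.0):
    """Convert a reported TSC mega-cycle count into per-frame time and FPS.

    Time per frame (ms) = (1000 / clock in MHz) * mega cycles; since the
    time is in milliseconds, FPS = 1000 / time-per-frame.
    clock_mhz defaults to 1000.0, i.e. a C7x running at 1 GHz.
    """
    time_ms = (1000.0 / clock_mhz) * mega_cycles
    fps = 1000.0 / time_ms
    return time_ms, fps

# JSegNet21V2 from the console output above: 8.58 mega cycles at 1 GHz
time_ms, fps = perf_from_mega_cycles(8.58)  # -> 8.58 ms, ~116.55 FPS
```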

For example, from the output above:

Test           Mega cycles count   Time taken per frame (ms)   FPS
JSegNet21V2    8.58                8.58                        116.55
PeleeNet       9.77                9.77                        102.35
MobileNetV2    6.33                6.33                        157.98

Note: The numbers above are representative; to get real numbers for these networks, run the same tests on the EVM.

For image classification tests, the input class and the inferred class are also printed (e.g. in the MobileNetV2 test).

After all the tests are complete, the post-processed images for object detection and semantic segmentation are stored in testvecs/output.

Validating test output

Take the SD card out of the EVM and plug it into a PC. After the SD card is mounted at ${SDCARD_MOUNT_DIR}, check the contents of ${SDCARD_MOUNT_DIR}/opt/tidl_test/testvecs/output.

The post-processed output files should be present at:

  • Object detection output in ${SDCARD_MOUNT_DIR}/opt/tidl_test/testvecs/output/pelee.bin_ti_lindau_000020.bmp_000000_tidl_post_proc2.bmp
  • Semantic segmentation output in ${SDCARD_MOUNT_DIR}/opt/tidl_test/testvecs/output/jsegNet1024x512.bin_ti_lindau_I00000.bmp_000000_tidl_post_proc3.bmp
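Raw output files can also be byte-compared against a reference on the host, similar in spirit to what the test bench does when writeOutput = 0. The sketch below creates a tiny output/reference pair just so it is self-contained; the file names and contents are placeholders, and in practice the two inputs would be the EVM output and the host-emulation output.

```python
def raw_outputs_match(out_path, ref_path):
    """Byte-compare a generated raw output tensor file against a reference."""
    with open(out_path, "rb") as out_f, open(ref_path, "rb") as ref_f:
        return out_f.read() == ref_f.read()

# Placeholder files standing in for the EVM and host-emulation outputs
with open("ref_airshow_j11.bin", "wb") as f:
    f.write(bytes([1, 2, 3, 4]))
with open("out_airshow_j11.bin", "wb") as f:
    f.write(bytes([1, 2, 3, 4]))

match = raw_outputs_match("out_airshow_j11.bin", "ref_airshow_j11.bin")
```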