TI Deep Learning Library User Guide
Execution Details

The compiler is released as part of the TI Deep Learning Library package. It mainly contains:

  • Executable
    • ti_cnnperfsim.out.exe for Windows
    • ti_cnnperfsim.out for Linux
  • A set of configuration files in the config directory

The above-mentioned compiler executable is invoked internally while importing the network model. It is recommended not to run the compiler directly; let it be invoked by the import tool. Executables of the import tool are available as part of the TI Deep Learning Library package at the paths below:

  • tidlModelImport\out\tidl_model_import.out.exe for Windows
  • tidlModelImport\out\tidl_model_import.out for Linux

The command below can be used to invoke the network compiler on a pre-imported model:

ti_cnnperfsim.out.exe <path of configuration file>
Example: ti_cnnperfsim.out.exe config/public/caffe/jacintoNet11v2.cfg

A successful execution of the utility creates the directory specified as OUTPUT_DIR in the configuration file, and that directory contains one *.csv file for each unique configuration. The generated CSV file is named as follows:

networkName_Analysis_DSP&lt;NUMCORES&gt;_FRQ&lt;FREQ&gt;_DATA&lt;datatype&gt;_MSMCS&lt;MSMCSIZE_KB&gt;_DDRE&lt;DDR_EFFICIENCY&gt;_EMIFP&lt;NUMEMIF_PORTS&gt;_DDRMTS&lt;DDRFREQ_MHZ&gt;_&lt;INWIDTH&gt;x&lt;INHEIGHT&gt;

Example: jacintoNet11v2_Analysis_DSP1_FRQ1000_DATA0_MSMCS8192_DDRE0.60_EMIFP1_DDRMTS4266_1024x512

An example result file is shown below:
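As an illustration, the configuration fields encoded in a result-file name can be recovered with a short script. This is a hypothetical helper, not part of the TIDL package; the regex, function name, and field names are assumptions based on the naming format described above.

```python
import re

# Pattern for the result-file name format described above (illustrative only).
NAME_RE = re.compile(
    r"(?P<network>.+)_Analysis"
    r"_DSP(?P<num_cores>\d+)"          # NUMCORES
    r"_FRQ(?P<freq_mhz>\d+)"           # FREQ
    r"_DATA(?P<datatype>\d+)"          # datatype
    r"_MSMCS(?P<msmc_size_kb>\d+)"     # MSMCSIZE_KB
    r"_DDRE(?P<ddr_efficiency>[0-9.]+)"  # DDR_EFFICIENCY
    r"_EMIFP(?P<num_emif_ports>\d+)"   # NUMEMIF_PORTS
    r"_DDRMTS(?P<ddr_freq_mts>\d+)"    # DDRFREQ_MHZ
    r"_(?P<in_width>\d+)x(?P<in_height>\d+)$"  # INWIDTHxINHEIGHT
)

def parse_result_name(stem: str) -> dict:
    """Split a result-file stem (without .csv) into its configuration fields."""
    m = NAME_RE.match(stem)
    if m is None:
        raise ValueError(f"unrecognized result file name: {stem}")
    return m.groupdict()

fields = parse_result_name(
    "jacintoNet11v2_Analysis_DSP1_FRQ1000_DATA0_MSMCS8192"
    "_DDRE0.60_EMIFP1_DDRMTS4266_1024x512"
)
print(fields["network"], fields["num_cores"], fields["ddr_efficiency"])
```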

[Figure: tidl_gc_results_csv.png (Output Results)]

The file contains the layer-wise processing time along with each layer's network parameters. At the end, it provides a summary of the total execution time, DDR bandwidth, and MSMC SRAM (on-chip memory) bandwidth.
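Since the per-layer rows feed the final summary, the total can also be recomputed from the CSV itself. The sketch below is illustrative only: the actual column names in the generated file may differ, so the sample data and the "Processing Time (ms)" header are assumptions.

```python
import csv
import io

# Illustrative sample standing in for a generated result CSV; the real file's
# column names and layer rows will differ.
sample = """Layer,Processing Time (ms)
conv1,0.50
pool1,0.10
conv2,0.40
"""

# Accumulate the layer-wise times to reproduce a total-execution-time summary.
total_ms = 0.0
for row in csv.DictReader(io.StringIO(sample)):
    total_ms += float(row["Processing Time (ms)"])

print(f"total execution time: {total_ms:.2f} ms")  # 0.50 + 0.10 + 0.40
```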