TI Deep Learning Library User Guide
Pre-trained CNN Models for TIDL

Both import and inference configuration files for all the models listed below are part of the TIDL software release package.

Caffe Models

| Num | Network | Architecture Source | Comments |
| --- | --- | --- | --- |
| 1 | JacintoNet11v2 | Link | |
| 2 | SqueezeNet 1.1 | Link | |
| 3 | ResNet 10 | Link | |
| 4 | MobileNet-1.0 V1 | Link | |
| 5 | ResNet 50 V1 | Proto Link, Model Link | Refer Note 1 |
| 6 | ShuffleNet v1 | Link | |
| 7 | VGGNet 16 | Link | Refer Note 2 |
| 8 | DenseNet121 | Link | |
| 9 | ResNeXt50-32x4d | Link | |
| 10 | JdetNet512x512 | Link | Refer Note 3 |
| 11 | Pelee - Caffe SSD | Link | Refer Note 3 |
| 12 | JSegNet21v2 | Link | |
| 13 | ErfNet | Link | |

Tensorflow Models

| Num | Network | Architecture Source | Comments |
| --- | --- | --- | --- |
| 1 | MobileNet-1.0 V1 | Frozen Graph Link (more models can be found in mobilenet_v1.md) | Optimize the graph for inference. Refer Note 4 |
| 2 | InceptionNet v1 | Checkpoint Link | Generate a frozen graph and optimize it for inference. Refer Note 5 |
| 3 | MobileNet-1.0 V2 | Frozen Graph Link (more models can be found here) | Optimize the graph for inference. Refer Note 4 |
| 4 | ResNet 50 V1-TF | Checkpoint Link | Generate a frozen graph and optimize it for inference. Refer Note 5 |
| 5 | ResNet 50 V2-TF | Checkpoint Link | Generate a frozen graph and optimize it for inference. Refer Note 5 |
| 6 | ssd_mobilenet_v1_0.75 | SSD Link | Generate a frozen graph and optimize it for inference. Refer Note 6 |
| 7 | ssd_mobilenet_v1 1.0 | SSD Link | Generate a frozen graph and optimize it for inference. Refer Note 6 |
| 8 | ssd_mobilenet_v2 | SSD Link | Generate a frozen graph and optimize it for inference. Refer Note 6 |

ONNX Models

| Num | Network | Architecture Source | Comments |
| --- | --- | --- | --- |
| 1 | MobileNet-1.0 V2 | Link | |
| 2 | SqueezeNet 1.1 | Link | |
| 3 | ResNet 18 v1 | Link | |
| 4 | ResNet 18 v2 | Link | |
| 5 | ShuffleNet v1 | Link | |
| 6 | VGG 16 | Link | |
| 7 | Yolo V3 | Link | |
| 8 | ResNet 34 v1 | Link | |
| 9 | RegNetx-200mf | Link | PyTorch model from the source is saved as an ONNX model |
| 10 | RegNetx-400mf | Link | PyTorch model from the source is saved as an ONNX model |
| 11 | RegNetx-800mf | Link | PyTorch model from the source is saved as an ONNX model |

Tensorflow Lite Models

| Num | Network | Architecture Source | Comments |
| --- | --- | --- | --- |
| 1 | MobileNet-1.0 V1 | Link | |
| 2 | MobileNet-1.0 V2 | Link | |
| 3 | InceptionNet v1 | Link | |
| 4 | InceptionNet V3 | Link | |
| 5 | Efficientnet-Lite 0 | Link | |
| 6 | deeplabv3_mnv2 | Link | |
| 7 | deeplabv3_mnv2_dm05 | Link | |
| 8 | mobileNetv1_ssd | Link | |
| 9 | mobileNetv2_ssd | Link | |
| 10 | Efficientnet-Lite 0 | Link | |
| 11 | Efficientnet-Lite 4 | Link | |

Notes

1. Download and convert the "ResNet_mean.binaryproto" file to a simple raw float file (a conversion sketch follows the layer definition below). Then modify the layer below in ResNet-50-deploy.prototxt, replacing kernel_size: 7 with global_pooling: true:

layer {
  bottom: "res5c"
  top: "pool5"
  name: "pool5"
  type: "Pooling"
  pooling_param {
    kernel_size: 7
    stride: 1
    pool: AVE
  }
}
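
A minimal sketch of the mean-file conversion, assuming the Caffe Python bindings (caffe.proto) are installed; the input and output file names are illustrative:

import numpy as np
from caffe.proto import caffe_pb2

# Parse the binaryproto mean blob and dump it as a raw float32 file
blob = caffe_pb2.BlobProto()
with open('ResNet_mean.binaryproto', 'rb') as f:
    blob.ParseFromString(f.read())
mean = np.array(blob.data, dtype=np.float32)  # flat C x H x W values
mean.tofile('resnet_mean.bin')  # illustrative raw float file name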

2.

  • Use the commands below to upgrade the prototxt and model to the current Caffe format:
    • $CAFFE_ROOT/build/tools/upgrade_net_proto_text deploy_old.prototxt deploy.prototxt
    • $CAFFE_ROOT/build/tools/upgrade_net_proto_binary deploy_old.caffemodel deploy.caffemodel

3.

  • SSD-based object detection network trained on the PASCAL VOC data set.
  • Update "confidence_threshold: 0.01" to "confidence_threshold: 0.4" in the deploy prototxt (see the sketch below).
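
  For orientation, a hedged sketch of where this field sits in a Caffe SSD deploy prototxt; the layer name and any omitted sibling fields are illustrative, not taken from the released model files:

    layer {
      name: "detection_out"      # illustrative layer name
      type: "DetectionOutput"
      detection_output_param {
        confidence_threshold: 0.4    # was 0.01; raised to drop low-confidence boxes
      }
    }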

4.

  • Optimize the frozen graph for inference using the TensorFlow tool below:
    python "tensorflow/python/tools/optimize_for_inference.py" --input=mobilenet_v1_1.0_224_frozen.pb --output=mobilenet_v1_1.0_224_final.pb --input_names=input --output_names="MobilenetV1/Predictions/Softmax"

5.

  • Generate a frozen graph from the released checkpoint, then optimize it for inference as described in Note 4 (a sketch follows below).
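
  A minimal sketch of this flow for the slim classification checkpoints, assuming the tensorflow/models repository is checked out; the model name, checkpoint file, and output node name are illustrative and differ per network:

    # Export the inference GraphDef for the chosen slim model
    python tensorflow/models/research/slim/export_inference_graph.py --model_name=resnet_v1_50 --output_file=resnet_v1_50_inf.pb
    # Fold the checkpoint weights into the graph to produce a frozen graph
    python tensorflow/python/tools/freeze_graph.py --input_graph=resnet_v1_50_inf.pb --input_binary=true --input_checkpoint=resnet_v1_50.ckpt --output_graph=resnet_v1_50_frozen.pb --output_node_names=resnet_v1_50/predictions/Reshape_1
    # Then run optimize_for_inference.py on the frozen graph as in Note 4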

6.

  • Refer to the "Some remarks on frozen inference graphs" section in https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md for frozen inference graph migration to the latest TensorFlow version. Then follow Note 4 to optimize the frozen graph.
  • We recommend the TFLite format for models trained in TensorFlow; the steps below can be handled as part of TFLiteConverter (see the sketch after the commands below).
  • Tool versions used for freezing the TF SSD model inference graph:
    • python 3.6.7
    • tensorflow 1.12.0
    • tensorflow/models repo commit id: 62ce5d2a4c39f8e3add4fae70cb0d19d195265c6
    • Checkpoint version used: ssd_mobilenet_v2_coco_2018_03_29
  • After the two steps below, use the default import configuration files available in the release package to import the frozen models into TIDL (a sample config is sketched below).
    • Update "inputNetFile = " in the import config file if the model file path does not match the default path.
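
For orientation, a hedged sketch of such an import configuration file; only "inputNetFile" appears in this guide, and every other parameter name and value here is an illustrative assumption rather than the released config:

# Hypothetical TIDL import config sketch; all keys except inputNetFile are assumptions
modelType        = 1                                  # assumed: selects a TensorFlow model
inputNetFile     = ./tf_1.12.0/frozen_inference_graph_opt_1.pb
outputNetFile    = ./tidl_net_ssd_mobilenet_v2.bin    # illustrative output file names
outputParamsFile = ./tidl_param_ssd_mobilenet_v2.bin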

Comment out or remove the line below in the pipeline.config file if any error is observed during the export_inference_graph step:

batch_norm_trainable: true

The commands used are:

python D:/work/vision/CNN/tensorFlow/models/research/object_detection/export_inference_graph.py --input_type=image_tensor --pipeline_config_path=pipeline.config --trained_checkpoint_prefix=model.ckpt --output_directory=./tf_1.12.0
python "C:/conda/conda/envs/tf1.12.0/Lib/site-packages/tensorflow/python/tools/optimize_for_inference.py" --input=./tf_1.12.0/frozen_inference_graph.pb --output=./tf_1.12.0/frozen_inference_graph_opt_1.pb --input_names=Preprocessor/sub --output_names="concat,concat_1"
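
As noted above, the TFLite route can replace these steps: a minimal sketch assuming the TF 1.x converter API (tf.lite.TFLiteConverter in TF 1.13+; tf.contrib.lite.TFLiteConverter in TF 1.12), reusing the input/output tensor names from the command above; the input shape and output file name are illustrative assumptions:

import tensorflow as tf

# Convert the exported SSD frozen graph directly to TFLite
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='./tf_1.12.0/frozen_inference_graph.pb',
    input_arrays=['Preprocessor/sub'],
    output_arrays=['concat', 'concat_1'],
    input_shapes={'Preprocessor/sub': [1, 300, 300, 3]})  # assumed 300x300 SSD input
tflite_model = converter.convert()
with open('ssd_mobilenet_v2.tflite', 'wb') as f:  # illustrative output name
    f.write(tflite_model)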

  • Performance of the TF Deeplabv3 models: Deeplab v3 models with non-power-of-two scaling are not fully optimized. For example, a resize layer with a 33x33 input feature map and a 513x513 output would be suboptimal. We recommend updating the network to use an up-sampling layer with a power-of-two scale factor (e.g., 2, 4, 8, or 32).