3.8.1. Neo-AI Deep Learning Runtime

Introduction

Neo-AI-DLR is an open source common runtime for deep learning models and decision tree models compiled by TVM, AWS SageMaker Neo, or Treelite. DLR stands for Deep Learning Runtime. The Processor SDK has integrated Neo-AI-DLR; with this integration, users can compile models for Jacinto devices and run inference with the DLR runtime on the target.

Examples

Examples of running inference with Neo-AI-DLR are available in /usr/share/dlr of the target filesystem:

cd /usr/share/dlr/tests/python/integration/
python3 load_and_run_tvm_model.py
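As a rough sketch of what such a script does, the snippet below loads a compiled model with the DLR Python API and runs one inference. The model directory `./compiled_model` and the input tensor name `"input"` are hypothetical placeholders; substitute the artifact produced by your TVM/SageMaker Neo compilation. The `dlr` import is guarded so the sketch degrades gracefully off-target.

```python
import numpy as np

MODEL_DIR = "./compiled_model"  # hypothetical: directory holding the compiled model artifacts
INPUT_NAME = "input"            # hypothetical: the compiled model's input tensor name

# Dummy NCHW batch, e.g. for a 224x224 image classification model.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

try:
    from dlr import DLRModel                # Neo-AI-DLR Python package
    model = DLRModel(MODEL_DIR, "cpu")      # device type, e.g. "cpu" or "gpu"
    outputs = model.run({INPUT_NAME: x})    # returns a list of numpy arrays
    print("top-1 class:", int(np.argmax(outputs[0])))
except ImportError:
    # dlr is only installed in the target filesystem; this keeps the sketch runnable elsewhere.
    print("dlr not installed; run this on the target filesystem")
```

The input is passed as a dict keyed by tensor name, so the same call pattern works for multi-input models.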

Note

The Processor SDK RTOS also implements heterogeneous execution of CNN models on the A72 and C7x-MMA using the TVM runtime and the Neo-AI-DLR runtime. This heterogeneous execution enables:

  • TVM/Neo-AI-DLR as the top level inference API for user applications
  • Offloading subgraphs to C7x/MMA for accelerated execution with TIDL
  • Generating code and running on the ARM A72 core for layers that are not supported by TIDL

Please refer to the section Open Source Runtime->TVM/Neo-AI-DLR + TIDL Heterogeneous Execution in the TIDL user guide (SDK components) for detailed instructions on usage.