1. Getting Started

This chapter walks you through the end-to-end workflow for deploying a deep learning model on a TI evaluation module (EVM) using TI TVM. By the end you will have a compiled model artifact running inference on the target device.

The workflow consists of four steps:

  1. Install — Set up TI TVM and the required toolchain on your host machine.

  2. Understand your model — Identify input/output names, shapes, and verify expected results before bringing TVM into the picture.

  3. Compile — Use the TI TVM compiler to partition the model between TIDL-accelerated and TVM-generated code, and produce a deployable artifact.

  4. Run inference — Load the compiled artifact on the EVM and run inference using the TVM runtime.

Follow the Quick Start to get something running end-to-end, then see the Compiling Models and Running Inference chapters for complete reference documentation.