1. Getting Started
This chapter walks you through the end-to-end workflow for deploying a deep learning model on a TI evaluation board (EVM) using TI TVM. By the end you will have a compiled model artifact running inference on the target device.
The workflow consists of four steps:
Install — Set up TI TVM and the required toolchain on your host machine.
Understand your model — Identify the model's input/output names and shapes, and verify its expected results before bringing TVM into the picture.
Compile — Use the TI TVM compiler to partition the model between TIDL-accelerated and TVM-generated code, and produce a deployable artifact.
Run inference — Load the compiled artifact on the EVM and run inference using the TVM runtime.
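The compile and run steps above can be sketched with TVM's public Relay APIs. This is a minimal outline, not the TI-specific flow: the model path, input name, shape, `llvm` target, and CPU device below are placeholder assumptions, and the TIDL partitioning options are omitted (see Compiling Models for those).

```python
# Sketch of the compile (host) and run (EVM) steps using TVM's public
# Relay APIs. Paths, names, and the target/device are illustrative
# assumptions; TIDL-specific compiler options are intentionally omitted.

def compile_onnx_model(onnx_path, input_name, input_shape, target="llvm"):
    """Compile an ONNX model to a deployable TVM artifact (host side)."""
    import onnx  # deferred imports: only needed when actually compiling
    import tvm
    from tvm import relay

    model = onnx.load(onnx_path)
    # Relay needs the input shape up front -- this is why step 2
    # (knowing your model's input names/shapes) comes before compiling.
    mod, params = relay.frontend.from_onnx(model, shape={input_name: input_shape})
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)
    lib.export_library("deploy.so")  # the artifact you copy to the EVM
    return "deploy.so"

def run_inference(artifact_path, input_name, input_data):
    """Load a compiled artifact and run one inference (target side)."""
    import tvm
    from tvm.contrib import graph_executor

    dev = tvm.cpu(0)  # assumption: running on the EVM's CPU
    lib = tvm.runtime.load_module(artifact_path)
    module = graph_executor.GraphModule(lib["default"](dev))
    module.set_input(input_name, tvm.nd.array(input_data))
    module.run()
    return module.get_output(0).numpy()
```

On a TI board the target string, device, and artifact layout differ from this generic sketch; Compiling Models and Running Inference document the TI-specific options.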
Follow the Quick Start to get something running end-to-end, then refer to Compiling Models and Running Inference for full reference documentation.