1.2. Preparing Your Model

TI TVM compiles models in ONNX format. If your model is in a different framework, export it to ONNX first.

If you do not have a model yet, the TI Edge AI Model Zoo provides models that have been validated and optimized for inference on TI SoCs.

Before compiling with TVM, make sure you know:

  • Input name and shape — required by the TVM compiler (e.g. "input" with shape [1, 3, 224, 224])

  • Expected outputs — needed to verify that inference results on the EVM match those of the original model

1.2.1. Verify the model before compilation

It is strongly recommended to run inference with your model on the host using onnxruntime before compiling with TVM. This confirms the model is well-formed and establishes a reference output to compare against after deployment.

import numpy as np
import onnxruntime as rt

sess = rt.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name
input_shape = sess.get_inputs()[0].shape   # e.g. [1, 3, 224, 224]

# Replace with real pre-processed input data; random data is only a
# smoke test. Note: if the model has dynamic dimensions, input_shape
# will contain strings (e.g. "batch") that must be replaced with
# concrete integers before generating data.
input_data = np.random.rand(*input_shape).astype(np.float32)
output = sess.run(None, {input_name: input_data})
print(output)

Save the output — you will use it to validate inference results after deploying the compiled model on the EVM.
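One way to keep the reference is to store it with `np.save` and later compare the EVM result against it with `np.testing.assert_allclose`. The file name, array shape, and tolerances below are illustrative assumptions; tolerances should be loosened if the deployed model is quantized.

```python
import numpy as np

# Stand-in for the array returned by sess.run on the host
reference = np.random.rand(1, 1000).astype(np.float32)
np.save("reference_output.npy", reference)

# Later, after running the compiled model on the EVM and copying
# its output back to the host:
expected = np.load("reference_output.npy")
actual = reference.copy()  # stand-in for the EVM inference result

# Allow small numeric drift from accelerator arithmetic
np.testing.assert_allclose(actual, expected, rtol=1e-2, atol=1e-3)
print("outputs match")
```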