Overview
This Quick Start Guide covers how to get started quickly with Model Composer to train, optimize, and compile Edge AI models for your supported TI development board.
NOTE
This document is currently under construction.
Requirements
Hardware:
- Supported TI development board
- HD (720p) / Full HD (1080p) USB Camera such as the Logitech C270 HD WEBCAM or Logitech C920 PRO HD WEBCAM
- Ethernet connectivity to the same local network as the computer running Model Composer in a web browser
Software:
- Linux SDK for your development board
- If using a SK-TDA4VM or SK-AM68A target board:
- CP210x USB to UART Bridge VCP Drivers. Select the driver applicable to your system
- Supported web browser
Environment Setup
Preparing the SD card image
Download the SDK binary and flash an SD card as explained in the Preparing SD card image section in the SDK documentation (a scripted alternative is sketched after the list below):
- SK-TDA4VM - Preparing SD card image
- SK-AM62A-LP - Preparing SD card image
- SK-AM68A - Preparing SD card image
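The SDK documentation walks through the recommended flashing tools; if you prefer to script this step on Linux, the minimal Python sketch below raw-copies the image to the card. The image filename and device node are placeholders (not from this guide) - double-check the device with lsblk first, as the write is destructive.

```python
# Hypothetical SD card flashing sketch for Linux; run with root privileges.
# IMAGE_PATH and SD_DEVICE are placeholders - verify the device node first!
import shutil

IMAGE_PATH = "tisdk-edgeai-image.img"  # assumed name of the unzipped SDK image
SD_DEVICE = "/dev/sdX"                 # replace with your SD card device node

with open(IMAGE_PATH, "rb") as src, open(SD_DEVICE, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # copy in 4 MiB blocks
```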
Development board hardware setup
Refer to the hardware setup section in the SDK documentation for your development board for the board layout and hardware specifics:
- Insert the prepared SD card into the SD card slot on the development board
- Connect the development board to the same local area network (via Ethernet or Wi-Fi) as the computer where you are running Model Composer
- Connect the computer to the UART port on the development board using a standard micro-USB cable - this is required to detect the IP address of the development board
- Connect a USB camera to an available USB Type-A port on the development board
- Connect a supported USB Type-C power adapter to the USB Type-C power connector on the development board
Model Composer Setup
Browse to https://dev.ti.com/modelcomposer/ in a recommended web browser
Log in using your myTI.com account
The Model Composer main page should appear:
On the top bar of the GUI, click on Options → Serial Port Settings:
If TI Cloud Agent is not installed on your system, a prompt will appear with instructions on how to do so:
Please follow all the instructions in the prompt, RELOAD the page, and reopen the Serial Port Settings
Model Composer should automatically detect the appropriate serial port and baud rate to use. The Port and Baud Rate settings can be changed. However, it is recommended to use the default detected values:
Once the serial port settings have been confirmed, press CANCEL to exit.
Troubleshooting:
If no ports are detected, please check the USB connection between the computer and the UART port on the development board. If an SK-TDA4VM or SK-AM68A is being used, ensure that the CP210x USB to UART Bridge VCP Drivers are properly installed and check that the ports are properly detected by the system. On Windows, it should look like:
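As an additional check outside the browser, the serial ports the operating system sees can be listed with a few lines of Python. This uses the third-party pyserial package (pip install pyserial), not anything provided by Model Composer:

```python
# List all serial ports visible to the OS; the CP210x UART bridge should
# appear here (as COMx on Windows, /dev/ttyUSBx on Linux).
from serial.tools import list_ports

for port in list_ports.comports():
    print(port.device, "-", port.description)
```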
Creating a New Project
Under Start, there are three buttons:
- Example Project: Create a new project using existing sample dataset
- New Project: Create a new project without using any existing sample dataset
- Import Project: Create a new project by importing a previously exported project
Import Project
To import a project, you must select a previously exported Model Composer project.
To export a project, create a new project or open an existing project from the Recent list. Once the project opens, press the "Export Project" button to save the project to an archive file on your local PC.

Image Classification
Classify images into known objects
Select New Project and specify Image Classification for the Task Type and provide a custom name for the Project Name:
Image Classification - Capture
Once the project is created, Model Composer will move to the Capture stage. Press the Input Source button to specify the source of your images to upload:
- PC Camera: Connect to the camera of the local PC running Model Composer to take pictures of images to import
- Device Camera: Connect to the camera of the development board to take pictures of images to import
- Import Images from Local PC: Import existing JPG and PNG images on the local PC
- Import Annotated Archive dataset: Import an existing archive of an annotated image dataset
This guide shows how to use the USB camera connected to the development board to take pictures of images to import via the Device Camera option:
The Select Device Camera dialog will appear. Enter the IP Address of the development board in the Enter IP Address field. If the IP Address is not known, press the "magnifying glass" button to have Model Composer find the IP address via the serial port connected to the development board:
Once the IP Address has been detected, it will populate the Enter IP Address field with the detected IP Address:
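If the automatic detection fails, the IP address can also be found manually over the same UART console. The sketch below mirrors what Model Composer does conceptually: it logs in on the serial console and asks the board for its address. It uses the third-party pyserial package, and the port name, login, and network interface are assumptions to adapt to your setup:

```python
# Hedged sketch: query the board's IP address over the UART console.
# /dev/ttyUSB2 and eth0 are assumptions - adjust for your system; the
# Linux SDK's default console login is root with no password.
import time
import serial

with serial.Serial("/dev/ttyUSB2", 115200, timeout=2) as uart:
    uart.write(b"\n")          # wake the console
    time.sleep(1)
    uart.write(b"root\n")      # log in if a login prompt is showing
    time.sleep(1)
    uart.write(b"ip -4 addr show dev eth0\n")
    time.sleep(1)
    print(uart.read(uart.in_waiting).decode(errors="replace"))
```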
Press the Connect button to have Model Composer connect to the USB camera
Once the connection is completed, the camera will be enabled and Model Composer will have a live display of what the camera is seeing:
The capture button can then be used to take a snapshot of what the camera is seeing. When a capture is taken, the image will appear in the panel to the right of the live stream:
For this guide, a series of photos of various TI LaunchPads and boards were taken, in addition to other unrelated images.
Once all the images are captured, enable the Select All button to select all the images to import and then press the Confirm button. This will start the import process:
Once the import is complete, we will see all the imported images in the Images list in the panel to the left:
Now we are ready to move on to the annotate step to classify the imported images.
Image Classification - Annotate
Select Annotate in the top part of the page.
Press the "add" button above the Image Classification section:
This will open a dialog where you can create classification labels:
Pressing the "add" button in the dialog above will open an additional dialog where you can enter a classification label name. In the example below, the label "SimpleLink LaunchPad" has been specified:
Pressing the ADD button afterwards will create the new label. Repeat the step for additional labels. For this guide, just two labels were created: SimpleLink LaunchPad and Other
Close the dialog and return to the main Annotate page. Note the newly created classification labels appear under Image Classification to the right of the page:
The next step is to select each image in the Images list and choose the correct classification. For the example shown in this guide, the goal is to classify images as either a SimpleLink LaunchPad (SimpleLink LaunchPad) or not (Other). The collection of imported images contains a mix of SimpleLink LaunchPads, other (non-SimpleLink) LaunchPads, TI EVMs and SK boards, other miscellaneous electronic hardware, and some random objects.
Select each image in the image list and annotate it with the correct classification label. In the example below, the selected image has been classified as a SimpleLink LaunchPad. Note that when an image has been classified, a small blue icon will appear in the right corner of the image thumbnail in the images list, indicating that it has been annotated:
Once all the images have been annotated, press the "save" button to save the annotated images to an archive file on your local PC:
We can move to the next step to select the device and model.
Image Classification - Model Selection
Select Model Selection in the top part of the page.
There are several options on this page regarding device and model selection.
Under Device selection, specify the device for the development board being used. For this getting started example, an SK-TDA4VM is being used. Hence the Use Selected Device option can be used with the Device specified as TDA4VM.
Under Model selection, specify the desired model. For novice users, it is recommended to keep the default Model (regnext_x_800mf) specified for Use Recommended, which is selected for the best accuracy:
More details on both the device and model selected can be seen in the bottom half of the page.
We can now move to the next step to train the model.
Image Classification - Train
Select Train in the top part of the page.
Under Training parameters, there are several options:
- Epochs: The number of passes over the entire training dataset. The higher the value, the higher the accuracy of the trained model - at the cost of a longer training time
- Learning rate: The step size used by the optimization algorithm at each iteration while moving towards the optimal solution
- Batch size: The number of inputs that are propagated through the neural network in one iteration. The higher the value, the higher the accuracy of the trained model - at the cost of a higher memory requirement
- Weight decay: A regularization technique that can improve the stability and generalization of a machine learning algorithm (see the sketch after this list for how these parameters fit into a typical training loop)
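These parameters map directly onto a conventional training loop. The following minimal PyTorch-style sketch is purely illustrative (it is not Model Composer's actual training code) and shows where each parameter appears:

```python
# Illustrative training loop showing where Epochs, Learning rate,
# Batch size, and Weight decay each plug in; dummy data, not real training.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

EPOCHS = 10           # passes over the entire training dataset
LEARNING_RATE = 0.01  # optimizer step size per iteration
BATCH_SIZE = 8        # inputs propagated per iteration
WEIGHT_DECAY = 1e-4   # regularization strength

# Dummy dataset: 64 RGB 32x32 "images" with 2 classes
dataset = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 2, (64,)))
loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=LEARNING_RATE,
                            weight_decay=WEIGHT_DECAY)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(EPOCHS):          # one epoch = one pass over the dataset
    for images, labels in loader:    # one iteration = one batch
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: loss {loss.item():.4f}")
```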
For novice users, it is recommended to use the default settings with the exception of the Epochs value. A high Epochs value can greatly increase the training time. For this getting started example, a value of 10 will be specified. This should provide decent accuracy without requiring too much time.
The Active Model and Active Device settings should be configured according to the selections made previously.
Press the Start Training button to start the training:
During the training process, various status messages will start appearing under the Training Log section. A graph will also appear under Training Performance with information regarding the accuracy.
When the training is complete, the last message in the log will indicate if it was successful:
We can now move to the next step of compilation.
Image Classification - Compile
Select Compile in the top part of the page.
Under Compilation parameters, a preset can be chosen to trade off accuracy against speed. The presets automatically apply fixed values for the settings below:
- Calibration frames: The number of frames used to improve accuracy during fixed-point quantization. The higher the value, the higher the accuracy - at the cost of a longer compile time
- Calibration iterations: The number of calibration iterations. The higher the value, the higher the accuracy - at the cost of a longer compile time
- Tensor bits: The bit depth used to quantize the weights and activations in the neural network. Neural network inference happens at this bit precision (see the sketch after this list)
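The same knobs appear in TI's underlying edgeai-tidl-tools flow. The hedged sketch below shows how they might be passed when compiling an ONNX model outside Model Composer; the option names follow the public edgeai-tidl-tools examples, and all paths and values are placeholders:

```python
# Hedged sketch of offline model compilation with the edgeai-tidl-tools
# ONNX Runtime flow; paths and values are placeholders, not from this guide.
import os
import onnxruntime as rt

compile_options = {
    "tidl_tools_path": os.environ.get("TIDL_TOOLS_PATH", "./tidl_tools"),
    "artifacts_folder": "./model-artifacts",
    "tensor_bits": 8,                            # quantization bit depth
    "advanced_options:calibration_frames": 12,   # frames used for calibration
    "advanced_options:calibration_iterations": 5,
}

session = rt.InferenceSession(
    "model.onnx",
    providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
    provider_options=[compile_options, {}],
    sess_options=rt.SessionOptions(),
)
# Running the calibration frames through session.run(...) drives the
# fixed-point calibration and writes the compiled artifacts to disk.
```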
For novice users, it is recommended to use the Default Preset setting, which is a balanced tradeoff between accuracy and speed.
The Active Model and Active Device settings should be configured according to the selections made previously.
Press the Start Compiling button to start the compilation:
Depending on the selected preset, the compilation can take a bit of time.
During the compilation process, various status messages will start appearing under the Compile Log section.
When the compilation is complete, the last message in the log will indicate if it was successful:
Other information such as Post Compilation Accuracy data and Compiled Model Prediction for several of the imported images can also be seen.
We can now move to the next step of seeing the image classification in action during a live preview!
Image Classification - Live preview
Select Live preview in the top part of the page
Press the Device Setup button to configure the development board
Press the Start Live preview button to download the model to the EVM and run the preview:
The live preview will start and attempt to classify objects according to the classification labels specified earlier. Various status messages will start appearing under the Live preview log section.
The below images show examples of the live preview classification for SimpleLink LaunchPads as various boards are placed in front of the camera. The classification result is shown in the yellow text:
Press the Stop Live preview button to stop the live preview.
We can now move to the next step of deploying the model.
Image Classification - Deploy
Select Deploy in the top part of the page.
(Optional) Press the Device Setup button to connect to the development board.
(Optional) Download the model.
There are three download buttons:
- Download trained model to PC: Download the trained model to the PC as an archive file
- Download compiled model artifacts to PC: Download the compiled model to the PC as an archive file
- Download compiled model artifacts to development board: Download the compiled model to the connected development board for running model inference in the SDK (see the sketch after this list). The development board must be connected to Model Composer for this option to be available
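Once the compiled artifacts are on the development board, they can also be exercised directly from Python in the SDK through ONNX Runtime's TIDL execution provider. The sketch below is a hedged illustration: the model and artifact paths are placeholders, and the option names follow the public edgeai-tidl-tools examples:

```python
# Hedged on-target inference sketch using the compiled model artifacts;
# paths are placeholders and should point at the downloaded artifacts.
import numpy as np
import onnxruntime as rt

runtime_options = {"artifacts_folder": "./model-artifacts"}
session = rt.InferenceSession(
    "model.onnx",
    providers=["TIDLExecutionProvider", "CPUExecutionProvider"],
    provider_options=[runtime_options, {}],
)

inp = session.get_inputs()[0]
# Replace the random tensor with a preprocessed camera frame in practice
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)
outputs = session.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```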
Once the model has been downloaded, all steps are now complete!
Image Classification - Summary
Return to the main page. There will now be additional details regarding the current project:

Object Detection
Identify multiple objects in the image and draw bounding boxes.
Select New Project and specify Object Detection for the Task Type and provide a custom name for the Project Name.
Object Detection - Capture
Once the project is created, Model Composer will move to the Capture stage. Press the Input Source button, specify Import Annotated Archive dataset, and select the annotated archived dataset that was previously downloaded locally. This will import all the images and associated annotations to the project.
Object Detection - Annotate
The imported dataset will need to be re-annotated for Object Detection
Select Annotate in the top part of the page.
NOTE
There is a known issue where all the images imported from the previously annotated dataset will initially appear in the images list with the blue "annotated" icon, even though they need to be re-annotated for object detection. For an image that has not yet been re-annotated, the blue icon will disappear when the image is first selected and will only reappear once the image has been correctly re-annotated.
Select an image in the images list, select the correct label under Object Detection on the right, and then select the create-box option in the icon list to the left of the image:
Draw a box around the object in the image. A small popup dialog box will appear with the label name. Press the "check" button to apply the label to the box. Note how an entry will appear under Regions to the right of the image:
NOTE
If the small popup dialog box is not visible or only partially visible, try zooming out the image using the mouse scroll wheel.
Repeat this step for all the images in the images list.
We can move to the next step to select the device and model.
Object Detection - Model Selection
Select Model Selection in the top part of the page.
Under Device selection, specify the device for the development board being used. For this getting started example, an SK-TDA4VM is being used. Hence the Use Selected Device option can be used with the Device specified as TDA4VM.
Under Model selection, specify the desired model. For novice users, it is recommended to keep the default Model (yolox_tiny_lite) specified for Use Recommended, which is selected for the best accuracy.
More details on both the device and model selected can be seen in the bottom half of the page.
We can now move to the next step to train the model.
Object Detection - Train
Select Train in the top part of the page.
For novice users, it is recommended to use the default settings with the exception of the Epochs value. A high Epochs value can greatly increase the training time. For this getting started example, a value of 5 will be specified. This should provide decent accuracy without requiring too much time.
The Active Model and Active Device settings should be configured according to the selections made previously.
Press the Start Training button to start the training:
During the training process, various status messages will start appearing under the Training Log section. A graph will also appear under Training Performance with information regarding the accuracy.
When the training is complete, the last message in the log will indicate if it was successful:
We can now move to the next step of compilation.
Object Detection - Compile
Select Compile in the top part of the page.
The Default Preset setting, which is a balanced tradeoff between accuracy and speed, will be selected by default. However, compilation for object detection can be extremely time consuming, so this getting started example will use the Best Speed preset instead to reduce the time required to compile.
The Active Model and Active Device settings should be configured according to the selections made previously.
Press the Start Compiling button to start the compilation:
Depending on the selected preset, the compilation can take a bit of time.
During the compilation process, various status messages will start appearing under the Compile Log section.
When the compilation is complete, the last message in the log will indicate if it was successful:
Other information such as Post Compilation Accuracy data and Compiled Model Prediction for several of the imported images can also be seen.
We can now move to the next step of seeing the object detection in action during a live preview!
Object Detection - Live preview
Select Live preview in the top part of the page
Press the Device Setup button to configure the development board
Press the Start Live preview button to download the model to the EVM and run the preview.
The live preview will start and attempt to detect objects according to the labels specified earlier. Various status messages will start appearing under the Live preview log section.
The below images show examples of the live preview object detection for SimpleLink LaunchPads as various boards are placed in front of the camera:
Press the Stop Live preview button to stop the live preview.
We can now move to the next step of deploying the model.
Object Detection - Deploy
Select Deploy in the top part of the page.
(Optional) Press the Device Setup button to connect to the development board.
(Optional) Download the model.
Once the model has been downloaded, all steps are now complete!
Object Detection - Summary
Return to the main page. There will now be additional details regarding the current project:

Known Issues
Please refer to this link for a dynamic query that lists all issues that are currently open for Edge AI Studio.