# Overview
This Quick Start Guide covers how to get started quickly with **Model Composer** to train, optimize, and compile Edge AI models for your supported TI development board.
[[y NOTE
This document is currently under construction.
]]
# Requirements
Hardware:
* Supported TI development board:
* [SK-TDA4VM](https://www.ti.com/tool/SK-TDA4VM)
* [SK-AM62A-LP](https://www.ti.com/tool/SK-AM62A-LP)
* [SK-AM68A](https://www.ti.com/tool/SK-AM68)
* HD (720p) / Full HD (1080p) USB Camera such as the [Logitech C270 HD WEBCAM](https://www.logitech.com/en-us/products/webcams/c270-hd-webcam.960-000694.html) or [Logitech C920 PRO HD WEBCAM](https://www.logitech.com/en-us/products/webcams/c920s-pro-hd-webcam.960-001257.html)
* Ethernet connectivity to the same local network as the computer running **Model Composer** in a web browser
Software:
* Linux SDK for your development board
* [SK-TDA4VM SDK](https://www.ti.com/tool/PROCESSOR-SDK-J721E)
* [SK-AM62A-LP SDK](https://www.ti.com/tool/PROCESSOR-SDK-AM62A)
* [SK-AM68A SDK](https://www.ti.com/tool/PROCESSOR-SDK-AM68A)
* If using an SK-TDA4VM or SK-AM68A target board:
* [CP210x USB to UART Bridge VCP Drivers](https://www.silabs.com/developers/usb-to-uart-bridge-vcp-drivers?tab=downloads). Select the driver applicable to your system
* [Supported web browser](https://dev.ti.com/faq/)
# Environment Setup
## Preparing the SD card image
Download the SDK binary and flash an SD card as explained in the **Preparing SD card image** section of the SDK documentation for your board (a quick download-verification sketch follows these links):
* [SK-TDA4VM - Preparing SD card image](https://software-dl.ti.com/jacinto7/esd//processor-sdk-linux-edgeai/TDA4VM/08_06_00/exports/docs/devices/TDA4VM/linux/getting_started.html#preparing-sd-card-image)
* [SK-AM62A-LP - Preparing SD card image](https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-edgeai/AM62AX/08_06_00/exports/docs/devices/AM62AX/linux/getting_started.html#preparing-sd-card-image)
* [SK-AM68A - Preparing SD card image](https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-edgeai/AM68A/08_06_00/exports/docs/devices/AM68A/linux/getting_started.html#software-setup)
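Before flashing, it is a good idea to verify that the downloaded image is intact. Below is a minimal sketch using Python's `hashlib`; the filename and checksum are placeholders to replace with the values published on the SDK download page:

```python
# Sketch: verify a downloaded SDK image against its published checksum
# before flashing. Filename and checksum below are placeholders.
import hashlib

IMAGE_PATH = "tisdk-sdk-image.wic.xz"                  # placeholder filename
EXPECTED_SHA256 = "<checksum from the download page>"  # placeholder value

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading 1 MiB at a time."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(IMAGE_PATH)
    print("OK" if actual == EXPECTED_SHA256 else f"MISMATCH: {actual}")
```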
## Development board hardware setup
Refer to the **hardware setup** section in the SDK documentation for your development board regarding the board layout and hardware specifics:
* [SK-TDA4VM - Hardware Setup](https://software-dl.ti.com/jacinto7/esd//processor-sdk-linux-edgeai/TDA4VM/08_06_00/exports/docs/devices/TDA4VM/linux/getting_started.html#hardware-setup)
* [SK-AM62A-LP - Hardware Setup](https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-edgeai/AM62AX/08_06_00/exports/docs/devices/AM62AX/linux/getting_started.html#hardware-setup)
* [SK-AM68A - Hardware Setup](https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-edgeai/AM68A/08_06_00/exports/docs/devices/AM68A/linux/getting_started.html#hardware-setup)
1. Insert the prepared SD card into the SD card slot on the development board
2. Connect the development board to the same local area network (via Ethernet or Wi-Fi) as the computer running **Model Composer**
3. Connect the computer to the UART port on the development board using a standard micro-USB cable - this is required to detect the IP address of the development board (a sketch of this detection follows the list below)
4. Connect a USB camera to an available USB Type-A port on the development board
5. Connect a supported USB Type-C power adapter to the USB Type-C power connector on the development board
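For the curious, the IP-address detection that **Model Composer** performs over this UART connection can be reproduced manually. The sketch below assumes the `pyserial` package, a booted board with a logged-in console at the default 115200 8N1 settings, and a placeholder port name:

```python
# Sketch: read the board's IP address over the UART console, roughly what
# Model Composer's "magnifying glass" lookup automates. Assumes pyserial,
# a logged-in console at 115200 8N1, and a placeholder port name.
import re
import serial  # pip install pyserial

PORT = "/dev/ttyUSB2"  # placeholder; a COM port on Windows

with serial.Serial(PORT, baudrate=115200, timeout=2) as console:
    console.write(b"ip -4 addr show dev eth0\n")  # query the Ethernet interface
    output = console.read(4096).decode(errors="replace")

match = re.search(r"inet (\d+\.\d+\.\d+\.\d+)", output)
print(match.group(1) if match else "No IP address found")
```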
## Model Composer Setup
1. Browse to https://dev.ti.com/modelcomposer/ in a recommended web browser
2. Log in using your myTI.com account
3. The **Model Composer** main page should appear:
![Model Composer](./images/mc-qsg-01.png)
4. On the top bar of the GUI, click on **Options → Serial Port Settings**:
![Model Composer](./images/mc-qsg-02.png)
5. If TI Cloud Agent is not installed on your system, a prompt will appear with instructions on how to do so:
![Model Composer](./images/mc-qsg-03.png)
Please follow all the instructions in the prompt, **RELOAD** the page, and reopen the **Serial Port Settings**
6. **Model Composer** should automatically detect the appropriate serial port and baud rate to use. The **Port** and **Baud Rate** settings can be changed. However, it is recommended to use the default detected values:
![Model Composer](./images/mc-qsg-04.png)
Once the serial port settings have been confirmed, press **CANCEL** to exit.
[[y Troubleshooting:
If no ports are detected, please check the USB connection between the computer and the UART port on the development board. If an SK-TDA4VM or SK-AM68A is being used, ensure that the **CP210x USB to UART Bridge VCP Drivers** are properly installed and check that the ports are properly detected by the system (a port-enumeration sketch follows this note). On Windows, it should look like:
![Model Composer](./images/mc-qsg-04a.png)
]]
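As a quick sanity check for the troubleshooting note above, the serial ports visible to the operating system can be enumerated with `pyserial` (an assumption; any serial terminal will also show them):

```python
# Sketch: enumerate the serial ports the OS can see, to confirm the UART
# bridge is detected before troubleshooting further. Assumes pyserial.
from serial.tools import list_ports  # pip install pyserial

for port in list_ports.comports():
    # With the Silicon Labs driver installed, the SK-TDA4VM / SK-AM68A
    # bridge ports typically mention "CP210x" in their description.
    print(f"{port.device}: {port.description}")
```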
# Creating a New Project
Under **Start**, there are three buttons:
* **Example Project**: Create a new project using an existing sample dataset
* **New Project**: Create a new project without using any existing sample dataset
* **Import Project**: Create a new project by importing a previously exported project
## Import Project
To import a project, you must select a previously exported Model Composer project.
To export a project, create a new project or open an existing project from the **Recent** list. Once the project opens, press the **Export Project** button to save the project to an archive file on your local PC.
![Model Composer](./images/mc-qsg-36.png)
# Image Classification
Classify images into known objects
1. Select **New Project** and specify **Image Classification** for the **Task Type** and provide a custom name for the **Project Name**:
![Model Composer](./images/mc-qsg-05.png)
## Image Classification - Capture
2. Once the project is created, **Model Composer** will move to the **Capture** stage. Press the **Input Source** button to specify the source of your images to upload
* **PC Camera**: Connect to the camera of the local PC running **Model Composer** to take pictures of images to import
* **Device Camera**: Connect to the camera of the development board to take pictures of images to import
* **Import Images from Local PC**: Import existing JPG and PNG images from the local PC (a small image-preparation sketch follows this list)
* **Import Annotated Archive dataset**: Import an existing archive of an annotated image dataset
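If you go with the **Import Images from Local PC** option, it can help to normalize image sizes beforehand. A minimal sketch using Pillow (an assumption; any image tool works), with hypothetical folder names:

```python
# Sketch: resize a folder of JPG/PNG images before importing them from the
# local PC. Pillow is assumed for illustration; folder names are hypothetical.
from pathlib import Path
from PIL import Image  # pip install Pillow

SRC = Path("raw_photos")     # hypothetical input folder
DST = Path("import_ready")   # hypothetical output folder
DST.mkdir(exist_ok=True)

for path in SRC.iterdir():
    if path.suffix.lower() in {".jpg", ".jpeg", ".png"}:
        with Image.open(path) as img:
            img.thumbnail((1280, 1280))  # cap the longest side, keep aspect
            img.save(DST / path.name)
```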
This guide will show steps on how to use the USB camera connected to the development board to take pictures of images to import via the **Device Camera** option:
![Model Composer](./images/mc-qsg-06.png)
3. The **Select Device Camera** dialog will appear. Enter the IP Address of the development board in the **Enter IP Address** field. If the IP Address is not known, press the "magnifying glass" button to have **Model Composer** find the IP address via the serial port connected to the development board:
![Model Composer](./images/mc-qsg-07a.png)
Once the IP Address has been detected, it will populate the **Enter IP Address** field with the detected IP Address:
![Model Composer](./images/mc-qsg-07b.png)
Press the **Connect** button to have **Model Composer** connect to the USB camera
Once the connection is completed, the camera will be enabled and **Model Composer** will have a live display of what the camera is seeing:
![Model Composer](./images/mc-qsg-08.png)
4. The capture button can then be used to take a snapshot of what the camera is seeing. When a capture is taken, the image will appear in the panel to the right of the live stream:
![Model Composer](./images/mc-qsg-09.png)
For this guide, a series of photos of various TI LaunchPads and boards were taken, in addition to other unrelated images.
Once all the images are captured, enable the **Select All** button to select all the images to import and then press the **Confirm** button. This will start the import process:
![Model Composer](./images/mc-qsg-10.png)
Once the import is complete, we will see all the imported images in the **Images** list in the panel to the left:
![Model Composer](./images/mc-qsg-11.png)
Now we are ready to move on to the annotate step to classify the imported images.
## Image Classification - Annotate
1. Select **Annotate** in the top part of the page.
Press the "add" button above the **Image Classification** section:
![Model Composer](./images/mc-qsg-12.png)
This will open a dialog where you can create classification labels:
![Model Composer](./images/mc-qsg-13.png)
2. Pressing the "add" button in the dialog above will open an additional dialog where you can enter a classification label name. In the example below, the label "SimpleLink LaunchPad" has been specified:
![Model Composer](./images/mc-qsg-14.png)
3. Pressing the **ADD** button afterwards will create the new label. Repeat the step for additional labels. For this guide, just two labels were created: **SimpleLink LaunchPad** and **Other**
![Model Composer](./images/mc-qsg-15.png)
4. Close the dialog and return to the main **Annotate** page. Note the newly created classification labels appear under **Image Classification** to the right of the page:
![Model Composer](./images/mc-qsg-16a.png)
5. The next step is then to select each image in the **Images** list and select the correct classification. For the example shown in this guide, the goal is to classify images as either a [SimpleLink](https://www.ti.com/wireless-connectivity/overview.html) [LaunchPad](https://www.ti.com/design-resources/embedded-development/hardware-kits-boards.html?keyMatch=LAUNCHPAD) (**SimpleLink LaunchPad**) or not (**Other**). The collection of imported images has a mix of SimpleLink LaunchPads, other (non-SimpleLink) LaunchPads, TI EVMs and SK boards, other miscellaneous electronic hardware, and finally just some random objects.
Select each image in the image list and annotate it with the correct classification label. In the example below, the selected image has been classified as a **SimpleLink LaunchPad**. Note how when an image has been classified, a small blue icon will appear in the right corner of the image thumbnail in the images list, indicating that it has been annotated:
![Model Composer](./images/mc-qsg-17a.png)
6. Once all the images have been annotated, press the "save" button to save the annotated images to an archive file on your local PC:
![Model Composer](./images/mc-qsg-17b.png)
We can move to the next step to select the device and model.
## Image Classification - Model Selection
1. Select **Model Selection** in the top part of the page.
There are several options on this page regarding device and model selection.
2. Under **Device selection**, specify the device for the development board being used. For this getting started example, an SK-TDA4VM is being used. Hence the **Use Selected Device** option can be used with the **Device** specified as **TDA4VM**.
3. Under **Model selection**, specify the desired model. For novice users, it is recommended to use the default model (**regnext_x_800mf**) specified for **Use Recommended**, as this provides the best accuracy:
![Model Composer](./images/mc-qsg-18.png)
More details on both the device and model selected can be seen in the bottom half of the page.
We can now move to the next step to train the model.
## Image Classification - Train
1. Select **Train** in the top part of the page.
Under **Training parameters**, there are several options:
* **Epochs**: A pass over the entire training dataset. The higher the value, the higher the accuracy of the trained model - at the cost of a longer training time.
* **Learning rate**: The step size used by the optimization algorithm at each iteration while moving towards the optimal solution
* **Batch size**: The number of inputs that are propagated through the neural network in one iteration. The higher the value, the higher the accuracy of the trained model - at the cost of a higher memory requirement
* **Weight decay**: A regularization technique that can improve stability and generalization of a machine learning algorithm
For novice users, it is recommended to use the default settings with the exception of the **Epochs** value. A high **Epochs** value can greatly increase the training time. For this getting started example, a value of **10** will be specified. This should provide decent accuracy without requiring too much time.
The **Active Model** and **Active Device** settings should be configured according to the selections made previously.
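For background on how these four training parameters interact, here is a minimal PyTorch-style training loop. PyTorch is assumed purely for illustration; this is not Model Composer's actual training code:

```python
# Sketch: how Epochs, Learning rate, Batch size and Weight decay typically
# map onto a training loop. PyTorch is assumed for illustration only.
import torch
from torch.utils.data import DataLoader

def train(model, dataset, epochs=10, learning_rate=1e-3,
          batch_size=32, weight_decay=1e-4):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    # Weight decay is applied as L2 regularization inside the optimizer.
    optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate,
                                weight_decay=weight_decay)
    loss_fn = torch.nn.CrossEntropyLoss()
    for epoch in range(epochs):        # one epoch = one full dataset pass
        for images, labels in loader:  # one iteration = one batch
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()           # step size set by the learning rate
```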
2. Press the **Start Training** button to start the training:
![Model Composer](./images/mc-qsg-19.png)
During the training process, various status messages will start appearing under the **Training Log** section. A graph will also appear under **Training Performance** with information regarding the accuracy.
When the training is complete, the last message in the log will indicate if it was successful:
![Model Composer](./images/mc-qsg-20.png)
We can now move to the next step of compilation.
## Image Classification - Compile
1. Select **Compile** in the top part of the page.
Under **Compilation parameters**, a preset can be selected to choose a tradeoff between accuracy and speed. The presets automatically apply fixed values for the settings below:
* **Calibration frames**: The number of input frames used for calibration, the process of improving accuracy during fixed-point quantization. The higher the value, the higher the accuracy - at the cost of a longer compile time
* **Calibration iterations**: The number of calibration iterations. The higher the value, the higher the accuracy - at the cost of a longer compile time
* **Tensor bits**: The bit depth used to quantize the weights and activations in the neural network. The neural network inference happens at this bit precision
For novice users, it is recommended to use the **Default Preset** setting, which is a balanced tradeoff between accuracy and speed.
The **Active Model** and **Active Device** settings should be configured according to the selections made previously.
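To give a feel for what these parameters control, the sketch below shows a deliberately simplified min/max range-calibration scheme. TIDL's actual calibration algorithm is more sophisticated and is not reproduced here:

```python
# Sketch: the general idea of range calibration for fixed-point quantization.
# A simplified symmetric min/max scheme for illustration; TIDL's real
# calibration is more sophisticated.
import numpy as np

def calibrate(frames, tensor_bits=8, iterations=2):
    """Estimate a quantization scale from a set of calibration frames."""
    est = 0.0
    for it in range(iterations):  # "Calibration iterations": in a real flow
        # each pass re-runs the network with updated quantization, so the
        # observed activation ranges change between iterations.
        peak = max(float(np.abs(f).max()) for f in frames)
        est = peak if it == 0 else 0.5 * (est + peak)  # refine the estimate
    qmax = 2 ** (tensor_bits - 1) - 1  # e.g. 127 for 8-bit "Tensor bits"
    return est / qmax                  # scale: float units per integer step

# More frames/iterations give a better range estimate (accuracy) at the cost
# of compile time; fewer tensor bits run faster but quantize more coarsely.
frames = [np.random.randn(3, 224, 224) for _ in range(10)]  # stand-in data
print(calibrate(frames))
```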
2. Press the **Start Compiling** button to start the compilation:
![Model Composer](./images/mc-qsg-21.png)
Depending on the selected preset, the compilation can take a bit of time.
During the compilation process, various status messages will start appearing under the **Compile Log** section.
When the compilation is complete, the last message in the log will indicate if it was successful:
![Model Composer](./images/mc-qsg-22.png)
Other information such as **Post Compilation Accuracy** data and **Compiled Model Prediction** for several of the imported images can also be seen.
We can now move to the next step of seeing the image classification in action during a live preview!
## Image Classification - Live preview
1. Select **Live preview** in the top part of the page
2. Press the **Device Setup** button to configure the development board
3. Press the **Start Live preview** button to download the model to the EVM and run the preview:
![Model Composer](./images/mc-qsg-23.png)
The live preview will start and attempt to classify objects according to the classification labels specified earlier. Various status messages will start appearing under the **Live preview log** section.
The below images show examples of the live preview classification for SimpleLink Launchpads as various boards are placed before the camera. The classification result is shown in the yellow text:
![Model Composer](./images/mc-qsg-24a.png)
![Model Composer](./images/mc-qsg-24b.png)
![Model Composer](./images/mc-qsg-24c.png)
![Model Composer](./images/mc-qsg-24d.png)
![Model Composer](./images/mc-qsg-24e.png)
![Model Composer](./images/mc-qsg-24f.png)
4. Press the **Stop Live preview** button to stop the live preview.
We can now move to the next step of deploying the model.
## Image Classification - Deploy
1. Select **Deploy** in the top part of the page.
2. (Optional) Press the **Device Setup** button to connect to the development board.
3. (Optional) Download the model.
There are three download buttons:
* **Download trained model to PC**: Download the trained model to the PC as an archive file
* **Download compiled model artifacts to PC**: Download the compiled model to the PC as an archive file
* **Download compiled model artifacts to development board**: Download the compiled model to the connected development board for running model inference in the SDK (a minimal on-board inference sketch follows the screenshot below). The development board must be connected to **Model Composer** for this option to be available
![Model Composer](./images/mc-qsg-25.png)
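For the third option, once the artifacts are on the board they can be exercised from the SDK's standard runtimes with TIDL offload. The sketch below assumes the TFLite runtime with the TIDL delegate; the delegate library name, option key, and paths should be treated as assumptions and checked against your SDK version:

```python
# Sketch: run a compiled model on the development board via the TFLite
# runtime with TIDL offload. Delegate name, option key and paths are
# assumptions to verify against your SDK version.
import numpy as np
import tflite_runtime.interpreter as tflite

MODEL = "/opt/model/model.tflite"   # hypothetical model path
ARTIFACTS = "/opt/model/artifacts"  # hypothetical compiled-artifacts path

delegate = tflite.load_delegate("libtidl_tfl_delegate.so",
                                {"artifacts_folder": ARTIFACTS})
interpreter = tflite.Interpreter(model_path=MODEL,
                                 experimental_delegates=[delegate])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
out = interpreter.get_output_details()[0]
print(interpreter.get_tensor(out["index"]))         # raw model output
```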
Once the model has been downloaded, all steps are now complete!
## Image Classification - Summary
Return to the main page. There will now be additional details regarding the current project:
![Model Composer](./images/mc-qsg-26.png)
# Object Detection
Identify multiple objects in the image and draw bounding boxes.
1. Select **New Project** and specify **Object Detection** for the **Task Type** and provide a custom name for the **Project Name**.
## Object Detection - Capture
1. Once the project is created, **Model Composer** will move to the **Capture** stage. Press the **Input Source** button, specify **Import Annotated Archive dataset**, and select the annotated archived dataset that was previously downloaded locally. This will import all the images and associated annotations into the project.
## Object Detection - Annotate
The imported dataset will need to be re-annotated for **Object Detection**
1. Select **Annotate** in the top part of the page.
[[y NOTE
There is a known issue where all the images imported from the previously annotated dataset will initially appear in the images list with the blue "annotated" icon, even though they need to be re-annotated for object detection. When such an image is first selected, the blue icon will disappear until the image is correctly re-annotated.
]]
2. Select an image in the images list, select the correct label under **Object Detection** on the right, and then select the **create-box** option in the icon list to the left of the image:
![Model Composer](./images/mc-qsg-28.png)
3. Draw a box around the object in the image. A small popup dialog box will appear with the label name. Press the "check" button to apply the label to the box. Note how an entry will appear under **Regions** to the right of the image (a sketch of a typical bounding-box record follows this list):
![Model Composer](./images/mc-qsg-29.png)
[[y NOTE
If the small popup dialog box is not visible or only partially visible, try zooming the image out using the mouse scroll wheel to scroll down.
]]
4. Repeat this step for all the images in the images list.
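Conceptually, each box you draw becomes a record pairing a label with pixel coordinates. The snippet below shows a common COCO-style representation for illustration only; it is not necessarily the format Model Composer stores internally:

```python
# Sketch: what a single bounding-box annotation conceptually contains.
# COCO-style [x, y, width, height] shown for illustration; not necessarily
# Model Composer's internal format.
annotation = {
    "image": "mc-capture-0001.jpg",   # hypothetical file name
    "label": "SimpleLink LaunchPad",  # the class selected in step 2
    "bbox": [152, 88, 310, 205],      # x, y, width, height in pixels
}
print(annotation)
```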
We can move to the next step to select the device and model.
## Object Detection - Model Selection
1. Select **Model Selection** in the top part of the page.
2. Under **Device selection**, specify the device for the development board being used. For this getting started example, an SK-TDA4VM is being used. Hence the **Use Selected Device** option can be used with the **Device** specified as **TDA4VM**.
3. Under **Model selection**, specify the desired model. For novice users, it is recommended to use the default model (**yolox_tiny_lite**) specified for **Use Recommended**, as this provides the best accuracy.
More details on both the device and model selected can be seen in the bottom half of the page.
We can now move to the next step to train the model.
## Object Detection - Train
1. Select **Train** in the top part of the page.
For novice users, it is recommended to use the default settings with the exception of the **Epochs** value. A high **Epochs** value can greatly increase the training time. For this getting started example, a value of **5** will be specified. This should provide decent accuracy without requiring too much time.
The **Active Model** and **Active Device** settings should be configured according to the selections made previously.
2. Press the **Start Training** button to start the training:
During the training process, various status messages will start appearing under the **Training Log** section. A graph will also appear under **Training Performance** with information regarding the accuracy.
When the training is complete, the last message in the log will indicate if it was successful:
![Model Composer](./images/mc-qsg-30.png)
We can now move to the next step of compilation.
## Object Detection - Compile
1. Select **Compile** in the top part of the page.
The **Default Preset** setting will be selected by default, which is a balanced tradeoff between accuracy and speed. However, the compilation for object detection can be extremely time consuming. This getting started example will use the **Best Speed Preset** instead to reduce the time required to compile.
The **Active Model** and **Active Device** settings should be configured according to the selections made previously.
2. Press the **Start Compiling** button to start the compilation.
Depending on the selected preset, the compilation can take a bit of time.
During the compilation process, various status messages will start appearing under the **Compile Log** section.
When the compilation is complete, the last message in the log will indicate if it was successful:
![Model Composer](./images/mc-qsg-31.png)
Other information such as **Post Compilation Accuracy** data and **Compiled Model Prediction** for several of the imported images can also be seen.
We can now move to the next step of seeing the object detection in action during a live preview!
## Object Detection - Live preview
1. Select **Live preview** in the top part of the page
2. Press the **Device Setup** button to configure the development board
3. Press the **Start Live preview** button to download the model to the EVM and run the preview.
The live preview will start and attempt to detect objects according to the labels specified earlier. Various status messages will start appearing under the **Live preview log** section.
The below images show examples of the live preview object detection for SimpleLink Launchpads as various boards are placed before the camera:
![Model Composer](./images/mc-qsg-32.png)
![Model Composer](./images/mc-qsg-33.png)
![Model Composer](./images/mc-qsg-34.png)
4. Press the **Stop Live preview** button to stop the live preview.
We can now move to the next step of deploying the model.
## Object Detection - Deploy
1. Select **Deploy** in the top part of the page.
2. (Optional) Press the **Device Setup** button to connect to the development board.
3. (Optional) Download the model.
Once the model has been downloaded, all steps are now complete!
## Object Detection - Summary
Return to the main page. There will now be additional details regarding the current project:
![Model Composer](./images/mc-qsg-35.png)
# Known Issues
Please refer to [this link](https://sir.ext.ti.com/jira/issues/?jql=product%20%7E%20%22Edge%20AI%20Studio%22%20AND%20resolution%20is%20EMPTY%20ORDER%20BY%20updated%20ASC) for a dynamic query that lists all issues that are currently open for Edge AI Studio.
# References
* [Edge AI Academy](https://dev.ti.com/tirex/explore/node?node=A__AN7hqv4wA0hzx.vdB9lTEw__EDGEAI-ACADEMY__ZKnFr2N__LATEST)