<!-- Start of markdown source -->

# Overview

**CCStudio™ Edge AI Studio** is a fully integrated solution for collecting and annotating data, then training and compiling models for deployment on a live development platform. It hosts a variety of example solutions, complete with demonstration data sets, so that the toolchain can be tested without importing any of your own data. **Edge AI Studio** also supports Bring-Your-Own-Data (BYOD), enabling models from the **TI Model Zoo** to be re-trained with custom data to improve accuracy and performance. It is available as a cloud-based application for vision-based tasks and as both a cloud and a desktop application for real-time control tasks.

**Edge AI Studio for Time Series** implements AI for real-time analysis of time series data. Solutions include arc-fault detection, motor bearing fault detection, and fan blower imbalance fault detection, enabling predictive maintenance with high accuracy on TI microcontrollers.

This Quick Start Guide covers how to get started with **Edge AI Studio for Time Series** to train, optimize, and compile Edge AI models for your supported TI development board.

<iframe width="854" height="480" src="https://www.youtube.com/embed/videoseries?si=HHaz-0uCya4K3bLy&amp;list=PL3NIKJ0FKtw6y6VMJCdpobmOaMCcoOxRN" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

# Requirements

Hardware:

* Supported TI development board

Software:

* CCStudio™ Edge AI Studio for Time Series v1.6.0 or greater
* Supported SDK for your development board
* Supported compiler version
* [Code Composer Studio IDE v20.5.0 or greater](https://www.ti.com/tool/CCSTUDIO)

[[y SDK and compiler dependencies
If the required SDK and compiler versions are not found, **Edge AI Studio** will prompt the user to install the required versions when needed.
]]

# Edge AI Studio for Time Series

The **Edge AI Studio for Time Series** main GUI will appear as below:

![Main GUI](./images/ts-qsg-01.png)

A new project can be created with the **Create a New Project** button. Example projects can also be browsed under the **Example Project Library** tab and imported into the workspace with the **Import Project to Workspace** button. Any recent projects will also appear in the **My Projects** tab. The right side of the GUI has a section that describes various updates, documentation, and resources for further information on **Edge AI Studio**.

# Creating a New Project

Use the **Create a New Project** button near the top of the GUI to launch the **New Project** dialog. In the **New Project** dialog, follow the steps below to create a new project.

1. **Task Selection**: Select the task. There are three options:

   ![Task Selection](./images/ts-qsg-newproj-01.png)

   * **Time Series Classification**: Classify time series data into known categories
   * **Time Series Regression**: Predict a continuous value based on time series data
   * **Time Series Forecasting**: Predict future values based on historical time series data

   Press **Next** to continue.

2. **Sensor Selection**: Select the sensor. Either a **Generic Sensor** or a **Specific Sensor** can be selected.

   ![Sensor Selection](./images/ts-qsg-newproj-02.png)

   Press **Next** to continue.

3. **Data Format Definition**: The options here include the **Choose Data Format Method** selection and a **Format Preview**.

   ![Data Format Definition](./images/ts-qsg-newproj-03.png)

   Press **Next** to continue.

4. **Review**: Review the selections made in the previous steps. If any changes are needed, the **Previous** button can be used to return to the previous steps and make changes.

   ![Review](./images/ts-qsg-newproj-04.png)

   Press **Finish** to create the new project.

Your new project will now be created, opened, and ready to use.
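The three task types selected in step 1 can be contrasted with a small illustrative sketch. The data is synthetic and the "predictions" are toy stand-ins, not Edge AI Studio models — the point is only what each task type produces for an input window of samples:

```python
# Illustrative only: the three time series task types differ in what the
# model predicts for a window of sensor samples. Toy stand-in rules below.
import numpy as np

rng = np.random.default_rng(0)
window = rng.standard_normal(128)        # one window of 128 sensor samples

# Classification: window -> one of N known categories (e.g. fault / normal)
class_label = int(np.std(window) > 1.0)  # toy threshold rule, not a real model

# Regression: window -> a continuous value (e.g. a wear or load estimate)
predicted_value = float(np.mean(np.abs(window)))

# Forecasting: historical window -> the next H future samples
horizon = 16
forecast = np.full(horizon, window[-1])  # naive "repeat last sample" baseline

print(class_label, predicted_value, forecast.shape)
```

In practice the stand-in rules above are replaced by the trained model, but the input/output shapes per task type are the same.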
# Import Example Project

Example projects are also available to import and can be used to explore the various features of Edge AI Studio without needing to capture and annotate your own data. The list of available example projects that can be imported is shown in the **Example Project Library** tab.

![Import](./images/ts-qsg-import-01.png)

The **Project Name**, associated **Task**, and list of **Supported Devices** are shown as columns for each example project. The list can be sorted by any of the columns by clicking on the column header, making it easy to find example projects for a specific task or device. The list can also be filtered using the text search field, or by **Task**, **Device Family**, or **Device** using the dropdown filters.

Select a project that corresponds to the program that has been flashed on the development board being used. Then press the **Import Project to Workspace** button to import the project to the workspace. Once imported, the project will be opened and ready to use.

# Working with Projects

[[b Imported project
The project used for the images in this section is based on the imported example project **PIR Detection Series**, which can be found in the **Example Project Library**. The project is designed for the MSPM0 LaunchPad and uses a PIR sensor to capture data for a simple motion detection use case. The project includes example captured data files and annotations that can be used to explore the various features of Edge AI Studio.
]]

## Getting Started

Once a project has been opened, the main page for the project will appear with the **Getting Started** page open by default:

![Getting Started](./images/ts-qsg-project-01.png)

This page has a description of the various views available in the project, in addition to general and page-specific help information:

* **Project Files** — The left panel shows the data files associated with your project.
Select files to view or annotate them, and use the **IMPORT** button to add new data files for training.
* **Main Tabs** — The center panel contains the main workflow tabs:
  * **Capture** — Connect to a target device and capture live sensor data to use as training data.
  * **Annotation** — Label your captured data files to define the classes your model will learn to recognize.
  * **Training** — Configure model parameters and start a training run using your annotated data.
  * **Live Preview** — Run live inference on a connected device to test a trained and compiled model.
* **Training Runs** — The bottom panel lists all training and compilation runs for this project. Start a new training run with the **+ New Run** button, monitor progress in real time, and compare accuracy across runs. Select a completed run to review its metrics or deploy it to a target device.
* **Properties** — The right panel shows configurable properties for the currently active tab. Settings update automatically as you switch between tabs — configure capture parameters in **Capture**, manage annotation labels in **Annotation**, and set training hyperparameters in **Training**.

Below the view/tab descriptions are the **General** and **Page Specific Help** sections. The **General** section has information on the **MCU Analytics Backend** for **Edge AI Studio**. The **Page Specific Help** section has project-specific details and instructions for the tabs covered in the sections below.

![Getting Started](./images/ts-qsg-project-02.png)

The **Home** button can be used to close the project and return to the main page with the list of projects.
## Capture

[[b Connected target and firmware
The connected target used for the images in this section is an [MSPM0G5187 LaunchPad](https://www.ti.com/tool/LP-MSPM0G5187) with the **TIDA-010997 Edge AI Studio Sensors BoosterPack**, flashed with the appropriate [firmware](https://dev.ti.com/tirex/explore/node?devtools=LP-MSPM0G5187&isTheia=false&node=A__AOW9RYQgDUv5n13kQPY0-g__MSPM0-SDK__a3PaaoK__LATEST) for this project.
]]

The **Capture** tab allows you to connect to a target device and capture live sensor data to use as training data. To capture data, the target device must be flashed with the appropriate firmware for the project being used and then connected to Edge AI Studio. Please refer to the **Page Specific Help** above, specifically the **Quick Start** section for **Capture** in the main **Getting Started** tab, for more details on setting up your hardware (**Setup Your Hardware**).

To connect the target, select the **Device** being used from the dropdown menu and then press **Connect To Device**. This will open another dialog that displays the detected serial ports. Select the appropriate port for the connected device and press **Connect** to connect to the device.

![Connect](./images/ts-qsg-project-capture-01.png)

Another way to connect the device is through **Options > Serial Port Settings ...**:

![Connect](./images/ts-qsg-project-capture-03.png)

This also allows you to select a port to connect to and to manually configure the baud rate:

![Connect](./images/ts-qsg-project-capture-03a.png)

Note that the baud rate must match the baud rate configured in the firmware flashed on the device for a successful connection. The default baud rate for TI example projects is typically 115200, but it can be changed in the firmware if desired. If the connection is successful, the status bar at the bottom left will indicate that the device is connected.
![Connect](./images/ts-qsg-project-capture-02.png)

Once connected, all the options in the **Capture Properties** panel on the right will become available to configure the capture parameters:

![Capture](./images/ts-qsg-project-capture-04a.png)

* **Sensor**: Drop-down list of sensors to capture data from. The available options depend on the firmware flashed on the device and the sensors it has.
* **Sampling Frequency**: The rate at which data samples are captured from the selected sensor.
* **Samples**: The total number of samples to capture in a single capture session. This number may be rounded up depending on the **Sensor** used.
* **Wait Time Between Collections (s)**: The amount of time in seconds to wait between capture sessions.
* **Number of Collections**: The total number of capture sessions to perform. If set to 1, only a single capture session will be performed. If set to greater than 1, multiple capture sessions will be performed with the specified wait time between each session.
* **Filename**: The name of the file to save the captured data to. If multiple capture sessions are performed, a number will be appended to the filename for each session.

Once configured, press the **Start Capture** button to start capturing data from the connected device. The captured data will be saved to a file in the project and will also appear in the **Project Files** panel on the left, under **files**. The captured data can then be selected to view the data or annotated in the **Annotation** tab. Selecting the captured data file will also open the file in the **Annotation** tab by default.

![Capture](./images/ts-qsg-project-capture-04.png)

For more details on the **Capture** tab, please refer to the **Page Specific Help** section on **Capture** in the main **Getting Started** tab.

## Annotation

The **Annotation** tab allows you to label your captured data files to define the classes your model will learn to recognize.
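Conceptually, this labeling builds a mapping from captured data files to class labels, and the set of labels in use defines the classes available to training. A minimal illustrative sketch (hypothetical file and class names, not Edge AI Studio's internal format):

```python
# Illustrative sketch of annotation as a file -> class-label mapping.
# File and class names are hypothetical; Edge AI Studio manages this for you.
annotations = {
    "capture_001.csv": "motion",
    "capture_002.csv": "motion",
    "capture_003.csv": "no_motion",
}

# The set of assigned labels defines the classes the model will learn.
classes = sorted(set(annotations.values()))

# Unclassifying a file removes its label, moving it back to plain "files".
annotations.pop("capture_003.csv")

print(classes)              # ['motion', 'no_motion']
print(sorted(annotations))  # files still carrying a class label
```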
To annotate a data file, select the file from the **Project Files** panel on the left. This will open the file in the center panel and display the data in a visual format.

![Annotate](./images/ts-qsg-project-annotate-01.png)

The **Annotation Properties** panel on the right has the following options:

* **Assign Classes**: Assign a class label to the selected data file using one of the existing options from the dropdown menu. The list of available class labels can be modified using the **Add/Edit/Delete Classes** button. When a class label is assigned to the selected data file, the file will be moved from the **files** section to the **classes** section under the specified class label in the **Project Files** panel on the left. The file can still be selected to view the data, but it will now also have a class label associated with it that can be used for training a model. The **Unclassify Selected File** button can be used to remove the assigned class from the selected file if needed, which will move the file back to the **files** section in the **Project Files** panel.
* **Assign Splits**: Assign a split label to the selected data file using one of the existing options from the dropdown menu. Note that if the data file was assigned a class label using the action above, then the file must be reopened from the **Project Files** panel on the left before applying the split label. The **Manual Split** slider can be used to toggle between **Manual** and **Auto** mode. In **Manual** mode, you can control how classified data files will be used during training. In **Auto** mode, data files will be automatically assigned between training/validation/testing.
* **Data Visualization**: Specify how to visualize the data displayed in the center panel with one of the options below:
  * **Time domain**: Visualize the data as a time series plot.
This is typically used for data that has a time component, such as sensor data captured over time.
  * **Frequency domain**: Visualize the data as a frequency spectrum plot. This is typically used for data that has frequency components, such as audio or vibration data.
  * **STFT spectrogram**: Visualize the data as a spectrogram using the Short-Time Fourier Transform (STFT). This is typically used for data that has both time and frequency components, such as audio or vibration data captured over time.
  * **Mean Frequency domain**: Visualize the data as a mean frequency plot. This is typically used for data that has frequency components, such as audio or vibration data, and can provide a simplified view of the frequency content by averaging over time.

Once all the data files have been annotated with class labels in the **Annotation** tab, the data is ready to be used for training a model in the **Training** tab. Select the **Training** tab to view the training options and start a training run with the annotated data. For more details on the **Annotation** tab, please refer to the **Page Specific Help** section on **Annotation** in the main **Getting Started** tab.

## Training

The **Training** tab allows you to configure model parameters and start a training run using your annotated data. To start a training run, first ensure that your data files have been annotated with class labels in the **Annotation** tab. Then select the **Training** tab to view the training options.

![Training](./images/ts-qsg-project-training-01.png)

The **Training Runs** panel at the bottom lists all training and compilation runs for this project. Start a new training run with the **+ New Run** button. This will open the **New Training Run** dialog.

1. **Device Selection**

   Use the **Device Selection** dropdown menu to select the target device to train for. The **Device Information** section will display details on the selected device and the associated SDK version.
   ![Device Selection](./images/ts-qsg-project-training-new-01.png)

   Press **Next** to continue to the next step.

2. **Model Selection**

   Select the model architecture to use for training under **Model Selection**. If multiple model architectures are available for the selected device, one can be selected using the **Use Selected Model** dropdown menu. A slider can also be used to toggle between the available model architectures, allowing you to choose a trade-off between accuracy and speed. The **Model Information** section will display details on the selected model architecture.

   ![Model Selection](./images/ts-qsg-project-training-new-02.png)

   Press **Next** to continue to the next step.

3. **Parameter Selection**

   Specify the **Training Parameters** and **Compilation Parameters** to use for training the model.

   ![Parameter Selection](./images/ts-qsg-project-training-new-03.png)

   For the **Training Parameters**, specify the **Preprocessing Parameters** and the **Training Parameters**, such as the number of epochs to train for and the learning rate to use for the optimizer. For the **Compilation Parameters**, select one of the options in the **Compilation Preset** dropdown menu to be used for compiling the trained model.

4. **Review**

   Review the selections made in the previous steps. If any changes are needed, the **Previous** button can be used to return to the previous steps and make changes. A custom name can be provided for the training run in the **Run Name** field.

   ![Review](./images/ts-qsg-project-training-new-04.png)

   Press **Finish** to start the training run with the specified parameters.

Once the training run has been started, the training progress can be monitored in real time in the **Training Runs** panel at the bottom of the GUI.
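The roles of the two main training parameters, the number of epochs and the learning rate, can be illustrated with a minimal gradient-descent sketch. This is a generic pure-NumPy toy on a synthetic 1-D fit, not the Edge AI Studio training backend:

```python
# Minimal gradient-descent sketch showing what "epochs" and "learning rate"
# control during training. Synthetic linear fit, not Edge AI Studio code.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
y = 3.0 * x + 0.5 + 0.01 * rng.standard_normal(256)  # true w=3.0, b=0.5

w, b = 0.0, 0.0
epochs, learning_rate = 200, 0.1
for _ in range(epochs):                    # epochs: passes over the data
    err = (w * x + b) - y                  # prediction error per sample
    w -= learning_rate * np.mean(err * x)  # step size set by learning rate
    b -= learning_rate * np.mean(err)

print(round(w, 2), round(b, 2))            # approaches 3.0 and 0.5
```

Too few epochs leaves the fit short of convergence; too large a learning rate can make it diverge — which is why both are exposed as tunable parameters.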
![Training Run](./images/ts-qsg-project-training-run-01.png)

Once the training run is completed, training metrics and details on the trained and compiled model can be viewed. The completed training run will appear in the **Training Runs** panel with a status of **100% Completed**. The **Training Properties** panel on the right will also update to show details on the training run, such as the training parameters used, the training metrics, and the model details.

![Training Run](./images/ts-qsg-project-training-run-02.png)

Additional runs will appear in the **Training Runs** panel as they are started and completed, allowing for easy comparison of training metrics across runs with different parameters or model architectures. Select the desired run from the list to view the details for that run.

![Training Run](./images/ts-qsg-project-training-run-03.png)

The trained and compiled model can then be deployed to a target device and tested in real time in the **Live Preview** tab. For more details on the **Training** tab, please refer to the **Page Specific Help** section on **Training** in the main **Getting Started** tab.

## Live Preview

[[b Different project for live preview
The firmware used for the images in this section differs from the one used for the **Capture** section. The [firmware used here is specific to live preview](https://dev.ti.com/tirex/explore/node?devtools=LP-MSPM0G5187&isTheia=false&node=A__AGs634uOKB40BO-uUMqZPw__MSPM0-SDK__a3PaaoK__LATEST).
]]

The **Live Preview** tab allows you to run live inference on a connected device to test a trained and compiled model. To use **Live Preview**, the target device must be flashed with the appropriate live preview firmware for the project being used and then connected to Edge AI Studio.
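Conceptually, live preview repeatedly fills a sliding window of samples from the sensor stream and runs the compiled model on each full window. The sketch below is an illustrative stand-in with a simulated sensor and a toy classifier — it is not the actual device firmware or Edge AI Studio interface:

```python
# Illustrative windowed-inference loop: fill a window of samples, classify it,
# repeat. Sensor stream and classifier are simulated stand-ins.
from collections import deque

def sensor_stream():
    """Simulated sensor: a quiet stretch followed by a noisy 'event'."""
    for i in range(256):
        yield 0.0 if i < 128 else 1.0

def classify(window):
    """Toy stand-in for the trained model: energy-threshold decision."""
    return "event" if sum(abs(s) for s in window) > 8.0 else "idle"

WINDOW = 64
buf = deque(maxlen=WINDOW)      # sliding window of the most recent samples
results = []
for sample in sensor_stream():
    buf.append(sample)
    if len(buf) == WINDOW:      # run inference once the window is full
        results.append(classify(buf))

print(results[0], results[-1])  # idle event
```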
Please refer to the **Page Specific Help** above, specifically the **Quick Start** section for **Live Preview** in the main **Getting Started** tab, for more details on setting up your hardware (**Setup Your Hardware**) and connection (**Configure Connection**).

Once the hardware setup and connection are complete, select an applicable successful training run from the **Training Runs** panel at the bottom. Then, under the **Live Preview Properties** panel on the right, select the applicable **Sensor** and press the **Start Preview** button to start running live inference on the connected device with the trained and compiled model from the selected training run.

![Live Preview](./images/ts-qsg-project-livep-02.png)

Press **Stop Preview** to stop the live inference.

For more details on the **Live Preview** tab, please refer to the **Page Specific Help** section on **Live Preview** in the main **Getting Started** tab.

# Known Issues

Please refer to [this link](https://sir.ext.ti.com/jira/issues/?jql=product%20%7E%20%22Edge%20AI%20Studio%22%20AND%20resolution%20is%20EMPTY%20ORDER%20BY%20updated%20ASC) for a dynamic query that lists all issues that are currently open for **Edge AI Studio**.

# References

* [Edge AI Studio](https://www.ti.com/tool/EDGE-AI-STUDIO)
* [Edge AI Studio overview videos (YouTube playlist)](https://www.youtube.com/playlist?list=PL3NIKJ0FKtw6y6VMJCdpobmOaMCcoOxRN)

<!-- End of markdown source -->

<div id="footer"></div>