5. Examples and Demos

Applications available by development platform

There are a number of Example Applications provided within the Processor SDK for Linux. Below are the applications available on each platform and the User’s Guides associated with each component.

Note

The example applications below assume that you are using the default pinmux/profile configuration that the board ships with, unless otherwise noted in the individual application's User's Guide.


Applications AM335x EVM AM335x ICE AM335x SK BeagleBone Black AM437x EVM AM437x Starter Kit AM437x IDK AM572x EVM AM572x IDK AM571x IDK 66AK2Hx EVM & K2K EVM K2Ex EVM 66AK2L06 EVM K2G EVM OMAP-L138 LCDK Users Guide Description
Matrix GUI X X X X X X X X X X X X X X X Matrix User’s Guide Provides an overview and details of the graphical user interface (GUI) implementation of the application launcher provided in the Sitara Linux SDK
Power & Clocks X X X X X X X X X X X X X X X Sitara Power Management User Guide Provides details of power management features for all supported platforms.
Multimedia X   X X X X X X X X           Multimedia User’s Guide Provides details on implementing ARM/Neon based multimedia using GStreamer pipelines and FFMPEG open source codecs.
Accelerated Multimedia               X X X X X X X   Multimedia Training Provides details on hardware accelerated (IVAHD/VPE/DSP) multimedia processing using GStreamer pipelines.
Graphics X   X X X X X X X X           Graphics Getting Started Guide Provides details on hardware accelerated 3D graphics demos.
OpenCL               X X X X X X X   OpenCL Examples Provides OpenCL example descriptions. Matrix GUI provides two out of box OpenCL demos: Vector Addition and Floating Point Computation.
Camera         X X X X X X           Camera User's Guide Provides details on how to support a smart camera sensor using the Media Controller Framework.
Video Analytics               X X X           Video Analytics Demo Demonstrates the capability of AM57x for video analytics. It builds on Qt and utilizes various IP blocks on AM57x.
DLP 3D Scanner               X X X           3D Machine Vision Reference Design Demonstrates the capability of AM57x for DLP 3D scanning.
Simple People Tracking X   X X X X X X X X           3D TOF Reference Design Demonstrates the capability of people tracking and detection with TI's ToF (Time-of-Flight) sensor.
Barcode Reader X X X X X X X X X X X X X X   Barcode Reader Demonstrates the capability of detecting and decoding barcodes
USB Profiler X X X X X X X X X X X X X X X NA  
ARM Benchmarks X X X X X X X X X X X X X X X NA  
Display X   X   X X   X X X           NA  
WLAN and Bluetooth X   X   X X   X               WL127x WLAN and Bluetooth Demos Provides details on how to enable the WL1271 daughtercard which is connected to the EVM
QT Demos X   X X X X X X X X           Hands on with QT Provides out of box Qt5.4 demos from Matrix GUI, including Calculator, Web Browser, Deform (shows vector deformation in the shape of a lens), and Animated Tiles.
Web Browser X   X X X X X X X X           NA  
System Settings X X X X X X X X X X X X X X X NA  
EVSE Demo X   X X X X X X X X           HMI for EV charging infrastructure Provides out of box demo to showcase Human Machine Interface (HMI) for Electric Vehicle Supply Equipment (EVSE) Charging Stations.
Protection Relay Demo X   X X                       HMI for Protection Relay Demo Matrix UI provides out of box demo to showcase Human Machine Interface (HMI) for Protection Relays.
Qt5 Thermostat HMI Demo X   X X X X X X X X           Qt5 Thermostat HMI Demo Provides out of box Qt5-based HMI for Thermostat

5.1. Matrix User Guide

Important Note

This guide is for the latest version of Matrix that is included in Processor SDK Linux. If you are looking for information about the old Matrix, it can be found at the following link: **Previous Version of Matrix**.

Supported Platforms

This version of Matrix supports all Sitara devices, as well as K2H/K2K, K2E, and K2L platforms.

Initial Boot Up

When you first boot up a target system which has a display device attached (e.g., AM335x, AM437x, and AM57x platforms), Matrix should start automatically. Matrix can be operated by either touchscreen or mouse; the default startup for most SDK platforms is touchscreen. Should you encounter any problems, the tips in **Matrix Startup Debug** below should help get everything running smoothly.

When you boot up a target system without a display (e.g., K2H/K2K, K2E, and K2L platforms), Matrix is not started automatically during boot, and only Remote Matrix is available for use after boot.

Overview

Matrix is an HTML 5 based application launcher created to highlight available applications and demos provided in new Software Development Kits. There are two forms of Matrix, local and remote Matrix. All of the example applications and demos are available using either the local or remote version. The local version launches by default when the target system is booted and uses the target system’s touchscreen interface for user input. Matrix comes as a 4x3 matrix of icons or as a 4x2 matrix depending on the display resolution.

../_images/Matrix_2_Main_Menu.png

Local and Remote Matrix

Local Matrix

Local Matrix refers to Matrix being displayed on a display device attached to the target system. The launcher for Matrix is a simple Qt application that displays a WebKit-based browser pointed at the URL http://localhost:80.

NOTE: Local Matrix is not available on platforms without display support, including the Keystone-2 platforms (K2H/K2K, K2E, K2L, and K2G).

Remote Matrix

Remote Matrix refers to Matrix being run in any modern web browser not located on the target system.

The URL for Remote Matrix is http://<target system’s ip address>

You can find the target's IP address by using local Matrix and clicking on the Settings icon and then on the Network Settings icon. Alternatively, from a terminal logged in to the target system, enter the following command:

ifconfig

From the output displayed, look in the section that starts with eth0. You should see an IP address right after "inet addr". This is the IP address you should use for remote Matrix.
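If you just want the address, you can filter the output; this is a minimal sketch assuming the classic ifconfig output format shown above ("inet addr:"):

ifconfig eth0 | grep "inet addr"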

With Remote Matrix you can interact with Matrix from your PC, cellphone, tablet, or any device with a modern web browser. You can launch text-based applications or scripts and have the output streamed back to your web browser. Launching a GUI application from Matrix still requires you to look at the display device connected to the target system.

Matrix Project Webpage

The official website for Matrix is located at **gforge.ti.com/gf/project/matrix-gui-v2/**. Any comments or bug reports for Matrix should be posted there.

How to Use the Matrix

Matrix is based on HTML 5 and is designed to be easily customizable. All applications and submenus for Matrix can be found in the target system's /usr/share/matrix-gui-2.0/apps/ directory. Matrix utilizes the .desktop standard along with some additional parameters to make it easy to modify, add, and remove applications and directories.

Matrix Components

Below is a summary of all the Matrix web pages:

  • Contains all the directories or applications that belong to each directory level.

Application Description

  • Optional and associated with a particular application.
  • Provides additional information which can be useful for various reasons.
  • Displayed when the associated application icon is pressed.

Example Application Description Page

Below is an example application description page. Description pages can be used to add additional information that may not be obvious.

../_images/Screenshot-2.png

Coming Soon Page

  • Displayed for Matrix directories that do not contain any applications.

Application/Script Execution Page

  • For console-based applications, displays the output text of the application.

Icons

  • 96x96 PNG image files which are associated with a submenu or an application.
  • Can be re-used by many applications

Applications

  • Any application can be launched by Matrix
  • Local Matrix uses the graphics display layer. If a launched application also uses the graphics display layer there will be a conflict.

Updating Matrix

Matrix 2 utilizes a caching system that caches the information read from the .desktop files and also the HTML that is generated from the various PHP pages. While this provides a substantial performance boost, developers must be aware that any change to the Matrix apps folder (adding, deleting, or modifying files) can cause problems within Matrix. To properly update Matrix with the latest information, Matrix's caches need to be cleared.

Automatically Clearing Matrix Cache

The simplest way to clear Matrix's cache is to use the Refresh Matrix application found within Matrix's Settings submenu. Simply running the application causes Matrix to clear all the cached files and regenerate the .desktop cache file. Once the application has finished running, Matrix will be updated with the latest information found within the apps folder.

Manually Clearing Matrix Cache

The Matrix caching system consists of one file and one directory. Matrix's root directory contains a file called json.txt, a JSON file that holds the information gathered from all the .desktop files located within the apps directory. This file is generated by executing the generate.php file.

Also located in Matrix’s root directory is a folder called cache. This folder contains all of the html files cached from the various dynamic php webpages.

To clear Matrix’s caches you need to perform only two steps:

  1. Execute the generate.php file.

In the terminal of the target system, enter the following command.

php generate.php

or

In a browser, enter the following URL. Note: replace <target ip> with the IP address of the target system.

http://<target ip>:80/generate.php

Viewing generate.php in the browser should display a blank page. There is no visual output to this webpage.

  2. Clear the files located within Matrix's cache folder by entering the following commands.

cd /usr/share/matrix-gui-2.0/cache
rm -r *

Once the above steps are completed, Matrix will be updated.
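If you prefer, both steps can be run as one short sequence on the target; this is a minimal sketch assuming Matrix is installed at the default /usr/share/matrix-gui-2.0 location:

cd /usr/share/matrix-gui-2.0
php generate.php
rm -rf cache/*

After these commands, Matrix will serve pages regenerated from the updated apps folder.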

Launching Matrix

Use the following shell script in the target’s terminal window to run Matrix as a background task:

/etc/init.d/matrix-gui-2.0 start

This script ensures that the touchscreen has been calibrated and that the Qt Window server is running.

Alternatively, Matrix can be launched manually with this full syntax:

matrix_browser  -qws http://localhost:80

The “-qws” parameter is required to start the Qt window server if this is the only/first Qt application running on the system.

The final parameter is the URL that you want the application's web browser to open. http://localhost:80 points to the web server on the target system that is hosting Matrix.

Matrix Startup Debug

The following topics cover debugging Matrix issues at startup or disabling Matrix at startup.

Touchscreen not working

Please see this wiki page to recalibrate the touch screen: **How to Recalibrate the Touchscreen**

Matrix is running but I don’t want it running

  1. Exit Matrix by going to the Settings submenu and running the Exit Matrix application. Note that exiting Matrix only shuts down local Matrix. Remote Matrix can still be used.
  2. Or if the touchscreen is not working, from the console, type:
/etc/init.d/matrix-gui-2.0 stop

I don’t want Matrix to run on boot up

From the console type the following commands:

cd /etc/rc5.d
mv S97matrix-gui-2.0 K97matrix-gui-2.0

This will cause local Matrix to not automatically start on boot up.
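To re-enable automatic startup later, rename the script back; this is a sketch assuming the same /etc/rc5.d location used above:

cd /etc/rc5.d
mv K97matrix-gui-2.0 S97matrix-gui-2.0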

How to Enable Mouse Instead of Touchscreen for the Matrix

You can enable the mouse by referring to the following: **How to Enable Mouse for the Matrix GUI**.

How to Switch Display from LCD to DVI out for the Matrix

You can switch the display output by referring to the following: **How to Switch Display Output for the Matrix GUI**.

Adding a New Application/Directory to Matrix

Below are step by step instructions.

  1. Create a new folder on your target file system at /usr/share/matrix-gui-2.0/apps/. The name should be a somewhat descriptive representation of the application or directory. The folder name must be different from any existing folder at that location.
  2. Create a .desktop file based on the parameters discussed below. It is recommended that the name of the .desktop file match the name of the newly created folder. No white space can be used in the .desktop filename. The .desktop file parameters should be set depending on whether you want to add a new application or a new directory to Matrix. The Type field must be set accordingly. The .desktop file must have the .desktop suffix.
  3. Update the Icon field in the .desktop to reference any existing Icon in the /usr/share/matrix-gui-2.0 directory or subdirectories. You can also add a new 96x96 png image and place it into your newly created folder.
  4. Optionally, for applications, you can add an HTML file that contains the application description into your newly created directory. If you add a description page, then update the X-Matrix-Description field in the .desktop file.
  5. Refresh Matrix using the application “Refresh Matrix” located in the Settings submenu.

Run your new application from Matrix! See reference examples below: **Examples**

Blank template icons for Matrix can be found here: **gforge.ti.com/gf/download/frsrelease/712/5167/blank_icons_1.1.tar.gz**

Creating the .Desktop File

The .desktop file is based on the standard specified at **standards.freedesktop.org/desktop-entry-spec/latest/**. Additional fields were added that are unique to Matrix.

Format for each parameter:

<Field>=<Value>

The fields and values are case sensitive.

Examples

Creating a New Matrix Directory

You can get all the files including the image discussed below from the following file: **Ex_directory.tar.gz**

Create a directory called ex_directory

Create a new file named hello_world_dir.desktop

Fill the contents of the file with the text shown below:

#!/usr/bin/env xdg-open
[Desktop Entry]
Name=Ex Demo
Icon=/usr/share/matrix-gui-2.0/apps/ex_directory/example-icon.png
Type=Directory
X-MATRIX-CategoryTarget=ex_dir
X-MATRIX-DisplayPriority=5

The .desktop file above tells Matrix that it defines a new directory, since Type=Directory. The directory will be named "Ex Demo" and will use the icon located within the ex_directory directory. This new directory will be the 5th icon displayed, as long as no other .desktop file specifies X-MATRIX-DisplayPriority=5, and it will be displayed in the Matrix Main Menu. Any application that should appear inside this directory must have its .desktop Categories parameter set to ex_dir.

  • Note that sometimes Linux will rename the .desktop file to the name specified in the Name field. If this occurs don’t worry about trying to force it to use the file name specified.
  • If you are writing these files in Windows, be sure to use Unix-style EOL characters.

Now move the .desktop file and image into the ex_directory directory that was created.

Moving the Newly Created Directory to the Target's File System

Open the Linux terminal and go to the directory that contains the ex_directory.

Enter the below command to copy ex_directory to the /usr/share/matrix-gui-2.0/apps/ directory located in the target's file system. Depending on the targetNFS directory permissions you might have to include sudo before the cp command.

host $ cp ex_directory ~/ti-processor-sdk-linux-[platformName]-evm-xx.xx.xx.xx/targetNFS/usr/share/matrix-gui-2.0/apps/

If NFS isn’t being used then you need to copy the ex_directory to the the /usr/share/matrix-gui-2.0/apps/ directory in the target’s filesystem.

Updating Matrix

Now, in either local or remote Matrix, go to the Settings directory and click on and run the Refresh Matrix application. This will delete all the cache files that Matrix generates and regenerate all the needed files, which will include any updates that you have made.

Now if you go back to Matrix’s Main Menu the 5th icon should be the icon for your Ex Demo.

Creating a New Application

This example is assuming that you completed the **Creating a New Matrix Directory** example.

You can get all the files including the image discussed below from the following file: **Ex_application.tar.gz**

Create a new directory called ex_application

Create a file named test.desktop

Fill the contents of the file with the below text:

#!/usr/bin/env xdg-open
[Desktop Entry]
Name=Test App
Icon=/usr/share/matrix-gui-2.0/apps/ex_application/example-icon.png
Exec=/usr/share/matrix-gui-2.0/apps/ex_application/test_script.sh
Type=Application
ProgramType=console
Categories=ex_dir
X-Matrix-Description=/usr/share/matrix-gui-2.0/apps/ex_application/app_desc.html
X-Matrix-Lock=test_app_lock

Type=Application lets Matrix know that this .desktop is for an application. The name of the application is "Test App". The icon example-icon.png can be found within the ex_application directory. The command to execute is a shell script located within ex_application. The script being run is a simple shell script that outputs text to the terminal; therefore, the ProgramType should be set to console. This application should be added to the Ex Demo directory from the previous example, so Categories is set to ex_dir, which is the same value that X-MATRIX-CategoryTarget is set to. You could optionally remove the Categories field to have this application displayed in Matrix's Main Menu. This application also has a description page; the HTML file to be used is located within the ex_application directory. A lock is also being used, so any other application (including itself) that has the same lock can't run simultaneously.

Create a file named test_script.sh

echo "You are now running you first newly created application in Matrix"
echo "I am about to go to sleep for 30 seconds so you can test out the lock feature if you want"
sleep 30
echo "I am finally awake!"
The newly created script needs to have its permission set to be executable. Enter the below command to give read, write and execute permission to all users and groups for the script:
host $ chmod 777 test_script.sh

Create a new file called app_desc.html

<h1>Test Application Overview</h1>
<h2>Purpose:</h2>
<p>The purpose of this application is to demonstrate the ease in adding a new application to Matrix.</p>

Now move the .desktop file, script file, the png image located in the Ex_application.tar.gz file and the html file into the ex_application folder.

Moving the Newly Created Directory to the Target System

Open the Linux terminal and go to the directory that contains the ex_application directory.

Enter the below command to copy the ex_application directory to /usr/share/matrix-gui-2.0/apps/ located in the target’s file system. Depending on the targetNFS directory permissions you might have to include sudo before the cp command.

host $ cp ex_application ~/ti-processor-sdk-linux-[platformName]-evm-xx.xx.xx.xx/targetNFS/usr/share/matrix-gui-2.0/apps/

If you're not using NFS but instead are using an SD card, then copy ex_application into the /usr/share/matrix-gui-2.0/apps/ directory in the target's filesystem.

Updating Matrix

Now, in either local or remote Matrix, go to the Settings directory and click on and run the Refresh Matrix application. This will delete all the cache files that Matrix generates and regenerate all the needed files, which will include any updates that you have made.

Now if you go back to Matrix's Main Menu and click on the Ex Demo directory, you should see your newly created application. Click on the application's icon and you will see the application's description page. Click the Run button and your application will execute. If you try to run two instances of this application simultaneously via local and remote Matrix, you will get a message saying that the program can't run because a lock exists. Because X-Matrix-Lock is set to test_app_lock, Matrix knows not to run two instances of programs that share the same lock simultaneously. You can run the application again when the previous run is done.

You have just successfully added a new application to Matrix using all the possible parameters!

5.2. Sub-system Demos

5.2.1. Power Management

Overview

This page is the top level page for support of Power Management topics related to Sitara devices.

Please follow the appropriate link below to find information specific to your device.

Supported Devices

AM335x Power Management User Guide

  • Note: BeagleBone users click here.

5.2.1.1. AM335x Power Management User Guide

Overview

This article provides a description of the example applications under the Power page of the Matrix application that comes with the Sitara SDK. This page is labeled "Power" in the top-level Matrix GUI. The location of the Power icon on the main Matrix app list may be different than shown here, depending on screen size. (Screen shots are from SDK 06.00.)


../_images/Matrix_app_launcher.png

PLEASE NOTE: cpufreq may cause I2C lockups on AM335x EVM boards. Beaglebone is not affected. This is a known issue related to the CPLD firmware. If the CPLD firmware on your EVM is detected to be the wrong version, the Matrix application output will inform you of the version mismatch and continue.
Once updated CPLD firmware is available, this documentation will be updated to teach users how to upgrade their CPLD if necessary/desired. This procedure will require an Altera programming pod.

Power Examples

Several power examples exist to provide users the ability to dynamically switch the CPU clock frequency. The frequencies shown are those available for your system. Upon making a selection, you will be presented a confirmation page. The readout number "BogoMIPS" will confirm the new clock frequency. Please note that the frequency will read out with a slight margin compared to the intended frequency. For example, if you select 1GHz, you may see a number like 998.84 (in MHz). This is normal. After reviewing the confirmation page, press the Close button to return to normal Matrix operation.

Other power examples are provided which may be useful for power management developers and power users. These have been included in Matrix in part to make users aware that these valuable debugging tools exist, in addition to the convenience of executing each application from the GUI. In depth descriptions for each application follow. Similar descriptions are also given via description pages in Matrix, which will be displayed when clicking the button. Where appropriate, the documentation will point out the corresponding command line operation.

The Suspend/Resume button demonstrates the ability to put the machine into a suspended state. See below for complete documentation of this feature.

Please note that the order of applications which appear on your screen may differ from the picture below, due to devices with different screen sizes, and differences between different versions of Matrix. Screen shot is from SDK 06.00.


../_images/AM335x_Power_screen.png

Set Frequency

This command opens up another screen from which you choose the frequency based on the available frequencies on the board. Here is a picture of the screen you will see:


../_images/AM335x_matrix_set_frequency.png

The following are the Linux command line equivalents for selecting the operating frequency. Please note that changing the frequency also changes the MPU voltage accordingly. The commands are part of the "cpufreq" kernel interface for selecting the OPP (operating performance point). Cpufreq provides an opportunity to save power by adjusting/scaling voltage and frequency based on the current CPU load.

 (command line equivalent)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
(view options, select one for next step)
echo <selected frequency, in KHz> > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed
cat /proc/cpuinfo
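As a worked example, the sequence below (a sketch; the exact frequency list depends on your board, and the userspace governor must be selected before scaling_setspeed becomes writable) switches the CPU to 600 MHz and reads back the current frequency:

echo userspace > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
echo 600000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq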

Suspend/Resume

 (command line equivalent)
mkdir /debug
mount -t debugfs debugfs /debug
echo mem > /sys/power/state

This command sequence will put the platform into suspend mode. The final command initiates the suspend.

IMPORTANT NOTE: When running this from Matrix, the system will only properly resume if the user sends a keypress to the UART. If the user presses the touchscreen or a button on the EVM, resume will not complete normally. This issue will be fixed in a future release. If you run these commands from the terminal, all of the normal wakeup events (UART keypress, touchscreen press, EVM keypad press) will operate correctly.
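When experimenting from a terminal, it can be convenient to schedule an automatic wakeup instead of relying on a UART keypress; the command below is a sketch assuming the rtcwake utility and an RTC wakeup source are available in your filesystem. It suspends to mem and resumes after 10 seconds:

rtcwake -m mem -s 10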

SmartReflex

SmartReflex is an active power management technique which optimizes voltage based on silicon process (“hot” vs. “cold” silicon), temperature, and silicon degradation effects. In most cases, SmartReflex provides significant power savings by lowering operating voltage.

On AM335x, SmartReflex is enabled by default in Sitara SDK releases since 05.05.00.00. Please note that the kernel configuration menu presents two options: “AM33XX SmartReflex support” and “SmartReflex support”. For AM33XX SmartReflex, you must select “AM33XX SmartReflex support”, and ensure that the “SmartReflex support” option is disabled. The latter option is intended for AM37x and OMAP3 class devices.

The SmartReflex driver requires the use of either the TPS65217 or TPS65910 PMIC. Furthermore, SmartReflex is currently supported only on the ZCZ package. Please note that SmartReflex may not operate on AM335x sample devices which were not programmed with voltage targets. To disable SmartReflex, type the following commands at the target terminal:

mkdir /debug
mount -t debugfs debugfs /debug
cd /debug/smartreflex   ==> NOTE: You may not see 'smartreflex' node if you have early silicon.  In this case SmartReflex operation is not possible.
echo 0 > autocomp
(Performing "echo 1 > autocomp" will re-enable SmartReflex)

On AM335x, to compile SmartReflex support out of the kernel, follow this procedure to modify the kernel configuration:

cd <kernel source directory>
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- menuconfig
<the menuconfig interface should appear>
Select "System Type"
Select "TI OMAP Common Features"
Deselect "AM33xx SmartReflex Support"
Select "Exit" until you are prompted to save the configuration changes, and save them.
Rebuild the kernel.

Dynamic Frequency Scaling

This feature, which can be enabled via patch to the SDK, enables scaling frequency INDEPENDENT of voltage. It is also referred to as DFS (as in DVFS without the ‘V’).

Media:0001-Introduce-dynamic-frequency-scaling.patch

Discussion

Certain systems are unable to scale voltage, either because they employ a fixed voltage regulator, or because they use the ZCE package of AM335x. Without being able to scale voltage, the power savings enabled via DVFS are lost. This is because the current version of the omap-cpufreq driver requires a valid MPU voltage regulator in order to operate. The purpose of this DFS feature is to enable additional power savings for systems with these sorts of limitations.

When using the ZCE package of AM335x, the CORE and MPU voltage domains are tied together. Due to Advisory 1.0.22, you are not allowed to dynamically modify the CORE frequency/voltage because the EMIF cannot support it. However, to achieve maximum power savings, it may still be desirable to use a PMIC which supports dynamic voltage scaling, in order to use Adaptive Voltage Scaling (aka SmartReflex or AVS). This implementation of DFS does not affect the ability of AVS to optimize the voltage and save additional power.

Using the patch

The patch presented here has been developed for and tested on the SDK 05.07. It modifies the omap-cpufreq driver to operate without requiring a valid MPU voltage regulator. From a user perspective, changing frequency via cpufreq is accomplished with exactly the same commands as typical DVFS. For example, switching to 300 MHz is accomplished with the following command:

echo 300000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed

After applying the patch, the user must modify the kernel defconfig in order to enable the DFS feature. You should also configure the “Maximum supported DFS voltage” (shown below) to whatever the fixed voltage level is for your system, in microvolts. For example, use the value 1100000 to signify 1.1V. The software will use the voltage level that you specify to automatically disable any Operating Performance Points (OPPs) which have voltages above that level.

On AM335x, first apply the patch, then follow this procedure to modify the kernel configuration:

cd <kernel source directory>
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- menuconfig
<the menuconfig interface should appear>
Select "System Type"
Select "TI OMAP Common Features"
Select "Dynamic Frequency Scaling"
Configure "Maximum supported DFS voltage (in microvolts)" (default is 1100000, or 1.1V)
Select "Exit" until you are prompted to save the configuration changes, and save them.
Rebuild the kernel.
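The rebuild step itself can look like the following; this is a sketch assuming the arm-linux-gnueabihf- cross toolchain from the SDK is on your PATH and that your kernel tree produces a uImage (some kernel versions additionally require LOADADDR on the command line):

cd <kernel source directory>
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- uImage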

Power Savings

  • Tested on a rev 1.2 EVM, running Linux at idle.
  • The delta between power consumption at 300MHz and 600MHz, with voltage unchanged, is approximately 75mW.

Static CORE OPP 50

Configuring the AM335x system to CORE OPP50 frequency and voltage is an advanced power savings method that can be used, provided that you understand the tradeoffs involved.

This patch, which was developed against the u-boot source tree from the SDK 05.07, configures the bootloader to statically program the system to CORE OPP50 voltage (0.95V) and frequencies. It also configures the MPU to OPP50 voltage (0.95V) and frequency (300MHz). DDR2 is configured with optimized timings to run at 125MHz.

Apply the following patch to your u-boot source tree and rebuild both MLO and u-boot.img. (Refer to AM335x_U-Boot_User’s_Guide#Building_U-Boot)

Media:0001-Static-CORE-OPP50-w-DDR2-125MHz-MPU-300MHz.patch
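A sketch of applying the patch and rebuilding, assuming the SDK 05.07 u-boot tree and the am335x_evm board configuration (see the U-Boot User's Guide referenced above for the authoritative build steps):

cd <u-boot source directory>
patch -p1 < 0001-Static-CORE-OPP50-w-DDR2-125MHz-MPU-300MHz.patch
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- am335x_evm_config
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf-

Both MLO and u-boot.img are produced in the u-boot build directory.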

Caveats

  • According to section 5.5.1 of the AM335x datasheet, operation of the Ethernet MAC and switch (CPSW) is NOT supported for CORE OPP50.
  • Note that MPU OPP50 operation is not supported for the 1.0 silicon revision (silicon errata Advisory 1.0.15).
  • Also be aware of Advisory 1.0.24, which states that boot may not be reliable because OPP100 frequencies are used by ROM at OPP50 voltages.
  • DDR2 memory must be used (as on the AM335x EVM up to rev 1.2). DDR2 memory timings must be modified to operate at 125MHz.

Power Savings

  • On an EVM (rev 1.2), active power consumption when Linux is idle for CORE and MPU rails was measured at 150mW. Using the out-of-the-box SDK at OPP100 (MPU and CORE), the comparable figure is 334mW.
  • Further savings are possible by disabling Ethernet drivers in the Linux defconfig. Refer to AM335x_CPSW_(Ethernet)_Driver's_Guide#Driver_Configuration and disable "Ethernet driver support" to achieve additional power savings.

Power Management Reference

Refer to this page for Linux specific information on AM335x devices.

The Power Estimation Tool (PET) provides users the ability to gain insight into the power consumption of select Sitara processors.

This document discusses the power consumption for common system application usage scenarios for the AM335x ARM® Cortex™-A8 Microprocessors (MPUs).

Standby for AM335x is an inactive (system suspended) power saving mode in which the power savings achieved are lower than those achieved through DeepSleep0 mode, but with lower resume latency and additional wake-up sources.

5.2.2. ARM Multimedia Users Guide

Overview


Multimedia codecs on ARM-based platforms can be optimized for better performance using the tightly coupled **Neon** co-processor. The Neon architecture works with its own independent pipeline and register file. Neon technology is a 128-bit SIMD architecture extension for ARM Cortex-A series processors. It is designed to provide acceleration for multimedia applications.

Supported Platforms

  • AM37x
  • Beagleboard-xM
  • AM35x
  • AM335x EVM
  • AM437x GP EVM
  • AM57xx GP EVM

Multimedia on AM57xx Processor

On the AM57xx processor, the ARM offloads H.264, VC1, MPEG-4, MPEG-2 and MJPEG codec processing to the IVA-HD hardware accelerator. Please refer to the AM57xx Multimedia Training guide to learn more about AM57xx multimedia capabilities, demos, software stack, GStreamer plugins and pipelines. Also refer to the AM57xx Graphics Display Getting Started Guide to learn about the AM57xx graphics software architecture, demos, tools and display applications.

Multimedia on Cortex-A8

Cortex-A8 Features and Benefits

  • Supports ARMv7 with Advanced SIMD (NEON)
  • Supports hierarchical cache memory
  • Up to 1 MB L2 cache
  • Up to 128-bit memory bandwidth
  • 13-stage pipeline and enhanced branch prediction engine
  • Dual-issue of instructions

Neon Features and Benefits

  • Independent HW block to support advanced SIMD instructions
  • Comprehensive instruction set with support of 8, 16 & 32-bit signed & unsigned data types
  • 256 byte register file (dual 32x64/16x128 view) with hybrid 32/64/128 bit modes
  • Large register files enable efficient data handling and minimize access to memory, thus enhancing data throughput
  • Processor can sleep sooner which leads to an overall dynamic power saving
  • Independent 10-stage pipeline
  • Dual-issue of limited instruction pairs
  • Significant code size reduction

Neon support in the open source community

NEON is currently supported in the following Open Source projects.


  • ffmpeg/libav
    • LGPL media player used in many Linux distros
    • NEON Video: MPEG-4 ASP, H.264 (AVC), VC-1, VP3, Theora
    • NEON Audio: AAC, Vorbis, WMA
  • x264 – Google Summer of Code 2009
    • GPL H.264 encoder, e.g. for video conferencing
  • BlueZ – official Linux Bluetooth protocol stack
    • NEON SBC audio encoder
  • Pixman (part of the cairo 2D graphics library)
    • Compositing/alpha blending
    • X.Org, Mozilla Firefox, Fennec, & WebKit browsers
    • e.g. fbCompositeSolidMask_nx8x0565neon is 8x faster using NEON
  • Ubuntu 09.04 & 09.10 – fully supports NEON
    • NEON versions of critical shared libraries
  • Android – NEON optimizations
    • Skia library, S32A_D565_Opaque is 5x faster using NEON
    • Available in the Google Skia tree from 03-Aug-2009

For additional details, please refer to the **NEON - ARM website**.

SDK Example Applications

These example applications can be executed by selecting the "Multimedia" icon at the top-level Matrix.

NOTE

The very first GStreamer launch takes some time to initialize outputs or set up decoders.


../_images/Main_screen.png

Codec portfolio

Processor SDK includes ARM-based multimedia using open source GPLv2+ FFmpeg/Libav codecs. The codec portfolio includes MPEG-4 and H.264 for video in VGA/WQVGA/480p resolution and AAC for audio. The codec portfolio for Processor SDK on AM57xx devices is listed here.

The script file that launches the multimedia demo detects the enabled display and accordingly decodes VGA or 480p video. On the AM37x platform, a VGA clip is decoded when the LCD is enabled and a 480p clip is decoded when DVI out is enabled. Scripts in the "Settings" menu can be used to switch between these two displays.

MPEG4 + AAC Decode

MPEG-4 + AAC Dec example application demonstrates use of MPEG-4 video and AAC audio codec as mentioned in the description page below.


../_images/Mpeg4aac.png

The multimedia pipeline is constructed using gst-launch. GStreamer elements such as qtdemux are used for demuxing A/V content. Parsers are elements with a single source pad that can be used to cut streams into buffers; they do not otherwise modify the data.

gst-launch-0.10 filesrc location=$filename ! qtdemux name=demux demux.audio_00 ! queue ! ffdec_aac ! alsasink sync=false demux.video_00 ! queue ! ffdec_mpeg4 ! ffmpegcolorspace ! fbdevsink device=/dev/fb0

"filename" is defined based on the selected display device, which could be LCD or DVI.

MPEG4 Decode

MPEG-4 decode example application demonstrates use of MPEG-4 video codec as mentioned in the description page below.


../_images/Mpeg4.png

gst-launch-0.10 filesrc location=$filename ! mpeg4videoparse ! ffdec_mpeg4 ! ffmpegcolorspace ! fbdevsink device=/dev/fb0

H.264 Decode

H.264 decode example application demonstrates use of H.264 video codec as mentioned in the description page below.


../_images/H264.png

gst-launch-0.10 filesrc location=$filename ! h264parse ! ffdec_h264 ! ffmpegcolorspace ! fbdevsink device=/dev/fb0

AAC Decode

AAC decode example application demonstrates use of the AAC audio codec as mentioned in the description page below.


../_images/Aac.png

gst-launch-0.10 filesrc location=$filename ! aacparse ! faad ! alsasink

Streaming

Audio/video data can be streamed from a server using souphttpsrc. For example, to stream audio content, if you set up an Apache server on your host machine you can stream the audio file HistoryOfTI.aac located in the files directory using the following pipeline:

gst-launch souphttpsrc location=http://<ip address>/files/HistoryOfTI.aac ! aacparse ! faad ! alsasink
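Video content can be streamed in a similar way; the pipeline below is a sketch (the clip name and server path are placeholders) that pulls an MP4 file over HTTP and decodes its H.264 video track to the framebuffer:

gst-launch souphttpsrc location=http://<ip address>/files/<clip>.mp4 ! qtdemux ! queue ! ffdec_h264 ! ffmpegcolorspace ! fbdevsink device=/dev/fb0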

Multimedia Peripheral Examples

Examples of how to use several different multimedia peripherals can be found on the ARM Multimedia Peripheral Examples page.


SDK Multimedia Framework

The multimedia framework for the Cortex-A8 SDK leverages the GStreamer multimedia stack with the gst-ffmpeg plug-in to support GPLv2+ FFmpeg/libav library code.

../_images/SDKMMFwk.png

gst-launch is used to build and run basic multimedia pipelines to demonstrate audio/video decoding examples.

../_images/MMFwk.png

GStreamer

  • Multimedia processing library
  • Provides uniform framework across platforms
  • Includes parsing & A/V sync support
  • Modular with flexibility to add new functionality via plugins
  • Easy bindings to other frameworks

Some of the build dependencies for GStreamer are shown here:

../_images/GstBuildDependancies.png

Open Source FFmpeg Codecs

**FFmpeg** is an open source project which provides a cross platform multimedia solution.

  • Free audio and video decoder/encoder code licensed under GPLv2+ (GPLv3 licensed codecs can be built separately)
  • A comprehensive suite of standard compliant multimedia codecs
    • Audio
    • Video
    • Image
    • Speech
  • Codec software package
  • Codec libraries with standard C based API
  • Audio/Video parsers that support popular multimedia content
  • Use of SIMD/NEON instructions (**Cortex-A8 NEON architecture**)
  • Neon provides 1.6x-2.5x performance improvement on complex video codecs

Multimedia Neon Benchmark

Test Parameters:

  • Sep 21 2009 snapshot of gst-ffmpeg.org
  • Real silicon measurements on an OMAP3 Beagleboard
    • Resolution: 480x270
    • Frame Rate: 30fps
    • Audio: 44.1KHz
    • Video Codec: H.264
    • Audio Codec: AAC
  • Benchmarks released by ARM demonstrate an overall performance improvement of ~2x.

../_images/NeonPerf.png

FFmpeg Codecs List

FFmpeg Codec Licensing

FFmpeg libraries include LGPL, GPLv2, GPLv3 and other licensed codecs; enabling GPLv3 codecs subjects the entire framework to the GPLv3 license. In the Sitara SDK, GPLv2+ licensed codecs are enabled. Additional details on the **legal and license** status of these codecs can be found on the FFmpeg/libav webpage.

GPLv2+ codecs list

Codec Description
ffenc_a64multi FFmpeg Multicolor charset for Commodore 64 encoder
ffenc_a64multi5 FFmpeg Multicolor charset for Commodore 64, extended with 5th color (colram) encoder
ffenc_asv1 FFmpeg ASUS V1 encoder
ffenc_asv2 FFmpeg ASUS V2 encoder
ffenc_bmp FFmpeg BMP image encoder
ffenc_dnxhd FFmpeg VC3/DNxHD encoder
ffenc_dvvideo FFmpeg DV (Digital Video) encoder
ffenc_ffv1 FFmpeg FFmpeg video codec #1 encoder
ffenc_ffvhuff FFmpeg Huffyuv FFmpeg variant encoder
ffenc_flashsv FFmpeg Flash Screen Video encoder
ffenc_flv FFmpeg Flash Video (FLV) / Sorenson Spark / Sorenson H.263 encoder
ffenc_h261 FFmpeg H.261 encoder
ffenc_h263 FFmpeg H.263 / H.263-1996 encoder
ffenc_h263p FFmpeg H.263+ / H.263-1998 / H.263 version 2 encoder
ffenc_huffyuv FFmpeg Huffyuv / HuffYUV encoder
ffenc_jpegls FFmpeg JPEG-LS encoder
ffenc_ljpeg FFmpeg Lossless JPEG encoder
ffenc_mjpeg FFmpeg MJPEG (Motion JPEG) encoder
ffenc_mpeg1video FFmpeg MPEG-1 video encoder
ffenc_mpeg4 FFmpeg MPEG-4 part 2 encoder
ffenc_msmpeg4v1 FFmpeg MPEG-4 part 2 Microsoft variant version 1 encoder
ffenc_msmpeg4v2 FFmpeg MPEG-4 part 2 Microsoft variant version 2 encoder
ffenc_msmpeg4 FFmpeg MPEG-4 part 2 Microsoft variant version 3 encoder
ffenc_pam FFmpeg PAM (Portable AnyMap) image encoder
ffenc_pbm FFmpeg PBM (Portable BitMap) image encoder
ffenc_pcx FFmpeg PC Paintbrush PCX image encoder
ffenc_pgm FFmpeg PGM (Portable GrayMap) image encoder
ffenc_pgmyuv FFmpeg PGMYUV (Portable GrayMap YUV) image encoder
ffenc_png FFmpeg PNG image encoder
ffenc_ppm FFmpeg PPM (Portable PixelMap) image encoder
ffenc_qtrle FFmpeg QuickTime Animation (RLE) video encoder
ffenc_roqvideo FFmpeg id RoQ video encoder
ffenc_rv10 FFmpeg RealVideo 1.0 encoder
ffenc_rv20 FFmpeg RealVideo 2.0 encoder
ffenc_sgi FFmpeg SGI image encoder
ffenc_snow FFmpeg Snow encoder
ffenc_svq1 FFmpeg Sorenson Vector Quantizer 1 / Sorenson Video 1 / SVQ1 encoder
ffenc_targa FFmpeg Truevision Targa image encoder
ffenc_tiff FFmpeg TIFF image encoder
ffenc_wmv1 FFmpeg Windows Media Video 7 encoder
ffenc_wmv2 FFmpeg Windows Media Video 8 encoder
ffenc_zmbv FFmpeg Zip Motion Blocks Video encoder
ffenc_aac FFmpeg Advanced Audio Coding encoder
ffenc_ac3 FFmpeg ATSC A/52A (AC-3) encoder
ffenc_alac FFmpeg ALAC (Apple Lossless Audio Codec) encoder
ffenc_mp2 FFmpeg MP2 (MPEG audio layer 2) encoder
ffenc_nellymoser FFmpeg Nellymoser Asao encoder
ffenc_real_144 FFmpeg RealAudio 1.0 (14.4K) encoder
ffenc_sonic FFmpeg Sonic encoder
ffenc_sonicls FFmpeg Sonic lossless encoder
ffenc_wmav1 FFmpeg Windows Media Audio 1 encoder
ffenc_wmav2 FFmpeg Windows Media Audio 2 encoder
ffenc_roq_dpcm FFmpeg id RoQ DPCM encoder
ffenc_adpcm_adx FFmpeg SEGA CRI ADX ADPCM encoder
ffenc_g722 FFmpeg G.722 ADPCM encoder
ffenc_g726 FFmpeg G.726 ADPCM encoder
ffenc_adpcm_ima_qt FFmpeg ADPCM IMA QuickTime encoder
ffenc_adpcm_ima_wav FFmpeg ADPCM IMA WAV encoder
ffenc_adpcm_ms FFmpeg ADPCM Microsoft encoder
ffenc_adpcm_swf FFmpeg ADPCM Shockwave Flash encoder
ffenc_adpcm_yamaha FFmpeg ADPCM Yamaha encoder
ffenc_ass FFmpeg Advanced SubStation Alpha subtitle encoder
ffenc_dvbsub FFmpeg DVB subtitles encoder
ffenc_dvdsub FFmpeg DVD subtitles encoder
ffenc_xsub FFmpeg DivX subtitles (XSUB) encoder
ffdec_aasc FFmpeg Autodesk RLE decoder
ffdec_amv FFmpeg AMV Video decoder
ffdec_anm FFmpeg Deluxe Paint Animation decoder
ffdec_ansi FFmpeg ASCII/ANSI art decoder
ffdec_asv1 FFmpeg ASUS V1 decoder
ffdec_asv2 FFmpeg ASUS V2 decoder
ffdec_aura FFmpeg Auravision AURA decoder
ffdec_aura2 FFmpeg Auravision Aura 2 decoder
ffdec_avs FFmpeg AVS (Audio Video Standard) video decoder
ffdec_bethsoftvid FFmpeg Bethesda VID video decoder
ffdec_bfi FFmpeg Brute Force & Ignorance decoder
ffdec_binkvideo FFmpeg Bink video decoder
ffdec_bmp FFmpeg BMP image decoder
ffdec_c93 FFmpeg Interplay C93 decoder
ffdec_cavs FFmpeg Chinese AVS video (AVS1-P2, JiZhun profile) decoder
ffdec_cdgraphics FFmpeg CD Graphics video decoder
ffdec_cinepak FFmpeg Cinepak decoder
ffdec_cljr FFmpeg Cirrus Logic AccuPak decoder
ffdec_camstudio FFmpeg CamStudio decoder
ffdec_cyuv FFmpeg Creative YUV (CYUV) decoder
ffdec_dnxhd FFmpeg VC3/DNxHD decoder
ffdec_dpx FFmpeg DPX image decoder
ffdec_dsicinvideo FFmpeg Delphine Software International CIN video decoder
ffdec_dvvideo FFmpeg DV (Digital Video) decoder
ffdec_dxa FFmpeg Feeble Files/ScummVM DXA decoder
ffdec_eacmv FFmpeg Electronic Arts CMV video decoder
ffdec_eamad FFmpeg Electronic Arts Madcow Video decoder
ffdec_eatgq FFmpeg Electronic Arts TGQ video decoder
ffdec_eatgv FFmpeg Electronic Arts TGV video decoder
ffdec_eatqi FFmpeg Electronic Arts TQI Video decoder
ffdec_8bps FFmpeg QuickTime 8BPS video decoder
ffdec_8svx_exp FFmpeg 8SVX exponential decoder
ffdec_8svx_fib FFmpeg 8SVX fibonacci decoder
ffdec_escape124 FFmpeg Escape 124 decoder
ffdec_ffv1 FFmpeg FFmpeg video codec #1 decoder
ffdec_ffvhuff FFmpeg Huffyuv FFmpeg variant decoder
ffdec_flashsv FFmpeg Flash Screen Video v1 decoder
ffdec_flic FFmpeg Autodesk Animator Flic video decoder
ffdec_flv FFmpeg Flash Video (FLV) / Sorenson Spark / Sorenson H.263 decoder
ffdec_4xm FFmpeg 4X Movie decoder
ffdec_fraps FFmpeg Fraps decoder
ffdec_FRWU FFmpeg Forward Uncompressed decoder
ffdec_h261 FFmpeg H.261 decoder
ffdec_h263 FFmpeg H.263 / H.263-1996, H.263+ / H.263-1998 / H.263 version 2 decoder
ffdec_h263i FFmpeg Intel H.263 decoder
ffdec_h264 FFmpeg H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 decoder
ffdec_huffyuv FFmpeg Huffyuv / HuffYUV decoder
ffdec_idcinvideo FFmpeg id Quake II CIN video decoder
ffdec_iff_byterun1 FFmpeg IFF ByteRun1 decoder
ffdec_iff_ilbm FFmpeg IFF ILBM decoder
ffdec_indeo2 FFmpeg Intel Indeo 2 decoder
ffdec_indeo3 FFmpeg Intel Indeo 3 decoder
ffdec_indeo5 FFmpeg Intel Indeo Video Interactive 5 decoder
ffdec_interplayvideo FFmpeg Interplay MVE video decoder
ffdec_jpegls FFmpeg JPEG-LS decoder
ffdec_kgv1 FFmpeg Kega Game Video decoder
ffdec_kmvc FFmpeg Karl Morton’s video codec decoder
ffdec_loco FFmpeg LOCO decoder
ffdec_mdec FFmpeg Sony PlayStation MDEC (Motion DECoder) decoder
ffdec_mimic FFmpeg Mimic decoder
ffdec_mjpeg FFmpeg MJPEG (Motion JPEG) decoder
ffdec_mjpegb FFmpeg Apple MJPEG-B decoder
ffdec_mmvideo FFmpeg American Laser Games MM Video decoder
ffdec_motionpixels FFmpeg Motion Pixels video decoder
ffdec_mpeg4 FFmpeg MPEG-4 part 2 decoder
ffdec_mpegvideo FFmpeg MPEG-1 video decoder
ffdec_msmpeg4v1 FFmpeg MPEG-4 part 2 Microsoft variant version 1 decoder
ffdec_msmpeg4v2 FFmpeg MPEG-4 part 2 Microsoft variant version 2 decoder
ffdec_msmpeg4 FFmpeg MPEG-4 part 2 Microsoft variant version 3 decoder
ffdec_msrle FFmpeg Microsoft RLE decoder
ffdec_msvideo1 FFmpeg Microsoft Video 1 decoder
ffdec_mszh FFmpeg LCL (LossLess Codec Library) MSZH decoder
ffdec_nuv FFmpeg NuppelVideo/RTJPEG decoder
ffdec_pam FFmpeg PAM (Portable AnyMap) image decoder
ffdec_pbm FFmpeg PBM (Portable BitMap) image decoder
ffdec_pcx FFmpeg PC Paintbrush PCX image decoder
ffdec_pgm FFmpeg PGM (Portable GrayMap) image decoder
ffdec_pgmyuv FFmpeg PGMYUV (Portable GrayMap YUV) image decoder
ffdec_pictor FFmpeg Pictor/PC Paint decoder
ffdec_png FFmpeg PNG image decoder
ffdec_ppm FFmpeg PPM (Portable PixelMap) image decoder
ffdec_ptx FFmpeg V.Flash PTX image decoder
ffdec_qdraw FFmpeg Apple QuickDraw decoder
ffdec_qpeg FFmpeg Q-team QPEG decoder
ffdec_qtrle FFmpeg QuickTime Animation (RLE) video decoder
ffdec_r10k FFmpeg AJA Kona 10-bit RGB Codec decoder
ffdec_rl2 FFmpeg RL2 video decoder
ffdec_roqvideo FFmpeg id RoQ video decoder
ffdec_rpza FFmpeg QuickTime video (RPZA) decoder
ffdec_rv10 FFmpeg RealVideo 1.0 decoder
ffdec_rv20 FFmpeg RealVideo 2.0 decoder
ffdec_rv30 FFmpeg RealVideo 3.0 decoder
ffdec_rv40 FFmpeg RealVideo 4.0 decoder
ffdec_sgi FFmpeg SGI image decoder
ffdec_smackvid FFmpeg Smacker video decoder
ffdec_smc FFmpeg QuickTime Graphics (SMC) decoder
ffdec_snow FFmpeg Snow decoder
ffdec_sp5x FFmpeg Sunplus JPEG (SP5X) decoder
ffdec_sunrast FFmpeg Sun Rasterfile image decoder
ffdec_svq1 FFmpeg Sorenson Vector Quantizer 1 / Sorenson Video 1 / SVQ1 decoder
ffdec_svq3 FFmpeg Sorenson Vector Quantizer 3 / Sorenson Video 3 / SVQ3 decoder
ffdec_targa FFmpeg Truevision Targa image decoder
ffdec_thp FFmpeg Nintendo Gamecube THP video decoder
ffdec_tiertexseqvideo FFmpeg Tiertex Limited SEQ video decoder
ffdec_tiff FFmpeg TIFF image decoder
ffdec_tmv FFmpeg 8088flex TMV decoder
ffdec_truemotion1 FFmpeg Duck TrueMotion 1.0 decoder
ffdec_truemotion2 FFmpeg Duck TrueMotion 2.0 decoder
ffdec_camtasia FFmpeg TechSmith Screen Capture Codec decoder
ffdec_txd FFmpeg Renderware TXD (TeXture Dictionary) image decoder
ffdec_ultimotion FFmpeg IBM UltiMotion decoder
ffdec_vb FFmpeg Beam Software VB decoder
ffdec_vc1 FFmpeg SMPTE VC-1 decoder
ffdec_vcr1 FFmpeg ATI VCR1 decoder
ffdec_vmdvideo FFmpeg Sierra VMD video decoder
ffdec_vmnc FFmpeg VMware Screen Codec / VMware Video decoder
ffdec_vp3 FFmpeg On2 VP3 decoder
ffdec_vp5 FFmpeg On2 VP5 decoder
ffdec_vp6 FFmpeg On2 VP6 decoder
ffdec_vp6a FFmpeg On2 VP6 (Flash version, with alpha channel) decoder
ffdec_vp6f FFmpeg On2 VP6 (Flash version) decoder
ffdec_vp8 FFmpeg On2 VP8 decoder
ffdec_vqavideo FFmpeg Westwood Studios VQA (Vector Quantized Animation) video decoder
ffdec_wmv1 FFmpeg Windows Media Video 7 decoder
ffdec_wmv2 FFmpeg Windows Media Video 8 decoder
ffdec_wmv3 FFmpeg Windows Media Video 9 decoder
ffdec_wnv1 FFmpeg Winnov WNV1 decoder
ffdec_xan_wc3 FFmpeg Wing Commander III / Xan decoder
ffdec_xl FFmpeg Miro VideoXL decoder
ffdec_yop FFmpeg Psygnosis YOP Video decoder
ffdec_zlib FFmpeg LCL (LossLess Codec Library) ZLIB decoder
ffdec_zmbv FFmpeg Zip Motion Blocks Video decoder
ffdec_aac FFmpeg Advanced Audio Coding decoder
ffdec_aac_latm FFmpeg AAC LATM (Advanced Audio Codec LATM syntax) decoder
ffdec_ac3 FFmpeg ATSC A/52A (AC-3) decoder
ffdec_alac FFmpeg ALAC (Apple Lossless Audio Codec) decoder
ffdec_als FFmpeg MPEG-4 Audio Lossless Coding (ALS) decoder
ffdec_amrnb FFmpeg Adaptive Multi-Rate NarrowBand decoder
ffdec_ape FFmpeg Monkey’s Audio decoder
ffdec_atrac1 FFmpeg Atrac 1 (Adaptive TRansform Acoustic Coding) decoder
ffdec_atrac3 FFmpeg Atrac 3 (Adaptive TRansform Acoustic Coding 3) decoder
ffdec_binkaudio_dct FFmpeg Bink Audio (DCT) decoder
ffdec_binkaudio_rdft FFmpeg Bink Audio (RDFT) decoder
ffdec_cook FFmpeg COOK decoder
ffdec_dca FFmpeg DCA (DTS Coherent Acoustics) decoder
ffdec_dsicinaudio FFmpeg Delphine Software International CIN audio decoder
ffdec_eac3 FFmpeg ATSC A/52B (AC-3, E-AC-3) decoder
ffdec_flac FFmpeg FLAC (Free Lossless Audio Codec) decoder
ffdec_gsm FFmpeg GSM decoder
ffdec_gsm_ms FFmpeg GSM Microsoft variant decoder
ffdec_imc FFmpeg IMC (Intel Music Coder) decoder
ffdec_mace3 FFmpeg MACE (Macintosh Audio Compression/Expansion) 3
ffdec_mace6 FFmpeg MACE (Macintosh Audio Compression/Expansion) 6
ffdec_mlp FFmpeg MLP (Meridian Lossless Packing) decoder
ffdec_mp1float FFmpeg MP1 (MPEG audio layer 1) decoder
ffdec_mp2float FFmpeg MP2 (MPEG audio layer 2) decoder
ffdec_mpc7 FFmpeg Musepack SV7 decoder
ffdec_mpc8 FFmpeg Musepack SV8 decoder
ffdec_nellymoser FFmpeg Nellymoser Asao decoder
ffdec_qcelp FFmpeg QCELP / PureVoice decoder
ffdec_qdm2 FFmpeg QDesign Music Codec 2 decoder
ffdec_real_144 FFmpeg RealAudio 1.0 (14.4K) decoder
ffdec_real_288 FFmpeg RealAudio 2.0 (28.8K) decoder
ffdec_shorten FFmpeg Shorten decoder
ffdec_sipr FFmpeg RealAudio SIPR / ACELP.NET decoder
ffdec_smackaud FFmpeg Smacker audio decoder
ffdec_sonic FFmpeg Sonic decoder
ffdec_truehd FFmpeg TrueHD decoder
ffdec_truespeech FFmpeg DSP Group TrueSpeech decoder
ffdec_tta FFmpeg True Audio (TTA) decoder
ffdec_twinvq FFmpeg VQF TwinVQ decoder
ffdec_vmdaudio FFmpeg Sierra VMD audio decoder
ffdec_wmapro FFmpeg Windows Media Audio 9 Professional decoder
ffdec_wmav1 FFmpeg Windows Media Audio 1 decoder
ffdec_wmav2 FFmpeg Windows Media Audio 2 decoder
ffdec_wmavoice FFmpeg Windows Media Audio Voice decoder
ffdec_ws_snd1 FFmpeg Westwood Audio (SND1) decoder
ffdec_pcm_lxf FFmpeg PCM signed 20-bit little-endian planar decoder
ffdec_interplay_dpcm FFmpeg DPCM Interplay decoder
ffdec_roq_dpcm FFmpeg DPCM id RoQ decoder
ffdec_sol_dpcm FFmpeg DPCM Sol decoder
ffdec_xan_dpcm FFmpeg DPCM Xan decoder
ffdec_adpcm_4xm FFmpeg ADPCM 4X Movie decoder
ffdec_adpcm_adx FFmpeg SEGA CRI ADX ADPCM decoder
ffdec_adpcm_ct FFmpeg ADPCM Creative Technology decoder
ffdec_adpcm_ea FFmpeg ADPCM Electronic Arts decoder
ffdec_adpcm_ea_maxis_xa FFmpeg ADPCM Electronic Arts Maxis CDROM XA decoder
ffdec_adpcm_ea_r1 FFmpeg ADPCM Electronic Arts R1 decoder
ffdec_adpcm_ea_r2 FFmpeg ADPCM Electronic Arts R2 decoder
ffdec_adpcm_ea_r3 FFmpeg ADPCM Electronic Arts R3 decoder
ffdec_adpcm_ea_xas FFmpeg ADPCM Electronic Arts XAS decoder
ffdec_g722 FFmpeg G.722 ADPCM decoder
ffdec_g726 FFmpeg G.726 ADPCM decoder
ffdec_adpcm_ima_amv FFmpeg ADPCM IMA AMV decoder
ffdec_adpcm_ima_dk3 FFmpeg ADPCM IMA Duck DK3 decoder
ffdec_adpcm_ima_dk4 FFmpeg ADPCM IMA Duck DK4 decoder
ffdec_adpcm_ima_ea_eacs FFmpeg ADPCM IMA Electronic Arts EACS decoder
ffdec_adpcm_ima_ea_sead FFmpeg ADPCM IMA Electronic Arts SEAD decoder
ffdec_adpcm_ima_iss FFmpeg ADPCM IMA Funcom ISS decoder
ffdec_adpcm_ima_qt FFmpeg ADPCM IMA QuickTime decoder
ffdec_adpcm_ima_smjpeg FFmpeg ADPCM IMA Loki SDL MJPEG decoder
ffdec_adpcm_ima_wav FFmpeg ADPCM IMA WAV decoder
ffdec_adpcm_ima_ws FFmpeg ADPCM IMA Westwood decoder
ffdec_adpcm_ms FFmpeg ADPCM Microsoft decoder
ffdec_adpcm_sbpro_2 FFmpeg ADPCM Sound Blaster Pro 2-bit decoder
ffdec_adpcm_sbpro_3 FFmpeg ADPCM Sound Blaster Pro 2.6-bit decoder
ffdec_adpcm_sbpro_4 FFmpeg ADPCM Sound Blaster Pro 4-bit decoder
ffdec_adpcm_swf FFmpeg ADPCM Shockwave Flash decoder
ffdec_adpcm_thp FFmpeg ADPCM Nintendo Gamecube THP decoder
ffdec_adpcm_xa FFmpeg ADPCM CDROM XA decoder
ffdec_adpcm_yamaha FFmpeg ADPCM Yamaha decoder
ffdec_ass FFmpeg Advanced SubStation Alpha subtitle decoder
ffdec_dvbsub FFmpeg DVB subtitles decoder
ffdec_dvdsub FFmpeg DVD subtitles decoder
ffdec_pgssub FFmpeg HDMV Presentation Graphic Stream subtitles decoder
ffdec_xsub FFmpeg XSUB decoder

Third Party Solutions

Third parties like Ittiam and VisualON provide highly optimized ARM-only codecs on Linux, WinCE and Android OS.

Software Components & Dependencies

The following lists some of the software components and dependencies associated with the Sitara SDK.

Dependencies: required packages to build GStreamer on Ubuntu:

sudo apt-get install automake autoconf libtool docbook-xml docbook-xsl fop libxml2 gnome-doc-utils

  • build-essential
  • libtool
  • automake
  • autoconf
  • git-core
  • svn
  • liboil0.3-dev
  • libxml2-dev
  • libglib2.0-dev
  • gettext
  • corkscrew
  • socket
  • libfaad-dev
  • libfaac-dev

Software components for Sitara SDK Release:

  • glib
  • gstreamer
  • liboil
  • gst-plugins-good
  • gst-ffmpeg
  • gst-plugins-bad
  • gst-plugins-base

Re-enabling Mp3 and Mpeg2 decode in the Processor SDK

Starting with version 05.05.01.00, MP3 and MPEG-2 codecs are no longer distributed as part of the SDK. These plugins can be re-enabled by the end user by rebuilding the gst-plugins-ugly package. The following instructions have been tested with gst-plugins-ugly-0.10.19, which can be found at **gstreamer.freedesktop.org**. Note that these instructions will work for any of the GStreamer plugin packages found in the SDK. A consolidated command sequence is sketched after the steps below.

  • Source environment-setup at the terminal
  • Navigate into the example-applications path under the SDK install directory
  • Extract the GStreamer plug-in source archive
  • Navigate into the folder that was created
  • On the command line type ./configure --host=arm-arago-linux-gnueabi --prefix=/usr
  • Notice that some components are not built because they have dependencies that are not part of our SDK
  • Run make to build the plugins.
  • Run make install DESTDIR=<PATH TO TARGET ROOT>
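Put together, the sequence can look like the following; this is a sketch assuming the gst-plugins-ugly-0.10.19 tarball has already been downloaded into example-applications, and that <SDK INSTALL DIR> and <PATH TO TARGET ROOT> are filled in for your setup:

cd <SDK INSTALL DIR>
source linux-devkit/environment-setup   # adjust the path if environment-setup lives elsewhere in your SDK
cd example-applications
tar -xzf gst-plugins-ugly-0.10.19.tar.gz
cd gst-plugins-ugly-0.10.19
./configure --host=arm-arago-linux-gnueabi --prefix=/usr
make
make install DESTDIR=<PATH TO TARGET ROOT>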

5.2.3. Accelerated Multimedia

Refer to the various GStreamer pipelines documented in the Multimedia chapter.

5.2.4. Graphics and Display

Refer to the various SGX 3D demos and other graphics applications in the Graphics & Display chapter.

5.2.6. Camera Users Guide

Introduction

This user's guide provides an overview of the camera loopback example integrated in the Matrix application on the AM37x platform. This application can be executed by selecting the "Camera" icon at the top-level Matrix.


../_images/Main_screen.png

Supported Platforms

  • AM37x

Camera Loopback Example

The camera example application demonstrates streaming of YUV 4:2:2 parallel data from a Leopard Imaging LI-3M02 daughter card with an MT9T111 sensor on the AM37x EVM. Here, driver-allocated buffers are used to capture data from the sensor and display it to any active display device (LCD/DVI/TV) in VGA resolution (640x480).

Since the LCD is at inverted VGA resolution, the video buffer is rotated by 90 degrees. Rotation is enabled using the hardware rotation engine called the Virtual Rotated Frame Buffer (VRFB). VRFB provides four virtual frame buffers: 0, 90, 180 and 270 degrees. For statically compiled drivers, the VRFB buffer can be declared in the bootloader. In this example, data is displayed through the video1 pipeline and the "omap_vout.vid1_static_vrfb_alloc=y" boot argument is included in the bootargs.
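A sketch of adding the option from the U-Boot prompt is shown below; the existing boot arguments are board- and setup-specific and are shown only as a placeholder:

setenv bootargs '<your existing bootargs> omap_vout.vid1_static_vrfb_alloc=y'
saveenv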

Sensor Daughter card

Smart sensor modules implement control algorithms such as AE, AWB and AF within the module. Third parties like Leopard Imaging and E-Con Systems provide smart sensor solutions for the OMAP35x/AM37x Rev G EVM and Beagle Board. SDK 5.02 integrates support for the LI-3M02 daughter card, which has an MT9T111 sensor; the sensor is configured in 8-bit YUV 4:2:2 mode. The Leopard Imaging LI-3M02 daughter card can be ordered from their website.

For additional support on daughter cards or sensor details, please contact the respective third parties.

Contact: support@leopardimaging.com

Contact: Harishankkar

Connectivity

OMAP35xx/AM/DM37xx Signal Description Comment IO Type
cam_hs Camera Horizontal Sync   IO
cam_vs Camera Vertical Sync   IO
cam_xclka Camera Clock Output a Optional O
cam_xclkb Camera Clock Output b Optional O
cam_[d0:12] Camera/Video Digital Image Data   I
cam_fld Camera Field ID Optional IO
cam_pclk Camera Pixel Clock   I
cam_wen Camera Write Enable Optional I
cam_strobe Flash Strobe Control Signal Optional O
cam_global_reset Global Reset - Strobe Synchronization Optional IO
cam_shutter Mechanical Shutter Control Signal Optional O

The LI-3M02 daughter card is connected to the 26-pin J31 connector on the EVM. A 24 MHz oscillator on the LI daughter card drives the sensor device; alternatively, the camera clock output generated by the processor ISP block can be used to drive the sensor pixel clock. The horizontal sync (HS) and vertical sync (VS) lines are used to detect end of line and end of field respectively. Sensor data lines CMOS D4:D11 are connected to processor D4:D11.

../_images/AM37xx_LI-3M02_Single_Camera_Approach.png

Connectivity of LI-3M02 daughter card to AM/DM37x

  • Note: H/V signal connectivity is not required for ITU-R BT.656 input since the synchronization signals are extracted from the BT.656 bit stream

Media Controller Framework

SDK 5.02 includes support for the media controller framework. For additional details of the media controller framework and its usage, please refer to the PSP capture driver user's guide.

In order to enable streaming, the links between entities must be established. Currently only the MT9T111/TVP5146 => CCDC => Memory path has been validated.
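
If the media-ctl utility is available in your filesystem, the same links can also be set up by hand. A minimal sketch, assuming the entity and pad names shown in the log below:

media-ctl -l '"mt9t111 2-003c":0->"OMAP3 ISP CCDC":0[1]'
media-ctl -l '"OMAP3 ISP CCDC":1->"OMAP3 ISP CCDC output":0[1]'

The log below shows the entity enumeration and the links being enabled.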

Media: Opened Media Device
Enumerating media entities
[1]:OMAP3 ISP CCP2
[2]:OMAP3 ISP CCP2 input
[3]:OMAP3 ISP CSI2a
[4]:OMAP3 ISP CSI2a output
[5]:OMAP3 ISP CCDC
[6]:OMAP3 ISP CCDC output
[7]:OMAP3 ISP preview
[8]:OMAP3 ISP preview input
[9]:OMAP3 ISP preview output
[10]:OMAP3 ISP resizer
[11]:OMAP3 ISP resizer input
[12]:OMAP3 ISP resizer output
[13]:OMAP3 ISP AEWB
[14]:OMAP3 ISP AF
[15]:OMAP3 ISP histogram
[16]:mt9t111 2-003c
[17]:tvp514x 3-005c
Total number of entities: 17
Enumerating links/pads for entities
pads for entity 1=(0 INPUT) (1 OUTPUT)
[1:1]===>[5:0]  INACTIVE

pads for entity 2=(0 OUTPUT)
[2:0]===>[1:0]  INACTIVE

pads for entity 3=(0 INPUT) (1 OUTPUT)
[3:1]===>[4:0]  INACTIVE
[3:1]===>[5:0]  INACTIVE

pads for entity 4=(0 INPUT)

pads for entity 5=(0 INPUT) (1 OUTPUT) (2 OUTPUT)
[5:1]===>[6:0]  ACTIVE
[5:2]===>[7:0]  INACTIVE
[5:1]===>[10:0] INACTIVE
[5:2]===>[13:0] ACTIVE
[5:2]===>[14:0] ACTIVE
[5:2]===>[15:0] ACTIVE

pads for entity 6=(0 INPUT)

pads for entity 7=(0 INPUT) (1 OUTPUT)
[7:1]===>[9:0]  INACTIVE
[7:1]===>[10:0] INACTIVE

pads for entity 8=(0 OUTPUT)
[8:0]===>[7:0]  INACTIVE

pads for entity 9=(0 INPUT)

pads for entity 10=(0 INPUT) (1 OUTPUT)
[10:1]===>[12:0]        INACTIVE

pads for entity 11=(0 OUTPUT)
[11:0]===>[10:0]        INACTIVE

pads for entity 12=(0 INPUT)

pads for entity 13=(0 INPUT)

pads for entity 14=(0 INPUT)

pads for entity 15=(0 INPUT)

pads for entity 16=(0 OUTPUT)
[16:0]===>[5:0] ACTIVE

pads for entity 17=(0 OUTPUT)
[17:0]===>[5:0] INACTIVE

Enabling link [MT9T111]===>[ccdc]
[MT9T111]===>[ccdc]     enabled
Enabling link [ccdc]===>[video_node]
[ccdc]===>[video_node]  enabled

Capture: Opened Channel
successfully format is set on all pad [WxH] - [640x480]
Capture: Capable of streaming
Capture: Number of requested buffers = 3
Capture: Init done successfully


Display: Opened Channel
Display: Capable of streaming
Display: Number of requested buffers = 3
Display: Init done successfully

Display: Stream on...
Capture: Stream on...

AM/DM37x ISP Configuration

The ISP CCDC block should be configured to enable 8-bit YUV 4:2:2 parallel data input; the dumps below give the ISP and CCDC register values in this mode.
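
The register values can be inspected from the target shell, for example with the devmem2 utility if it is included in your filesystem (a sketch; substitute any address from the lists below):

devmem2 0x480BC040        # read ISP_CTRL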

ISP Registers:

ISP_CTRL: 0x480BC040
29C14C
ISP_SYSCONFIG: 0x480BC004
2000
ISP_SYSSTATUS: 0x480BC008
1
ISP_IRQ0ENABLE: 0x480BC00C
811B33F9
ISP_IRQ0STATUS: 0x480BC010
0
ISP_IRQ1ENABLE: 0x480BC014
0
ISP_IRQ1STATUS: 0x480BC018
80000300

CCDC Registers:

CCDC_PID: 0x480BC600
1FE01
CCDC_PCR
0
CCDC_SYN_MODE: 0x480BC604
31000
CCDC_HD_VD_WID: 0x480BC60C
0
CCDC_PIX_LINES: 0x480BC610
0
CCDC_HORZ_INFO: 0x480BC614
27F
CCDC_VERT_START: 0x480BC618
0
CCDC_VERT_LINES: 0x480BC61C
1DF
CCDC_CULLING: 0x480BC620
FFFF00FF
CCDC_HSIZE_OFF: 0x480BC624
500
CCDC_SDOFST: 0x480BC628
0
CCDC_SDR_ADDR: 0x480BC62C
1C5000
CCDC_CLAMP: 0x480BC630
10
CCDC_DCSUB: 0x480BC634
40
CCDC_COLPTN: 0x480BC638
0
CCDC_BLKCMP: 0x480BC63C
0
CCDC_FPC: 0x480BC640
0
CCDC_FPC_ADDR: 0x480BC644
0
CCDC_VDINT: 0x480BC648
1DE0140
CCDC_ALAW: 0x480BC64C
0
CCDC_REC: 0x480BC650
0
CCDC_CFG: 0x480BC654
8800
CCDC_FMTCFG: 0x480BC658
0
CCDC_FMT_HORZ: 0x480BC65C
280
CCDC_FMT_VERT: 0x480BC660
1E0
CCDC_PRGEVEN0: 0x480BC684
0
CCDC_PRGEVEN1: 0x480BC688
0
CCDC_PRGODD0: 0x480BC68C
0
CCDC_PRGODD1: 0x480BC690
0
CCDC_VP_OUT: 0x480BC694
3BE2800
CCDC_LSC_CONFIG: 0x480BC698
6600
CCDC_LSC_INITIAL: 0x480BC69C
0
CCDC_LSC_TABLE_BA: 0x480BC6A0
0
CCDC_LSC_TABLE_OF: 0x480BC6A4
0

5.2.7. WLAN and Bluetooth

Introduction

This page is a landing page for the entire set of WLAN and Bluetooth Demos available for the WL127x. Many of the demos are platform-agnostic, others apply specifically to a single platform.

The WL127x’s dual mode 802.11 b/g/n and Bluetooth transceiver gives users a robust selection of applications. A list of some basic use cases preloaded on the EVMs can be seen below:


Scenario Description
Bluetooth A2DP profile Play a *.wav music file from the EVM on a stereo headset
Bluetooth OPP profile Send a *.jpg image from the EVM to a cellular phone via OPP profile
Bluetooth FTP profile Sends a text file from the EVM to a PC via FTP profile
Wireless LAN ping Connect to an Access Point and perform a ping test
Wireless LAN throughput Test UDP downstream throughput using the iPerf tool
Web browsing through the WLAN Browse the web over WLAN using a PC connected to the EVM Ethernet port
Bluetooth and WLAN coexistence Play a *.wav music file from the EVM on a stereo headset while browsing the web over WLAN

Bluetooth Demos

Classic Bluetooth

Bluetooth Low-Energy (BLE)


WLAN Demos

First Time

If running the WLAN demos on an EVM for the first time, it is recommended that you first complete the two steps below:

  • Step 1: Calibration – Calibration is a one-time procedure, performed before any WLAN operation. Calibration is performed once after the board assembly, or in case the 12xx connectivity daughtercard or EVM are replaced (refer to Calibration Process).

You may refer to Linux Wireless Calibrator page for more instruction.

  • Step 2: MAC address settings - This is a one-time procedure, done after the calibration, and before any WLAN operation is performed (refer to: <modifying WLAN MAC address>)

WLAN Station Demos

WLAN Soft AP Demos

WLAN - WiFi Direct Demos

Miscellaneous WLAN Demos


Miscellaneous Demos

Regulatory Domain

5.2.8. Hands on with QT

Introduction

This lab is going to give you a hands-on tutorial on QT, the GUI development tool, which is a component of the Sitara SDK. Each of the following sections will walk you through a particular lab exercise, the key points to take away, and the step-by-step instructions to complete the lab. The labs in this section will utilize both a command-line build approach and an IDE approach using QT Creator.

Training: - You can find the necessary code segments embedded in this article. You can cut and paste them from the wiki as needed.

Hands On Session: - In the VMware image, there are cheater files in /home/sitara/QT_Lab. You can cut and paste sections of code from these files rather than having to type them in. You can find a copy here; just right click and select “Save Target As”: File:QT Lab.tar.gz

NOTE: In this guide commands to be executed for each step will be marked in BOLD

Lab Configuration

The following are the hardware and software configurations for this lab. The steps in this lab are written against this configuration. The concepts of the lab will apply to other configurations but will need to be adapted accordingly.

Hardware

  • AM335x EVM-SK (TMDSSK3358)

    NOTE All Sitara boards with an LCD display or HDMI/DVI out can be used for these labs, but the steps below related to serial and ethernet connections may differ.

  • Router connecting AM335x EVM-SK and Linux Host

  • USB cable connection between AM335x EVM-SK and Linux Host using the micro-USB connector (J3/USB0)

    NOTE The AM335x EVM uses a standard DB9 connector and serial cable. New Win7 based Host PCs may require a USB-to-Serial cable since newer laptops do not have serial ports. Additionally, since the serial connection is not required for these labs, you can telnet or ssh into the target over ethernet instead of using a serial terminal for target interaction.

  • 5V power supply (typically provided with the AM335x EVM-SK)

Software

  • A Linux host PC configured as per the Linux Host Configuration page
    • PLEASE NOTE: currently the Linux Host Configuration page is under revision. Please download Ubuntu 14.04 rather than Ubuntu 12.04 at this link https://releases.ubuntu.com/14.04/
  • Sitara Linux SDK installed. This lab assumes the latest Sitara Linux SDK is installed in /home/sitara. If you use a different location please modify the below steps accordingly.
  • SD card with Sitara Linux SDK installed.
  • QT Creator 5.7.0 installed on your Linux host.
    • You can download Qt Creator from open source distribution version of Qt https://download.qt.io/official_releases/qt/5.7/5.7.0/
    • QT will download as a .run file. Make the file executable by running chmod +x <qtfile>, then run ./<qtfile>. These steps should launch the QT installer.
    • Extract the package. Qt Creator will be under the Tools/Qt5.7.0 folder. In some cases it may also be under the /opt folder; type cd /opt/Qt5.7.0 on the command line to locate it.
    • The labs in this wiki page are validated with Ubuntu 14.04 on a 64-bit machine.

Supported Platforms and revisions

The following platforms are system tested to verify proper operation.

  • am335x
  • am37x
  • am35x
  • am180x
  • BeagleBoard XM

This current version has been validated on ti-processor-sdk version 02.00.01.07 using QT Creator 3.6.0

LAB 1: Hello World Command Line

Description

This lab is optional; it introduces where to find the QT components and build tools in the Sitara SDK. Approximate time to complete this lab: 15 minutes. This section will cover the following topics:

  • Introduction to build tools
  • environment setup script
  • The QT component of the Sitara SDK
    • where to find things in the Sitara SDK

Key Points

  • Where in the SDK to find the build tools
  • Where in the SDK to find the QT components
  • How to setup your build environment
  • How to utilize the above points to create a Hello World application.

Lab Steps

  1. Connect the cables to the EVM. For details on where to connect these cables see the Quick Start Guide that came with your EVM.

    1. Connect the Serial cable to provide access to the console.
    2. Connect the network cable
    3. Insert the SD card into the SD connector
    4. Insert the power cable into the 5V power jack
  2. Power on the EVM and allow the boot process to finish. You will know when the boot process has finished when you see the Matrix application launcher on the LCD screen

    NOTE You may be required to calibrate the touchscreen. If so follow the on screen instructions to calibrate the touchscreen.

  3. Open a terminal window on your Linux host by double clicking the Terminal icon on the desktop

  4. The cross-compiler is located in the linux-devkit/bin directory of the SDK installation directory. In the terminal window enter the following commands, replacing the <machine> and <sdk version> fields with the target machine you are using and the SDK version installed.

    NOTE You can use TAB completion to help with this

    • cd /home/sitara/AM335x/ti-processor-sdk-linux-<machine>-<sdk version>/linux-devkit/sysroots/x86_64-arago-linux/usr/bin
    • ls
  5. You should see a listing of the cross-compile tools available like the one below.

../_images/Sitara-linux-training-cross-tools-1.png
  1. To locate the pre-built ARM libraries perform the following commands:
    • cd /home/sitara/AM335x/ti-processor-sdk-linux-<machine>-<sdk version>/linux-devkit/sysroots/cortexa8hf-vfp-neon-linux-gnueabi/usr/lib
    • ls
  2. You should now see a listing of all the libraries (some are contained within their individual sub-directories) available as pre-built packages within the SDK.
  3. Now list only the QT libraries from the same directory by listing all libs starting with libQt.
    • ls libQt*
  4. You should see a listing of QT related libraries that can be used to build and run QT projects.
../_images/Sitara_Linux_QT_library_listings_1.png
  1. You can also find out where the QT header files are located. The directory below contains sub-directories full of QT header files.
    • cd /home/sitara/AM335x/ti-processor-sdk-linux-<machine>-<sdk version>/linux-devkit/sysroots/cortexa8hf-vfp-neon-linux-gnueabi/usr/include/qt5
    • ls
  2. In order to make it easier to perform cross-compilations and to ensure linking with the proper cross-compiled libraries instead of the host system libraries, the environment-setup script has been created in the linux-devkit directory. This script configures many standard variables such as CC to use the cross-compile toolchain, as well as adding the toolchain to your PATH and configuring paths for library locations. To utilize the settings provided by the environment-setup script you will need to source it. Perform the following commands to source the environment-setup script and observe the change in the QMAKESPEC variable:
    • echo $QMAKESPEC
    • source /home/sitara/AM335x/ti-processor-sdk-linux-<machine>-<sdk version>/linux-devkit/environment-setup
    • echo $QMAKESPEC
  3. You should see the changes that were applied by executing the setup script.
../_images/Sitara_Linux_QT_environment_setup_script.jpeg
  1. You should have observed that the QMAKESPEC variable now contains the path to the QMAKESPEC files. Additionally your compile tools were added. There was also another change that occurred which was that your standard prompt changed from sitara@ubuntu to [linux-devkit]. The purpose of this change is to make it easy to identify when the environment-setup script has been sourced. This is important because there are times when you DO NOT want to source the environment-setup script. A perfect example is when building the Linux kernel. During the kernel build there are some applications that get compiled which are meant to be run on the host to assist in the kernel build process. If the environment-setup script has been sourced then the standard CC variable will cause these applications to be built for the ARM, which in turn will cause them to fail to execute on the x86 host system.

  2. As mentioned above sometimes it is not appropriate to source the environment-setup script, or you only want to source it during a particular build but not affect your default environment. The way this is done in the SDK is to source the environment-setup script inside of the project Makefile so that it is used only during the build process.

  3. Take a look at the environment setup file to see what is going on there. Look through the file to see where the compile tool variables such as CC, CPP and PATH are defined.

    • gedit /home/sitara/AM335x/ti-processor-sdk-linux-<machine>-<sdk version>/linux-devkit/environment-setup
  4. It is now time to build a Hello World project using QT. You need to create two files: helloworld.cpp and helloworld.pro

    • mkdir /home/sitara/AM335x/ti-processor-sdk-linux-<machine>-<sdk version>/example_applications/helloworld

    • cd /home/sitara/AM335x/ti-processor-sdk-linux-<machine>-<sdk version>/example_applications/helloworld

    • gedit helloworld.cpp and add the following code

      IMPORTANT You can find pre-written files in the /home/sitara/sitara-training-helper-files/QT_Lab/lab1 directory. You can just copy those files to your directory instead of typing the contents if you want to.

      #include <QApplication>
      #include <QLabel>

      int main(int argc, char **argv)
      {
          QApplication app(argc, argv);
          QLabel label("Hello World");
          label.show();
          return app.exec();
      }

    • gedit helloworld.pro and add code

      IMPORTANT You can find pre-written files in the /home/sitara/sitara-training-helper-files/QT_Lab/lab1 directory. You can just copy those files to your directory instead of typing the contents if you want to.

      QT += core gui widgets
      SOURCES += helloworld.cpp

  5. Now let's use qmake to create a Makefile

    • qmake helloworld.pro
  6. Notice how qmake automatically generated a Makefile for us; now let's build.

    • make
  7. Notice the build is using our cross-compiler, arm-linux-gnueabihf-g++

../_images/Sitara_Linux_QT_make_using_cross_compile.jpeg
  1. Also notice we now have an executable; let's see what type of file we created

    • file helloworld
  2. You should see something similar to the following: helloworld: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.31, BuildID[sha1]=0x8569a0956d8efffcfde68fca5c883be5fa4f1c31, not stripped

  3. Finally, let's copy helloworld over to our target file system and run it.

    • If you have not already done so connect with minicom and type ifconfig to find your target’s ip address

      NOTE You can also get your ip address from Matrix if it is running. Select Settings->Network Settings

    • On your Linux host console, issue the command scp -r helloworld root@xx.xx.xx.xx:/home/root, replacing xx.xx.xx.xx with your target's IP address.

    • When asked for password, just hit return

    • Type yes when asked if you would like to continue

    • Move back over to your minicom window and execute it. You should find the helloworld in your default /home/root directory on the target.

      • ./helloworld
  4. You should see helloworld print on the LCD panel of your target board.

LAB 2: QT Creator Hello World

Description

This section will cover setting up QT Creator, the integrated development environment. We will start to investigate how QT Creator aids in rapid GUI development.

Key Points

  • Setting up QT Creator to find your tools
  • Setting up QT Creator to communicate with the target platform
  • Creating hello world and run using QT Creator.

Lab Steps

  1. Source the environment setup file to ensure all the paths are set up correctly. This was done in the previous section. If you already see [linux-devkit]: as your prompt then you can skip this step.

    • source /home/sitara/AM335x/ti-processor-sdk-linux-<machine>-<sdk version>/linux-devkit/environment-setup
  2. Bring up Qt Creator

    • /home/sitara/Qt5.7.0/Tools/QtCreator/bin/qtcreator

      IMPORTANT By bringing QT Creator up manually, you will pass in the environment setup. If you double click on the Qt Creator icon from the Desktop, you will not have the environment set up correctly and your lab will not work later on.

  3. QT creator should be up and running now

../_images/Sitara_Linux_QT_qtcreator.png
  1. Now let's set up QT Creator to configure qmake. From the QT Creator main menu shown above, select the following:

    • Tools -> Options...
    • On the left side vertical menubar click Build & Run
    • Click the Qt Versions tab under Build & Run
    • Remove any versions that may already exist to make sure you start with a clean configuration
    • Click Add... on the right
    • Navigate to /home/sitara/AM335x/ti-processor-sdk-linux-<machine>-<sdk version>/linux-devkit/sysroots/x86_64-arago-linux/usr/bin/qt5
    • Select qmake then click on Open
    • Double click on Version Name and give the Qt Version a descriptive name such as QT 5.5 Sitara See image below.
    ../_images/Sitara_Linux_QT_options.jpeg

    IMPORTANT Notice there is a red ! icon. Don’t worry, lets add in the toolchain next and it should change to yellow.

    • Click Apply to save your changes
  2. Now we will setup the toolchain

    • Click the Compiler tab under Build & Run
    • Click Add in the top right and add a GCC
    • Change the name to arm-linux-gnueabihf-gcc. This can be done by editing the “Name” field.
    • For Compiler Path select Browse
      • Navigate to /home/sitara/AM335x/ti-processor-sdk-linux-<machine>-<sdk version>/linux-devkit/sysroots/x86_64-arago-linux/usr/bin
      • Select arm-linux-gnueabihf-gcc and click on open
      • Make sure to click on Apply to save your changes.
../_images/Sitara-compilerAndDebugger.jpeg
  1. Next, let’s setup the Debuggers.
    • Click the Debuggers tab under Build and Run
    • Click Add in the top right
    • Change the name to GDB Engine. This can be done by editing the “Name” field.
    • For Debugger Path select Browse
      • Navigate to /home/sitara/AM335x/ti-processor-sdk-linux-<machine>-<sdk version>/linux-devkit/sysroots/x86_64-arago-linux/usr/bin
      • Select arm-linux-gnueabihf-gdb and click on open
      • Make sure to click on Apply to save your changes.
../_images/Sitara-Debugger.png
  1. Click the Kits tab under Build & Run
    • Change the name to give the device a unique name: AM335x EVM
    • Select Device type Generic Linux Device instead of Desktop.
    • Select Compiler arm-linux-gnueabihf-gcc instead of the host gcc.
    • For Debugger select GDB Engine.
    • For QT Version select Qt 5.5 Sitara
    • Click Apply to register the options.
../_images/Sitara-linux-kits.png
  1. Now let’s setup our Target. While still in the Tools -> Options menu

    • On the left side of the window, select the Devices tab
    • In Devices: click the Devices tab
    • Click Add... in the top right
    ../_images/Sitara_Linux_QT_options_add_device.png
    • Select Generic Linux device and click on Start Wizard
    ../_images/Sitara_Linux_QT_Device_Configuration_Wizard_Selection.jpeg
    • The Device Configuration Wizard Selection Dialog box comes up

      • Type in the name of the Device: AM335x EVM

      • Type in the IP address of the Embedded Linux Device. Type the IP address for your board, not the one shown in the screen capture.

        NOTE This is the same IP address you obtained in the previous lab

      • For Username type in root (Most Texas Instruments Boards have this username)

      • Make sure Authentication type is Password, but leave the password field blank.

      • Click Next

      ../_images/Sitara_Linux_options_Generic_Linux_Device_Configuration_Setup.jpeg
    • Click Finish. You should see that the target test passed, so you can close that window.

../_images/Sitara_target_test.png
  1. Now we need to setup an SSH key so that the host can communicate with the target

    • Still under the Devices tab click Create New for Private key file
    ../_images/Sitara_Linux_QT_create_new_ssh_key.png
      • Key algorithm RSA
      • Select Key size: 1024
      • Then click Generate and Save Key Pair...
    ../_images/Sitara_Linux_QT_options_SSH_Key_Configuration.jpeg
      • Click Do not Encrypt key file
    ../_images/Sitara_Linux_QT_Password_for_Private_Key.jpeg
      • Just use the default name qtc_id.pub and Click Save and Click Close to close the Generate SSH Key window.
    • Under the Devices tab now click Deploy Public Key...
    ../_images/Sitara_Linux_QT_Deploy_Public_Key.png
      • Select the file just generated (should be under /home/sitara/.ssh)

        IMPORTANT You may need to right click and select show hidden files

      • Select the file qtc_id.pub and click on Open, shortly a window should show up saying “Deployment finished sucessfully”

    ../_images/Sitara_successful_deploy.png
    • Close the window and Click OK to exit the Linux Devices Window.
../_images/Sitara_Linux_QT_ok_to_close_devices.png
  1. Now that we are setup lets create a project build it and run it on the host

    • Select File -> New File or Project
    • Then select Applications under Projects, then select QT Widgets Application in the top center
    • Click on Choose
    ../_images/Sitara_Linux_QT_new_project.png
    • Type in the name of the project as terminal. We will be building on this project in the next section.
    • Change the Create in value to /home/sitara
    • Click on Next
    ../_images/Sitarea_Linux_Qt_project_terminal.png
    • Click on Next again
    • Type in terminal for the Class name
    • Click Next
    ../_images/Sitara_Linux_QT_new_terminal_props.png
    • Click Finish
  2. Now we’ve setup a new project let’s explore and add some code.

    • Click on Edit on the left hand menubar and look at the project files including terminal.pro, main.cpp, terminal.cpp and terminal.ui
    ../_images/Sitara-terminal-pro.jpeg
    • Under Forms, double click on terminal.ui This will bring up the widget editor.
    • Remove the menuBar where it says Type Here on the top of the ui
    • Right click on the menuBar and select Remove MenuBar
    • Use the same procedure to remove the statusBar at the bottom of the ui. It is not that easy to see, but it is there and blank.
    • Once again remove the ToolBar (mainToolBar). It is found at the top of the ui and is also hard to see.
    ../_images/RemoveMenubar.png
    • Find the label widget in the category of display widgets, left click and drag it on to the User Interface (UI).
    • Type Hello World!!! into the label widget and stretch out the borders so you can see all the letters.
    ../_images/Sitara_hello_world_UI.png
  3. Now we need to check and update the build and run settings:

    • On the left side vertical menubar select Projects
    • Select the Build and Run tab and select Build under AM335x.
    • Uncheck Shadow build as shown in the screenshot below:
    ../_images/Sitara_Build_settings_1.png
    • Now under the AM335x select the Run tab
    • Under Method, click Add and then select Deploy to Remote Linux Host
    • However, you will see the <Remote path not set> error under the Run section.
    ../_images/Sitara_deploy_remote.jpeg
    • To fix the <Remote path not set> error do the following:

      • Click on Edit on the left side vertical bar and click on terminal.pro

      • Add the two lines below to the bottom of terminal.pro as shown in the screen shot below

        IMPORTANT You can find pre-written files in the /home/sitara/sitara-training-helper-files/QT_Lab/lab2 directory. You can just copy those files to your directory instead of typing the contents if you want to.

        target.path += /home/root
        INSTALLS += target

      ../_images/Sitara_add_target_loc.jpeg
      • Select File -> Save
    • Click on Projects on the left side vertical bar and you should now see the error is gone and replaced with /home/root/terminal

    • Now in the Run portion: Select Add -> terminal (on Remote Generic Linux Host)

    ../_images/Sitara_remote_host.jpeg
  4. Finally we are ready to run

    • Click the Green Arrow on the bottom left to run the project

      NOTE ti-processor-sdk-linux-<> versions 02.00.00.00 and 02.00.01.07 ship a dropbear package that does not let QT Creator deploy the built image to the target board. dropbear version 2015.71 fixes this problem and the prebuilt binary can be downloaded from here. Replace /usr/sbin/dropbearmulti on the target board filesystem with the downloaded 2015.71 dropbearmulti binary, change the copied file mode to executable, and restart the target board. QT Creator should then be able to deploy the binary successfully.

    • If you receive the error ‘g++: Command not found’, navigate to tools>options>build and run>kits. Add “linux-oe-g++” to the “Qt mkspec” text box

    ../_images/CompileErrorFix.PNG
    • Save all files if asked
    ../_images/Sitara-linux-Terminal-hello.jpeg
  5. Extra Investigation:

    • From minicom: run top on the target while helloworld is running. Check out CPU utilization and memory utilization for this simple app.
    • See how much memory is used by helloworld by itself; you may need to stop Matrix with /etc/init.d/matrix-gui-2.0 stop

LAB 3: Terminal project

Description

This section shows how you can use QT Creator to create a GUI from scratch.

Key Points

  • Adding widgets to a User Interface (ui).
  • Adding code to make the widgets do something meaningful.

Lab Steps

  1. We will continue on from the previous lab using the terminal project you created. First we will remove the Hello world widget and resize the ui.

    • Click terminal.ui to bring up design mode.
    • Click the Hello World widget, and delete it making the ui empty and clean
  2. This next action is mainly for those with small displays, but will not adversely affect larger displays.

    • Select the entire ui as shown below.
    • Edit the Geometry values to Width = 440 and Height = 230 as shown.
    ../_images/Sitara-Linux_QT_Resize_screen.png
  3. Next we will add the Tab Widget. Just like the label widget, drag it over to the ui.

    ../_images/Sitara_tab_widget.png
    • Select the tab widget layout. Currently, the tab widget is part of our ui, but it is just sitting at fixed location where we dragged it.
      • On the upper right side right click on the terminal QWidget and select Lay Out -> Lay Out Vertical as shown below
    ../_images/Sitara_layout_vertically.png
    • Now the tab widget should completely fill the ui.
  4. Now let’s ad

    • Two Push Button Widgets
    • One Text Browser widget
    • One Line Edit widget.
      • Drag all of them up to the ui
    • Now lets set the TabWidget layout like we did with the terminal widget
      • Right click on the upper right QtabWidget -> Lay Out -> Lay Out in a Grid
      • Move them around so they look somewhat like the screen shot below
../_images/Sitara_ui_layout.jpeg
  1. Let's rename the Push Button widgets.
    • Double click on the PushButton text in the ui
    • Edit the upper push button to say Send CMD
    • Edit the lower push button to say Exit
    • Depending on how the grid layout worked for you, lets stretch out the Text Browser widget and the bottom Push Button widget to take up the whole screen horizontally if needed.
      • Just click on the widget and drag the border to fill the screen. See the screenshot below:
../_images/Sitara_adjust_widths.png
  1. Now lets give our widget objects a unique name.
    • Select the Text Browser widget
    • Go over to properties on the bottom right and edit ObjectName
      • Add the text _linuxshell to the end of the textBrowser name as shown below:
../_images/Sitara_rename_objects.jpeg
  1. Now create unique names for the other 3 widgets.

    • For lineEdit: lineEdit_commandline
    • For the Send CMD push button: pushButton_sendcmd
    • For exit push button: pushButton_exit
  2. We are not done yet, but for fun lets run this application and see what it looks like on the target.

    • Push the Green Arrow at the bottom left to launch on the target. Save all files if asked.

      IMPORTANT You can not start a new application on the target if your previous one is still running. To exit, push the “X” on the menubar at the top right of your target.

      NOTE It should appear just as we designed it, but pushing the buttons has no effect because we haven’t added any code yet.

  3. Now we are going to add code to make the buttons do what we wish them to do.

    • Right click on the Exit widget -> Go to slot
    ../_images/Sitara_goto_slot.jpeg
    • In the Go to Slot selector, select the first selection clicked() and hit OK
  4. Notice this pops you over to your terminal.cpp file where some code has been automatically added for you.

    IMPORTANT The code additions below can also be found in the /home/sitara/sitara-training-helper-files/QT_Lab/lab3 directory and can be copied into your project

    • Add the following line of code to on_pushButton_exit_clicked() qApp->quit();
    ../_images/Sitara_pushbutton.png
  5. Now repeat the same process you did for the exit button on the send CMD button. We will add code to control that button press.

    NOTE You will need to go back to the ui file to do this

    • Right click on the Send CMD widget -> Go to slot
    • In the Go to Slot selector, select the first selection clicked() and hit OK
    • Add the following line at the top of terminal.cpp to support QProcess. #include <QtGui>
    • Add the following code to on_pushButton_sendCmd_clicked():

      QString LinuxTexttoSend = ui->lineEdit_commandline->text();
      // QProcess is used to run binaries in /usr/bin
      QProcess process;
      // Merge channels so the output of the binaries can be seen
      process.setProcessChannelMode(QProcess::MergedChannels);
      // Start whatever command is in LinuxTexttoSend
      process.start(LinuxTexttoSend, QIODevice::ReadWrite);
      // Run the command and loop the output into a QByteArray
      QByteArray data;
      while(process.waitForReadyRead())
          data.append(process.readAll());
      ui->textBrowser_linuxshell->setText(data.data());
    ../_images/Sitara_SendCMD_code.png
  6. Finally, since we don't have a keyboard to type a command, let's add a predefined command to our Line Edit widget as shown below:

    • Double click on the line edit and add the text: date --help
    ../_images/Sitara_add_command.png
  7. Now run, you should see interaction with the Linux shell when you push sendCMD.

LAB 4: Enhancing the project with a web viewer, soft keyboard, and Style Sheets

Description

In this section we Enhance our GUI with a web browser, soft keyboard and style sheets.

Key Points

  • Adding a Web view.
  • Adding a softkeyboard.
  • How to adjust the look and feel

Lab Steps

  1. One of the first things we did in the Terminal Lab was to add a Tab widget which is a container widget. So far we added a Linux shell terminal to Tab 1, now lets add a Web View widget to Tab 2

    • From the terminal.ui, click on Tab 2 and notice it is empty.

      • Drag over a QWebView widget to Tab 2

      • Set the Layout of Tab 2 to a vertical layout

        NOTE Do you recall how we did this on the Terminal Lab? On the top right, right click tabWidget -> Lay Out -> Lay Out Vertically

    • When complete with the above steps, it should look like the following:

    ../_images/Sitara_webview.jpeg
  2. Now we can add a default URL. Since we are not connected to the internet, lets bring up matrix since it is running on a local server.

    • Select the WebView widget and on the bottom right find the url property of QWebView near the bottom of the list.

    • Type in: http://localhost

      ../_images/Sitara_default_url.png

      NOTE Notice how the Webview in your ui tries to display the webpage but can’t since it is not local to your host. Some people see this error and some do not.

  3. Now we need to add the webkit libraries to our project.

    • Go to Edit mode and bring up the terminal.pro file
    • Add webkitwidgets as shown below
    ../_images/Sitara_webkitwidgets.png
  4. Give it a try and run it, you should see the Matrix displayed.

    IMPORTANT You will need to use the Exit button on Tab1 to close this program

  5. Now lets address a couple of cosmetic issues. Notice how our new GUI does not fill the entire screen.

    • Change over to Edit mode and bring up main.cpp.

    • Find the line w.show()

      • Remove that line
      • type w. and notice how QT Creator will fill in all the possible options. Also notice that when you start to type it will jump the available options with the matching text.
      • Select w.showFullScreen(); see screen shot.
      ../_images/Sitara_fullscreen.png
  6. Now re-run and notice how it takes up the full screen.

    ../_images/Sitara_matrix.PNG


    IMPORTANT You will need to use the Exit button on Tab1 to close this program

  7. Now let's fix another issue back on Tab 1. We hard-coded in a default command: date --help

    • Since we did not provide a keyboard, lets add a soft keyboard.

      • Download a keyboard class from this location: Qt Keyboard Template wiki. These instructions assume you downloaded the tarball to the /home/sitara directory.

        IMPORTANT If you are using a TI laptop or followed the host configuration steps you can find these files in the /home/sitara/sitara-training-helper-files/QT_Lab/keyboard directory and can skip these steps

        • cd /home/sitara
        • tar -xzvf Keyboard.tar.gz
      • Copy the keyboard files to your terminal project directory

        • cd /home/sitara/terminal/
        • cp -rf <keyboard extraction directory>/keyboard .
    • Now lets add keyboard into our project.

      • Go to Edit mode and right click on terminal -> Add Existing Files as shown below.
      ../_images/Sitara_addexisting.png
      • Navigate to the keyboard directory /home/sitara/terminal/keyboard and add all 4 files in that directory.

        ../_images/Sitara_addkeyboard.png

        NOTE Notice how all four keyboard files are now part of the Terminal project. Click on the keyboard.ui and take a look. It is made up mainly of QPushButtons and one QLineEdit and layout controls

    • Now we need to hook in the keyboard to the terminal GUI.

      IMPORTANT As always, you can find a copy that you can copy into your project in the /home/sitara/sitara-training-helper-files/QT_Lab/lab4 directory

      • Add some code to terminal.h

        • At the top of the file add #include "keyboard/keyboard.h"
        • In private slots: add void open_keyboard_lineEdit();
        • In the section private: add Keyboard *lineEditkeyboard;
        ../_images/Sitara_terminal_h.png
      • Now add some code to terminal.cpp

        • In the function terminal::terminal, add:

          lineEditkeyboard = new Keyboard();
          connect(ui->lineEdit_commandline, SIGNAL(selectionChanged()), this, SLOT(open_keyboard_lineEdit()));

        • Also add the function below to the bottom of terminal.cpp:

          void terminal::open_keyboard_lineEdit()
          {
              QLineEdit *line = (QLineEdit *)sender();
              lineEditkeyboard->setLineEdit(line);
              lineEditkeyboard->show();
          }
../_images/Sitara_terminal_cpp.jpeg
  1. You are now ready to run your code.

    • Run and verify when you touch the line edit widget, that the keyboard pops up.

      IMPORTANT Depending on your screen resolution you may need to double-tap the bar at the top of the keyboard to size it to full screen

  2. Type in a linux command such as ps to list the running processes and verify that you get back the expected results.

  3. Next let's add specific colors to the GUI components using style sheets.

    • Go back to your ui in the upper right corner: right click on the terminal widget -> Change styleSheet
    ../_images/Sitara_stylesheet.jpeg
    • Cut and paste the terminal style sheet settings at the end of this lab section into the Terminal stylesheet settings and apply them

      IMPORTANT You can find this file in the /home/sitara/sitara-training-helper-files/QT_Lab/lab4/style_sheet_terminal.txt file

    • Do the same thing for the Tab Widget by cutting and pasting from the tab style sheet settings at the end of this Lab section

      IMPORTANT You can find this file in the /home/sitara/sitara-training-helper-files/QT_Lab/lab4/style_sheet_tab.txt file

  4. voila ... TI colors - your setup should now match the look and feel of the one below:

    ../_images/Sitara_tabStyle.jpeg
  5. Run it!

  6. Extra investigation: Run a debug session and set breakpoints in keyboard.cpp. Notice how each QPushButton signals the keyboardHandler slot.

    • NOTE If breakpoints are not working for you, verify you have created a Debug version of terminal and not a Release version. Look under Projects and “Build Settings” and check Details under Build Steps.

terminal style sheet settings

QWidget {
    background-color: rgb(0, 0, 0);
}

QTabWidget::pane {
    position: absolute;
    border: 2px solid red;
}

QTabWidget::tab-bar {
    alignment: center;
}


QTabBar::tab {
    color: red;
    background-color: black;
    border: 2px solid red;
    border-radius: 0px;
    padding: 4px;
    margin-left: 0.25em;
    margin-right: 0.25em;
}

QTabBar::tab:selected, QTabBar::tab:hover {
    color: white;
    background: red;
}

QPushButton {
     /**font: bold 16pt;
     color: white ;

     border-image: url(:/pushblueup.png);
     background-color: transparent;
     border-top: 3px transparent;
     border-bottom: 3px transparent;
     border-right: 10px transparent;
     border-left: 10px transparent;**/
 }

tab style sheet settings

QWidget{
    background-color: red;
}

QTextBrowser{
    background-color: black;
    color: yellow;
}

QLineEdit{
    background-color: white;
    color: black;
}

QPushButton{
}

QWebView{
    background-color: white;
}

LAB 5: Exploring Existing Demos and Examples

Key Points

  • Exploring existing projects in the QT SDK.
  • Using a SGX accelerated QT Demo

Lab Steps

  1. In a console window on your host:

    • gedit /home/sitara/AM335x/ti-processor-sdk-linux-<machine>-<sdk version>/example-applications/matrix-gui-browser-2.0/main.cpp

      NOTE This is the QT application which displays matrix for all Sitara platforms. As you can see it uses a QWebView just like we did in the Terminal Enhancements Lab. The main differences are that you pass in the url as an argument, and all window framing was removed.

  2. Now try this one using the minicom connection to your target; it may surprise some of you:

    • cd /usr/share/qt5/examples
    • We are now in the target filesystem provided with the Sitara SDK. Let's search for how many QT project files we can find.
      • find . -name "*.pro"
    • There are many QT project files here
      • find . -name "*.pro" | wc
    • Over 300 different projects already available in the SDK.
  3. Let's take a look at one specific example, hellogl2. This is an SGX-accelerated QT demo. In your minicom window do:

    • cd /usr/share/qt5/examples/opengl/hellogl2
    • run this ./hellogl2
  4. You should see an SGX accelerated demo

  5. As mentioned there are many demos available. Some may not work due to how QT was configured when it was built.

  6. Some additional demos of interest:

    1. /usr/share/qt5/examples/webkitwidgets/browser – This is the browser demo featured in Matrix.
  7. Extra Exercise: Pull one of the demos or examples into QT Creator by opening it as a project. Build it and run it on the target.

    NOTE You may need to do some project setup to make sure it will run on the target

Debugging QT Libraries

For debugging a QT application with source code for the QT libraries, the corresponding QT library needs to be installed to the <ti-processor-sdk-linux-xxx>/linux-devkit/sysroots/armv7ahf-neon-linux-gnueabi location. The *.ipk packages can be found in the Yocto build of the Processor SDK under build/arago-tmp-external-linaro-toolchain/work/armv7ahf-neon-linux-gnueabi; the Linux “find” command can be used to refine the search for the *.ipk file. For example, the following steps debug a qtbase application.
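
For example, a find invocation along the following lines can locate the debug package (the exact file name depends on the Qt version that was built):

find build/arago-tmp-external-linaro-toolchain/work/armv7ahf-neon-linux-gnueabi -name "qtbase-dbg*.ipk"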

1. Copy and install the associated ipk packages into the sysroot directory

  1. dpkg -x qtbase-dbg_xxx_armv7ahf-neon.ipk sysroots/armv7ahf-neon-linux-gnueabi/

2. Set sysroot in QT Creator: under Tools -> Options -> Debugger in QT Creator, go to the GDB tab and add these additional startup commands, for example:

   set sysroot /home/sitara/ti-processor-sdk-linux-xxx/linux-devkit/sysroots/armv7ahf-neon-linux-gnueabi
   set debug-file-directory /home/sitara/ti-processor-sdk-linux-xxx/linux-devkit/sysroots/armv7ahf-neon-linux-gnueabi/usr/lib/.debug


5.3. Application Demos

5.3.1. Dual Camera

Dual camera demo

The diagram below illustrates the video data processing flow of the dual camera demo.

../_images/Dual_camera_demo.png

The dual camera example demo demonstrates the following:

  1. Video capture using V4L2 interface from up to two cameras.
  2. QT QWidget-based drawing of the user interface using the linuxfb plugin. linuxfb is an fbdev-based, software-drawn QPA.
  3. Hardware accelerated scaling of input video from primary camera to display resolution using DSS IP.
  4. Hardware accelerated scaling of input video from secondary camera (if connected) to lower resolution using DSS IP.
  5. Hardware accelerated overlaying of two video planes and graphics plane using DSS IP.
  6. Scaling of two video planes and overlaying with graphics plane happens on the fly in single pass inside DSS IP using DRM atomic APIs.
  7. Snapshot of the screen using JPEG compression running on the ARM. The captured images are stored in the filesystem under the /usr/share/camera-images/ folder.
  8. The camera and display drivers share video buffers using the dmabuf protocol. The capture driver writes captured content to a buffer which is read directly by the display driver, without copying the content to another buffer (zero copy involved).
  9. The application also demonstrates allocating the buffer from either the omap_bo (omapdrm) memory pool or the CMEM buffer pool. The choice between the omap_bo and CMEM memory pools is made at run time on the command line (see the sketch after this list).
  10. If the application needs to do some CPU-based processing on the captured buffer, it is recommended to allocate the buffer from the CMEM buffer pool. The reason is that the omap_bo memory pool does not support cached read operations; because of this, any CPU operation involving a video buffer read will be 10x to 15x slower. The CMEM pool supports cache operations, so CPU operations on captured video buffers are fast.
  11. The application runs in null-DRM/full-screen mode (no window manager such as Wayland/X11) and the linuxfb QPA runs in fbdev mode. This gives the application full control of the DRM resource (DSS) to control and display the video planes.
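
As an illustration of the zero-copy buffer sharing described above, the following minimal sketch (not taken from the demo source; error handling omitted) allocates a buffer from the omapdrm (omap_bo) pool and exports it as a dmabuf file descriptor that can then be queued to the V4L2 capture driver or attached to a DRM plane; the names follow the libdrm omap API:

/* Illustrative sketch: allocate one buffer from the omapdrm pool and
 * export it as a dmabuf fd for zero-copy sharing between V4L2 and DRM. */
#include <fcntl.h>
#include <omap_drmif.h>          /* libdrm omap API */

int alloc_capture_dmabuf(size_t size)
{
    int drm_fd = open("/dev/dri/card0", O_RDWR);          /* DRM device node */
    struct omap_device *dev = omap_device_new(drm_fd);
    /* OMAP_BO_WC: write-combined, i.e. uncached for CPU reads */
    struct omap_bo *bo = omap_bo_new(dev, size, OMAP_BO_WC);
    return omap_bo_dmabuf(bo);   /* fd usable with V4L2_MEMORY_DMABUF */
}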

Instructions to run dual camera demo

  • Since the application needs control of the DRM resource (DSS) and there can be only one DRM master, make sure that Wayland/Weston is not running.

#/etc/init.d/weston stop

  • Run the dual camera application

#dual-camera -platform linuxfb <0/1>

  • When the last argument is set to 0, the capture and display drivers allocate memory from the omap_bo pool.
  • When it is set to 1, the buffers are allocated from the CMEM pool.

5.3.2. Video Analytics

Overview

The Video Analytics demo shipped with the Processor SDK Linux for AM57xx showcases how a Linux application running on the Cortex-A15 cluster can take advantage of the C66x DSP and 3D SGX hardware acceleration blocks to process a real-time camera input feed and render the processed output on a display, all using open programming paradigms such as OpenCV, OpenCL, OpenGL, and Qt, and standard Linux drivers for capture/display.

A hand gesture is detected using morphological operators from the OpenCV library. The gesture is used to control a parameter (wind speed) used in a physical simulation of water waves. The result of the simulation is visualized in real time via 3D rendering of the water surface. Hardware IP blocks such as the IVAHD and M4 cores are not utilized in this demo.

Setup

In order to re-create the demo, the user needs a standard AM572x GP EVM and the Processor SDK Linux package. The demo is integrated into the Matrix GUI and can be launched by touching the “Video Analytics” icon on the LCD. The sources of the demo are packaged in the Processor SDK Linux installer under the “example-applications” folder and can be compiled using the SDK's top-level Makefile.

Building Blocks

  • Camera Capture: The camera module on the AM572x GP EVM acquires color frames at 640x480 resolution. The images are received by the OpenCV framework using the camera capture class, which depends on the standard V4L2 Linux driver (/dev/video1).
  • Gesture Recognition: The first 30 frames captured are used to estimate the background, which is later subtracted to extract the hand contour, using the erode and dilate operations available in the OpenCV library. Analysis of the largest contour is performed to find the convex hull. Hand gesture classification is based on the ratio of the outer and inner contours. Several discrete values are sent to the wave simulation algorithm.
  • Wave Simulation: Wave simulation is done in two stages: calculation of the initial (t=0) Phillips spectrum, followed by spectrum updates at each time instant. The height map is generated in both steps using a 2D IFFT (of the spectrum). The gesture inputs are used to vary the wind speed for the wave simulation.
  • Graphics Rendering: Finally, 2D vertex grid is generated with the above height map. Fragment rendering uses user-defined texture and height dependent coloring.
  • Display: The displayed content includes the camera feed, overlaid with the contour detection, and the generated water surface.

Block diagram


../_images/Va_demo_diagram.png

Demo Internals

Programming Paradigm

  • Code running on A15 Linux platform is written in C++ and links with QT5, OpenCV and OpenCL host side libraries.
  • Code running on DSP C66x is written in OpenCL C
  • Code running on GPU is written in GLSL.
  • Standard V4L2 Linux driver is used for camera capture
  • The demo uses OpenCV 3.1, OpenCL 1.2, OpenGL ES 2.0/GLSL and QT 5.5

OpenCV (running on ARM A15s)

FLOW: camera color frames from V4L2 driver -> OpenCV based algorithms -> single float converted to 4 discrete values

The video analytics functionality includes a simple algorithm to extract the contours of a hand and detect how widely the hand is opened. A parameter indicating the ratio between the convex hull and the contour is used to control the physical simulation of waves. This output argument is converted to a wind speed in the 2.5-10 m/s range (4 discrete values). Initially the image background is estimated (over 30 frames) using MOG2 (Mixture Of Gaussians), a standard OpenCV algorithm. Later, the background is subtracted from the camera frames before the morphological operations erode and dilate are performed. The next two steps are contour detection (which is also presented in the camera view window) and convex hull estimation. The ratio between the outer and inner contours (in the 1.1-1.8 range) can be correlated with the number of fingers and how widely they are spread. If they are more widely spread, the parameter is converted to a higher wind speed and the waves become higher.
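
The shape of that OpenCV pipeline is sketched below (illustrative only, not the demo source; the thresholds, iteration counts and capture setup are placeholders):

// Illustrative C++ sketch of the gesture pipeline: MOG2 background
// subtraction, morphology, contour extraction and a hull/contour ratio.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

float gesture_ratio(cv::VideoCapture &cap)
{
    cv::Ptr<cv::BackgroundSubtractorMOG2> mog2 = cv::createBackgroundSubtractorMOG2();
    cv::Mat frame, fg;
    for (int i = 0; i < 30; ++i) {        // first frames estimate the background
        cap >> frame;
        mog2->apply(frame, fg);
    }
    cap >> frame;
    mog2->apply(frame, fg, 0);            // learning rate 0: keep the background fixed
    cv::erode(fg, fg, cv::Mat(), cv::Point(-1, -1), 2);
    cv::dilate(fg, fg, cv::Mat(), cv::Point(-1, -1), 2);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(fg, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty())
        return 1.0f;

    // the largest contour is assumed to be the hand
    std::vector<cv::Point> largest = *std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point> &a, const std::vector<cv::Point> &b)
        { return cv::contourArea(a) < cv::contourArea(b); });

    std::vector<cv::Point> hull;
    cv::convexHull(largest, hull);
    // the hull/contour area ratio grows as the fingers spread apart
    return (float)(cv::contourArea(hull) / cv::contourArea(largest));
}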

OpenCL (running on ARM A15s + C66x)

FLOW: 4 discrete values of wind speed (2.5, 4, 7, 10m/s)  -> OpenCL based algorithms -> 256x256 float height map matrix (-1, ..., +1 range of vertex heights)

Wave surface simulation is based on the generation of a Phillips (2D) spectrum followed by a 2D IFFT (256x256). Both operations are executed via OpenCL offload to a single DSP core. Many details of this algorithm can be found in graphics.ucsd.edu/courses/rendering/2005/jdewall/tessendorf.pdf. This stage is controlled by the wind speed (output of the gesture detection algorithm) using a fixed wind direction (an idea for improvement: the wind direction could also be controlled by the hand gesture). A sketch of the host-side dispatch appears after the list below.

A height map in the form of a 256x256 float matrix is generated as output and used by the OpenGL-based vertex renderer (performed in the next step). Wave surface simulation consists of two steps:

  • initial setup defining starting conditions (wind speed and wind direction are used as input in this stage only)
  • update of the wave surface height map (modification of the Phillips spectrum generated at t=0 along the time axis, and a 2D IFFT for each time step).
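
A minimal sketch of the host-side OpenCL dispatch, using the standard OpenCL 1.2 C++ bindings (the kernel name, arguments and work sizes here are illustrative, not the demo's actual interface; on AM57xx the C66x DSPs are exposed as ACCELERATOR devices):

// Illustrative host-side dispatch of a wave-update kernel to the DSP.
#include <CL/cl.hpp>
#include <string>
#include <vector>

std::vector<float> update_height_map(const std::string &kernelSrc, int N, float windSpeed)
{
    std::vector<cl::Platform> platforms;
    cl::Platform::get(&platforms);
    std::vector<cl::Device> devices;
    platforms[0].getDevices(CL_DEVICE_TYPE_ACCELERATOR, &devices);   // C66x DSP

    cl::Context ctx(devices);
    cl::CommandQueue queue(ctx, devices[0]);
    cl::Program program(ctx, kernelSrc, true);        // build the OpenCL C source for the DSP
    cl::Kernel kernel(program, "update_spectrum");    // hypothetical kernel name

    std::vector<float> height(N * N);
    cl::Buffer out(ctx, CL_MEM_WRITE_ONLY, height.size() * sizeof(float));
    kernel.setArg(0, out);
    kernel.setArg(1, windSpeed);
    queue.enqueueNDRangeKernel(kernel, cl::NullRange, cl::NDRange(N, N));
    queue.enqueueReadBuffer(out, CL_TRUE, 0, height.size() * sizeof(float), height.data());
    return height;
}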

OpenGL ES (running on ARM A15s + GPU/SGX)

FLOW: 256x256 float height map matrix + fixed texture -> OpenGL ES based algorithm -> rendered frame buffers

OpenGL ES is a subset of the OpenGL used on desktop devices. The important difference (for this project) is the requirement to use vertex buffers and only triangle strips. Qt provides wrapper functions (QOpenGL functions) intended to hide the differences between OpenGL versions and to slightly simplify programming. On the downside, this involves Qt-specific syntax (not far from the original OpenGL ES functions). The height map data (256x256, but sub-sampled by 4x4, hence 64x64 vertices) received from the previous stage are rendered with specific coloring and a user-supplied JPEG image. The fragment shader mixes the texture with a color derived from the interpolated height, giving a more natural visualization. Currently lighting effects are not used (implementing them could significantly improve the quality of the rendering).

QT 5 (running on ARM A15)

FLOW: user input (mouse and keyboard) -> QT application -> windowing system and control of above threads

qt-opencv-multithreaded (github.com/devernay/qt-opencv-multithreaded) is a skeleton GUI application providing camera input and some basic image processing algorithms. The additional algorithm (with respect to the baseline) is the hand gesture algorithm, which starts the OpenCV thread and the OpenCL simulation thread. The wave surface window (detached, 600x400) appears only if a hand is put (at 2-3 ft) in front of the EVM camera. The intensity of the waves is defined by how much the fingers are spread.
Please wait 2-3 seconds after the start of gesture detection before putting a hand in front of the camera, to allow good background estimation. The wave display window can be closed by pressing [x] in the top right corner, and some other algorithm selected or gesture detection restarted (e.g. to estimate a new background).

Directory Structure

The functionality is organized as shown in the files/descriptions below.

  file name description
1 CameraConnectDialog.cpp/CameraConnectDialog.h  
2 CameraGrab.cpp/CameraGrab.h Auxiliary camera frame acquisition functions to achieve full FPS
3 CameraView.cpp/CameraView.h Major class instantiated after connecting to camera. This class creates processing thread, wavesimulation thread and also instantiates wave display (3D graphics) widget.
4 CaptureThread.cpp/CaptureThread.h Input (camera) frame buffering.
5 FrameLabel.cpp/FrameLabel.h  
6 GeometryEngine.cpp/GeometryEngine.h Height map mesh creation, vertex updates
7 Gesture.cpp/Gesture.h Hand gesture (OpenCV) detection algorithm
8 ImageProcessingSettingsDialog.cpp/ImageProcessingSettingsDialog.h Settings of parameters used by image processing algorithms.
9 main.cpp main function
10 MainWindow.cpp/MainWindow.h  
11 MatToQImage.cpp/MatToQImage.h Conversion from OpenCV Mat object to QT QImage object
12 ProcessingThread.cpp/ProcessingThread.h Main processing thread, frame rate dynamics, invokes various image processing algorithms
13 SharedImageBuffer.cpp/SharedImageBuffer.h  
14 WaveDisplayWidget.cpp/WaveDisplayWidget.h Wave surface 3D rendering (invokes methods from GeometryEngine.cpp)
15 WaveSimulationThread.cpp/WaveSimulationThread.h Wave surface physical simulation thread - host side of OpenCL dispatch.
16 Buffers.h  
17 Structures.h  
18 Config.h  
19 phillips.cl DSP OpenCL phillips spectrum generation kernels and 2D IFFT kernel (invoking dsplib.ae66 optimized 1D FFT). After compilation (by clocl) phillips.dsp_h is generated, and included in WaveSimulationThread.cpp (ocl kernels are compiled and downloaded in this thread, before run-time operation is started).
20 vshader.glsl Vertex shader (gets projection matrix, position and texture position as input arguments; generates texture coordinate and height for fragment shader
21 fshader.glsl Fragment shader doing linear interpolation of textures and mixing texture with height dependent color, and adds ambient light
22 shaders.qrc Specify shader filenames
23 textures.qrc Specify texture file (2D block which is linearly interpolated in fragment shader, using position arguments provided by vertex shader)
24 qt-opencv-multithreaded.pro Top level QT make file: phi
25 ImageProcessingSettingsDialog.ui User interface definition file for modification of algorithm parameters.
26 CameraView.ui Camera view user interface definition file - right click mouse action brings up image processing algorithm selection
27 CameraConnectDialog.ui  
28 MainWindow.ui  

Performance

The hand gesture detection/wave surface simulation/wave surface rendering demo pipeline runs at 18-20 fps. For other algorithms (e.g. smoothing, canny) the pipeline runs at 33-35 fps.

Licensing

The demo code is provided under BSD/MIT License

FAQ/Known Issues

  • Brighter lighting conditions are necessary for noise-free camera input, to allow good contour detection. In poor lighting conditions, there would be false or no detection.
  • OpenCV 3.1 version shows low FPS rate for Camera Capture. Hence, a custom solution based on direct V4L2 ioctl() calls is adopted (cameraGrab.cpp file) to boost the FPS

5.3.3. DLP 3D Scanner

This demo demonstrates an embedded 3D scanner based on the structured light principle, with am57xx. More details can be found at https://www.ti.com/tool/tidep0076

5.3.4. People Tracking

This demo demonstrates the capability of people tracking and detection with TI’s ToF (Time-of-Flight) sensor. More details can be found at https://www.ti.com/lit/pdf/tidud06

5.3.5. Barcode Reader

Introduction

Detecting 1D and 2D barcodes in an image and decoding those barcodes are important machine-vision use cases. Processor SDK Linux integrates the following open-source components, along with examples demonstrating both features.

  • Barcode detection: OpenCV
  • Barcode Decoder/Reader: Zbar Library

OpenCV includes a Python wrapper to allow quick and easy prototyping. It also includes support for OpenCL offload on devices with a C66 DSP core (currently the OpenCV T-API cannot be used with the Python wrapper).


Zbar Barcode Decoder/Reader

Recipes for the zbar barcode reader have been added to build the zbar library and a test binary. Zbar is a standalone library that does not depend on OpenCV. The current release is not accelerated via OpenCL dispatch (obvious candidates are the zbar_scan_y() and calc_thresh() functions, which consume more than 50% of the CPU resources).

Command to run the zbar test binary:

barcode_zbar [barcode_image_name]
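
For reference, a minimal sketch of decoding a grayscale image with the zbar C++ API is shown below. The shipped barcode_zbar test binary may be structured differently; using OpenCV imread here is simply a convenient way to load the image.

// Minimal sketch: decode all barcodes in a grayscale image with zbar.
#include <iostream>
#include <opencv2/opencv.hpp>
#include <zbar.h>

using namespace zbar;

int main(int argc, char** argv)
{
    // Load the barcode image as 8-bit grayscale (any loader would do)
    cv::Mat gray = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);

    ImageScanner scanner;
    scanner.set_config(ZBAR_NONE, ZBAR_CFG_ENABLE, 1);   // enable all symbologies

    // Wrap the grayscale buffer in a zbar image ("Y800" = 8-bit gray)
    Image image(gray.cols, gray.rows, "Y800", gray.data,
                gray.cols * gray.rows);
    scanner.scan(image);

    // Print every decoded symbol (type and data)
    for (Image::SymbolIterator symbol = image.symbol_begin();
         symbol != image.symbol_end(); ++symbol)
        std::cout << symbol->get_type_name() << ": "
                  << symbol->get_data() << std::endl;
    return 0;
}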

Barcode Region Of Interest (ROI) Detection with OpenCV and Python

Detecting Barcodes in Images using Python and OpenCV provides Python scripts written for OpenCV 2.4.x. For use with Processor SDK Linux, which includes OpenCV 3.1, the original scripts have been modified so that they run with OpenCV 3.1. The modified script, detect_barcode.py, is shown below.

# import the necessary packages
import numpy as np
import argparse
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required = True, help = "path to the image file")
args = vars(ap.parse_args())

# load the image and convert it to grayscale
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# compute the Scharr gradient magnitude representation of the images
# in both the x and y direction
gradX = cv2.Sobel(gray, ddepth = cv2.CV_32F, dx = 1, dy = 0, ksize = -1)
gradY = cv2.Sobel(gray, ddepth = cv2.CV_32F, dx = 0, dy = 1, ksize = -1)

# subtract the y-gradient from the x-gradient
gradient = cv2.subtract(gradX, gradY)
gradient = cv2.convertScaleAbs(gradient)

# blur and threshold the image
blurred = cv2.blur(gradient, (9, 9))
(_, thresh) = cv2.threshold(blurred, 225, 255, cv2.THRESH_BINARY)

# construct a closing kernel and apply it to the thresholded image
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 7))
closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)

# perform a series of erosions and dilations
closed = cv2.erode(closed, None, iterations = 4)
closed = cv2.dilate(closed, None, iterations = 4)

# find the contours in the thresholded image, then sort the contours
# by their area, keeping only the largest one
(_, cnts, _) = cv2.findContours(closed.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
c = sorted(cnts, key = cv2.contourArea, reverse = True)[0]

# compute the rotated bounding box of the largest contour
rect = cv2.minAreaRect(c)
box = np.int0(cv2.boxPoints(rect))

# draw a bounding box around the detected barcode and display the
# image
cv2.drawContours(image, [box], -1, (0, 255, 0), 3)
cv2.imshow("Image", image)
cv2.waitKey(0)

Command to run detect_barcode.py: before running the Python script, ensure that the Matrix GUI has been stopped and Weston is up and running. On successful detection, the image is displayed with a green bounding box around the detected barcode.
python detect_barcode.py --image [barcode_image_name]

Barcode Region Of Interest (ROI) Detection with OpenCV and C++ Implementation

The current OpenCV (3.1) Python wrapper does not support the T-API, which is needed for OpenCL dispatch. Processor SDK Linux therefore includes the same algorithm implemented in C++ (https://git.ti.com/apps/barcode-roi-detection), which can be executed on the ARM core only or with DSP acceleration. The C++ example includes more options for various types of input and output, as well as run-time control of OpenCL dispatch.

This example allows multiple command line options:

  • Using static image (JPG or PNG) as input
  • Live display or static image output (JPG or PNG)
  • Use OpenCL offload or not

The target filesystem includes detect_barcode in “/usr/bin” and a test vector in the “/usr/share/ti/image” folder. Again, after successful detection the image, with the barcode in a green bounding box, is displayed or written to an output file. Below are various use cases of detect_barcode.

  • Static image input, no opencl dispatch, live display: detect_barcode sample_barcode.jpg 0 1
  • Static image input, opencl ON, live display: detect_barcode sample_barcode.jpg 1 1
  • Static image input, opencl ON, file output: detect_barcode sample_barcode.jpg 1 image_det.png

The majority of the workload is in the following lines:

ocl::setUseOpenCL(ocl_acc_flag);  /* Turn ON or OFF OpenCL dispatch  */

cvtColor(im_rgb,im_gray,CV_RGB2GRAY);
im_gray.copyTo(img_gray);

Sobel( img_gray, gradX, CV_16S, 1, 0, -1, 1, 0, BORDER_DEFAULT ); /* Input is 8-bit unsigned, output is 16-bit signed */
Sobel( img_gray, gradY, CV_16S, 0, 1, -1, 1, 0, BORDER_DEFAULT ); /* Input is 8-bit unsigned, output is 16-bit signed */
subtract(gradX, gradY, gradient);
convertScaleAbs(gradient, abs_gradient);

// blur and threshold the image
//GaussianBlur( abs_gradient, blurredImg, Size(7,7), 0, 0, BORDER_DEFAULT );
GaussianBlur( abs_gradient, blurredImg, Size(3,3), 0, 0, BORDER_DEFAULT ); /* 3x3 kernel */
threshold(blurredImg, threshImg, 225, 255, THRESH_BINARY);

Mat elementKernel = getStructuringElement( MORPH_RECT, Size( 2*10+1, 2*3+1 ), Point(10, 3));
ocl::setUseOpenCL(false); /* Turn OFF OpenCL dispatch */
morphologyEx( threshImg, closedImg, MORPH_CLOSE, elementKernel );

ocl::setUseOpenCL(ocl_acc_flag);   /* Turn ON or OFF OpenCL dispatch  */
erode(closedImg, img_final, UMat(), Point(-1, -1), 4); /* erode, 4 iterations */
dilate(img_final, img_ocl, UMat(), Point(-1, -1), 4);  /* dilate, 4 iterations */
ocl::setUseOpenCL(false); /* Turn OFF OpenCL dispatch */

Not all OpenCV kernels can be dispatched to the DSP via OpenCL. Please refer to OpenCV#OpenCL_C_C66_DSP_kernels for the list of kernels that are currently DSP-accelerated.

In order to use OpenCL dispatch, it is necessary to:

  • Enable OpenCL use (by setting environment variables and invoking ocl::setUseOpenCL(ocl_acc_flag))
  • Use the T-API: e.g., replace Mat types with UMat types (a minimal sketch follows)
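
A minimal sketch of these two steps is shown below. It is not the demo code itself, only an illustration; the command-line handling and output file name are arbitrary, and the environment variables required for DSP offload are assumed to be set already.

// Minimal sketch: run-time OpenCL dispatch control plus the T-API (UMat).
#include <opencv2/opencv.hpp>
#include <opencv2/core/ocl.hpp>
#include <cstdlib>

int main(int argc, char** argv)
{
    // Step 1: turn OpenCL dispatch on or off at run time (flag from the command line)
    bool ocl_acc_flag = (argc > 2) && std::atoi(argv[2]);
    cv::ocl::setUseOpenCL(ocl_acc_flag);

    // Step 2: use the T-API by holding data in UMat instead of Mat
    cv::UMat src, gray, blurred;
    cv::imread(argv[1], cv::IMREAD_COLOR).copyTo(src);   // copy the loaded Mat into a UMat

    cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);         // candidates for OpenCL dispatch
    cv::GaussianBlur(gray, blurred, cv::Size(3, 3), 0);

    cv::imwrite("out.png", blurred);
    return 0;
}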

5.3.6. EVSE Demos

This demo showcases a Human Machine Interface (HMI) for Electric Vehicle Supply Equipment (EVSE) Charging Stations. More details can be found at https://www.ti.com/tool/TIDEP-0087

5.3.7. Protection Relay Demo

The Matrix GUI provides an out-of-box demo to showcase a Human Machine Interface (HMI) for Protection Relays. More details can be found at https://www.ti.com/tool/TIDEP-0102

5.3.8. Qt5 Thermostat HMI Demo

Qt-based Thermostat HMI demo

The morphing of devices like the basic thermostat into a breed of power-smart thermostats has shown how the appliances in residences today must adapt, re-imagine and, in some cases, reinvent their role in the connected home of the future or risk being left behind. Thermostats in particular have come a long way since they were first created. There was a time when the mechanical dial thermostat was the only option. It was simple and intuitive to operate because what you saw was what you got. The user simply set the temperature and walked away. Unfortunately, it was not efficient: energy and money could be wasted since its settings would only change when someone manually turned the dial to a new temperature. Modern thermostats provide a much richer graphical interface with several features. Processor SDK now includes a Qt5-based thermostat demo with the following key features, which customers can easily use as a starting point for further innovation.

  • Display a three-day weather forecast and daily temperature range for the selected city from openweathermap.org
  • Display and adjust room temperature
  • Pop-up menu to select the city, temperature format, and network proxy
  • Weekly temperature schedule
  • Away energy-saving mode

../_images/qt5-thermostat-Picture4.png

Figure 1: Three-day weather forecast


../_images/qt5-thermostat-Picture2.png

Figure 2: Select city for weather forecasts


../_images/qt5-thermostat-Picture3.png

Figure 3: Set proxy for Internet connectivity


../_images/qt5-thermostat-Picture1.png

Figure 4: Weekly temperature schedule


../_images/qt5-thermostat-Picture5.png

Figure 5: Inside temperature and Away energy-saving settings


The demo is hosted at https://git.ti.com/apps/thermostat-demo, and the sources are also located at

<SDK-install-dir>/example-applications/qt-tstat-2.0/

The code can be compiled and installed using the top-level SDK Makefile:

make qt-tstat-clean
make qt-tstat
make qt-tstat-install

5.3.9. Optical Flow with OpenVX

5.3.9.1. OpenVX Example Block Diagram

The OpenVX tutorial example demonstrates optical flow based on image pyramids and Harris corner tracking. It builds upon TIOVX and utilizes OpenCV for reading the input (from a file or camera) and rendering the output for display. Each input frame from OpenCV invokes the OpenVX graph, so processing is done once per input frame. OpenVX defines the graph topology and all configuration details; all resources are allocated and initialized before processing starts.

../_images/OpenVx-Example-Block-Diagram.png
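
The sketch below illustrates this pattern: the graph is built and verified once, then vxProcessGraph() is called once per frame. The node used here (Gaussian 3x3), the image size and the frame count are placeholders, not the actual optical-flow graph built by tiovx-opticalflow.

// Minimal sketch of the OpenVX build-once / process-per-frame pattern.
#include <VX/vx.h>

int main()
{
    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

    vxGaussian3x3Node(graph, input, output);    // add one processing node

    if (vxVerifyGraph(graph) == VX_SUCCESS) {   // resources allocated/validated once
        for (int frame = 0; frame < 100; frame++) {
            // ... write the camera/file frame into 'input' (e.g. vxCopyImagePatch) ...
            vxProcessGraph(graph);              // one graph execution per input frame
            // ... read results back from 'output' for display ...
        }
    }

    vxReleaseImage(&input);
    vxReleaseImage(&output);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}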

5.3.9.2. Run OpenVX Tutorial Example

The binary for OpenVX tutorial example is located at /usr/bin/tiovx-opticalflow. It is a statically linked Linux application running on Arm.

Before running the tutorial example, download the test clip and copy it to the target filesystem (e.g., ~/tiovx). Then, execute the commands below to load the OpenVX firmware and run the optical flow example.

reload-dsp-fw.sh tiovx                                       # load openvx firmware and restart dsps
tiovx-opticalflow /home/root/tiovx/PETS09-S1-L1-View001.avi  # Run tutorial example

Screen capture after running the optical flow example:

../_images/OpenVx-Example-Screen-Shot.png

Logs:

root@am57xx-evm:~# tiovx-opticalflow /home/root/PETS09-S1-L1-View001.avi
 VX_ZONE_INIT:Enabled
 VX_ZONE_ERROR:Enabled
 VX_ZONE_WARNING:Enabled
VSPRINTF_DBG:SYSTEM NOTIFY_INIT: starting
VSPRINTF_DBG: SYSTEM: IPC: Notify init in progress !!!

VSPRINTF_DBG:[0] DUMP ALL PROCS[3]=DSP1
VSPRINTF_DBG:[1] DUMP ALL PROCS[4]=DSP2
VSPRINTF_DBG:Next rx queue to open:QUE_RX_HOST

VSPRINTF_DBG:Just created MsgQue
VSPRINTF_DBG:Created RX task
VSPRINTF_DBG:Next tx queue to open:QUE_RX_DSP1, procId=3
VSPRINTF_DBG:Next tx queue to open:QUE_RX_DSP2, procId=4
VSPRINTF_DBG:Dump all TX queues: procId=3 name=QUE_RX_DSP1 queId=262272, msgSize=68, heapId=0
VSPRINTF_DBG:Dump all TX queues: procId=4 name=QUE_RX_DSP2 queId=196736, msgSize=68, heapId=0
VSPRINTF_DBG:SYSTEM: IPC: SentCfgMsg, procId=3 queuId=262272
VSPRINTF_DBG:SYSTEM: IPC: SentCfgMsg, procId=4 queuId=196736
VSPRINTF_DBG: SYSTEM: IPC: Notify init DONE !!!

VSPRINTF_DBG:>>>> ipcNotifyInit returned: 0

VSPRINTF_DBG:SYSTEM NOTIFY_INIT: done
OK: FILE /home/root/PETS09-S1-L1-View001.avi 768x480
init done
Using Wayland-EGL
wlpvr: PVR Services Initialised
LOG: [ status = -1 ] Hello there!

Run the optical flow example with camera input:

reload-dsp-fw.sh tiovx        # load openvx firmware and restart dsps
tiovx-opticalflow             # Run tutorial example with camera input

Logs:

root@am57xx-evm:~# tiovx-opticalflow
 VX_ZONE_INIT:Enabled
 VX_ZONE_ERROR:Enabled
 VX_ZONE_WARNING:Enabled
VSPRINTF_DBG:SYSTEM NOTIFY_INIT: starting
VSPRINTF_DBG: SYSTEM: IPC: Notify init in progress !!!

VSPRINTF_DBG:[0] DUMP ALL PROCS[3]=DSP1
VSPRINTF_DBG:[1] DUMP ALL PROCS[4]=DSP2
VSPRINTF_DBG:Next rx queue to open:QUE_RX_HOST

VSPRINTF_DBG:Just created MsgQue
VSPRINTF_DBG:Created RX task
VSPRINTF_DBG:Next tx queue to open:QUE_RX_DSP1, procId=3
VSPRINTF_DBG:Next tx queue to open:QUE_RX_DSP2, procId=4
VSPRINTF_DBG:Dump all TX queues: procId=3 name=QUE_RX_DSP1 queId=262272, msgSize=68, heapId=0
VSPRINTF_DBG:Dump all TX queues: procId=4 name=QUE_RX_DSP2 queId=196736, msgSize=68, heapId=0
VSPRINTF_DBG:SYSTEM: IPC: SentCfgMsg, procId=3 queuId=262272
VSPRINTF_DBG:SYSTEM: IPC: SentCfgMsg, procId=4 queuId=196736
VSPRINTF_DBG: SYSTEM: IPC: Notify init DONE !!!

VSPRINTF_DBG:>>>> ipcNotifyInit returned: 0

VSPRINTF_DBG:SYSTEM NOTIFY_INIT: done
OK: CAMERA#1 640x480
init done
Using Wayland-EGL
wlpvr: PVR Services Initialised
LOG: [ status = -1 ] Hello there!

After running the OpenVX tutorial example, switch the firmware back to the default OpenCL firmware:

reload-dsp-fw.sh opencl        # load opencl firmware and restart dsps

5.3.10. ROS and Radar

5.3.10.1. Introduction

ROS is a meta operating system running on top of Linux. It is a collection of software libraries and packages that help you write robotic applications. Both mobile platforms and static manipulators can leverage a wide collection of open-source drivers and tools. The ROS framework is mainly based on a publisher-subscriber model, and in some cases a server-client model.

ROS-based applications frequently require sensors that enable interaction with the environment. Various types of sensors can be used: cameras, ultrasound, time-of-flight RGB-D cameras, lidars, and so on.

In the case of 3D sensors, the typical output is a point cloud. The consumer of this information (e.g., for navigation) is decoupled from the point cloud producer since the format is well defined. Due to the modular nature of ROS, it is easy to replace one sensor with another as long as the format of the information (in this case a point cloud) stays the same, as the minimal subscriber sketch below illustrates.
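
As a minimal illustration of this decoupling, the roscpp sketch below subscribes to the point cloud topic published by the mmWave driver used later in this section (/mmWaveDataHdl/RScan); the node name and queue size are arbitrary choices, and any producer publishing sensor_msgs/PointCloud2 could feed the same consumer.

// Minimal roscpp sketch: consume a PointCloud2 topic regardless of which sensor produces it.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

void scanCallback(const sensor_msgs::PointCloud2::ConstPtr& cloud)
{
    // Process the point cloud here (navigation, mapping, logging, ...)
    ROS_INFO("Received point cloud: %u x %u points", cloud->width, cloud->height);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "radar_cloud_listener");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/mmWaveDataHdl/RScan", 10, scanCallback);
    ros::spin();   // process callbacks until shutdown
    return 0;
}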

An important type of 3D sensor, especially suited for outdoor use cases, is mmWave radar (77-81 GHz). For this demo an IWR/AWR-1443 or 1642 EVM is needed. It is connected to the Sitara device over a USB connection, creating a virtual UART.

Optionally, to make this platform movable, a Kobuki mobile base can be added. The Sitara EVM and Radar EVM would be attached to the Kobuki, and the Sitara EVM running ROS would control the Kobuki movement via a USB connection. Please note that the mobile base is not essential for verifying ROS operation on Sitara plus the Radar EVM.

5.3.10.2. HW Setup

../_images/am572x-evm.png ../_images/radar-evm-1443.png
  • USB to micro-USB cable (connecting the USB connector on the Sitara side with the micro-USB connector on the Radar EVM)
  • [optional] Kobuki mobile base (as used by Turtlebot2), http://kobuki.yujinrobot.com/

Kobuki mobile base with Sitara and Radar EVMs

Compatibility

The ti_mmwave_rospkg ROS driver package on Sitara is tested with Processor SDK Linux 4.3, which includes the meta-ros layer (Indigo, from https://github.com/bmwcarit/meta-ros). For visualization we used the ROS Indigo distro on an Ubuntu Linux box, preferred for compatibility reasons. Please follow the installation instructions (for the Linux box) at https://wiki.ros.org/indigo/Installation/Ubuntu. For this demo, the IWR EVM requires mmWave SDK firmware. If different firmware is used on the Radar EVM, please follow the procedure using the UniFlash tool to install the mmWave SDK firmware.

5.3.10.3. ROS configuration verification

ROS is part of the PLSDK 4.3 target filesystem, including the mmWave ROS driver, so no additional installation steps are required. ROS is installed in the /opt/ros/indigo folder. Only configuration related to the specific IP address of the target EVM and the Ubuntu x86 host IP address is needed. ROS is distributed, with a single ROS host acting as a broker for all inter-node transactions; it runs the roscore node, and in this case roscore is executed on Sitara. The Linux box will only run the ROS RViz node, since RViz requires desktop OpenGL support (Sitara only supports OpenGL ES 2.0).

Note

If visualization using RViz is not needed, the Linux box is not necessary for this demo (except to start multiple SSH terminals).

Reconfigure PLSDK for Python3

PLSDK includes ROS packages from the meta-ros layer, compiled with Python3 support (build/conf/local.conf: ROS_USE_PYTHON3 = “yes”). As the PLSDK default Python setting is Python 2.7, a filesystem update is required for the ROS tests to run:

root@am57xx-evm:/usr/bin# rm python.python
root@am57xx-evm:/usr/bin# rm python-config.python
root@am57xx-evm:/usr/bin# ln -s python3 python.python
root@am57xx-evm:/usr/bin# ln -s python3-config python-config.python

5.3.10.4. ROS between distributed nodes (Sitara and LinuxBox)

1st SSH terminal, to Sitara EVM

Modify /opt/ros/indigo/setup.bash

export ROS_ROOT=/opt/ros/indigo
export PATH=$PATH:/opt/ros/indigo/bin
export LD_LIBRARY_PATH=/opt/ros/indigo/lib
export PYTHONPATH=/usr/lib/python3.5/site-packages:/opt/ros/indigo/lib/python3.5/site-packages
export ROS_MASTER_URI=http://$SITARA_IP_ADDR:11311
export ROS_IP=$SITARA_IP_ADDR
export ROS_HOSTNAME=$SITARA_IP_ADDR
export CMAKE_PREFIX_PATH=/opt/ros/indigo
export ROS_PACKAGE_PATH=/opt/ros/indigo/share
touch /opt/ros/indigo/.catkin

Then, execute

source /opt/ros/indigo/setup.bash
roscore

2nd SSH terminal, to Sitara EVM

source /opt/ros/indigo/setup.bash
rosrun roscpp_tutorials talker

You will see a log similar to the following:

....[ INFO] [1516637959.231163685]: hello world 5295
[ INFO] [1516637959.331163994]: hello world 5296
[ INFO] [1516637959.431165605]: hello world 5297
[ INFO] [1516637959.531161359]: hello world 5298
[ INFO] [1516637959.631162807]: hello world 5299
[ INFO] [1516637959.731166207]: hello world 5300
[ INFO] [1516637959.831215641]: hello world 5301
[ INFO] [1516637959.931165361]: hello world 5302
[ INFO] [1516637960.031165019]: hello world 5303
[ INFO] [1516637960.131164027]: hello world 5304

3rd SSH terminal, to Linux BOX (Optional)

export ROS_MASTER_URI=http://$SITARA_IP_ADDR:11311
export ROS_IP=$LINUXBOX_IP_ADDR
export ROS_HOSTNAME=$LINUXBOX_IP_ADDR
source /opt/ros/indigo/setup.bash
rosrun roscpp_tutorials listener

You will see a log similar to the following:

...
data: hello world 5338
---
data: hello world 5339
---
data: hello world 5340
---
data: hello world 5341
---
data: hello world 5342
---
data: hello world 5343
---
data: hello world 5344

5.3.10.5. mmWave ROS node on Sitara and RViz on Linux Box

1st SSH terminal, to Sitara EVM

Start roscore (only if it is not already started):

source /opt/ros/indigo/setup.bash
roscore

2nd SSH terminal, to Sitara EVM

source /opt/ros/indigo/setup.bash
roslaunch  ti_mmwave_rospkg rviz_1443_3d.launch

Change "rviz_1443_3d.launch" to "rviz_1642_2d.launch", based on the Radar EVM type (1443 or 1642).

If the Kobuki mobile base is available, use the command below instead:

roslaunch  ti_mmwave_rospkg plsdk_rviz_1443_3d.launch

A sample log is included below:

source /opt/ros/indigo/setup.bash
roslaunch ti_mmwave_rospkg plsdk_rviz_1443_3d.launch

... logging to /home/root/.ros/log/97dfe396-2711-11e8-bd4a-a0f6fdc25c34/roslaunch-am57xx-evm-7487.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://192.168.0.222:35481/

SUMMARY
========

PARAMETERS
 * /fake_localization/use_map_topic: True
 * /mmWave_Manager/command_port: /dev/ttyACM0
 * /mmWave_Manager/command_rate: 115200
 * /mmWave_Manager/data_port: /dev/ttyACM1
 * /mmWave_Manager/data_rate: 921600
 * /mmWave_Manager/max_allowed_azimuth_angle_deg: 90
 * /mmWave_Manager/max_allowed_elevation_angle_deg: 90
 * /rosdistro: b'<unknown>\n'
 * /rosversion: b'1.11.21\n'

NODES
  /
    fake_localization (fake_localization/fake_localization)
    mmWaveQuickConfig (ti_mmwave_rospkg/mmWaveQuickConfig)
    mmWave_Manager (ti_mmwave_rospkg/ti_mmwave_rospkg)
    octomap_server (octomap_server/octomap_server_node)
    static_tf_map_to_base_radar_link (tf/static_transform_publisher)
    static_tf_map_to_odom (tf/static_transform_publisher)

ROS_MASTER_URI=http://192.168.0.222:11311

core service [/rosout] found
process[mmWave_Manager-1]: started with pid [7505]
process[mmWaveQuickConfig-2]: started with pid [7506]
process[static_tf_map_to_odom-3]: started with pid [7507]
process[static_tf_map_to_base_radar_link-4]: started with pid [7508]
[ INFO] [1520981858.224293205]: mmWaveQuickConfig: Configuring mmWave device using config file: /opt/ros/indigo/share/ti_mmwave_rospkg/cfg/1443_3d.cfg
process[octomap_server-5]: started with pid [7509]
process[fake_localization-6]: started with pid [7517]
[ INFO] [1520981858.367713151]: waitForService: Service [/mmWaveCommSrv/mmWaveCLI] has not been advertised, waiting...
[ INFO] [1520981858.436009564]: Initializing nodelet with 2 worker threads.
[ INFO] [1520981858.480256524]: mmWaveCommSrv: command_port = /dev/ttyACM0
[ INFO] [1520981858.480407967]: mmWaveCommSrv: command_rate = 115200
[ INFO] [1520981858.497923263]: waitForService: Service [/mmWaveCommSrv/mmWaveCLI] is now available.
[ INFO] [1520981858.498667137]: mmWaveQuickConfig: Ignored blank or comment line: '% ***************************************************************'
[ INFO] [1520981858.499059815]: mmWaveQuickConfig: Ignored blank or comment line: '% Created for SDK ver:01.01'
[ INFO] [1520981858.499462577]: mmWaveQuickConfig: Ignored blank or comment line: '% Created using Visualizer ver:1.1.0.1'
[ INFO] [1520981858.505357942]: mmWaveQuickConfig: Ignored blank or comment line: '% Frequency:77'
[ INFO] [1520981858.506164932]: mmWaveQuickConfig: Ignored blank or comment line: '% Platform:xWR14xx'
[ INFO] [1520981858.506843089]: mmWaveQuickConfig: Ignored blank or comment line: '% Scene Classifier:best_range_res'
[ INFO] [1520981858.507514414]: mmWaveQuickConfig: Ignored blank or comment line: '% Azimuth Resolution(deg):15 + Elevation'
[ INFO] [1520981858.508289684]: mmWaveQuickConfig: Ignored blank or comment line: '% Range Resolution(m):0.044'
[ INFO] [1520981858.508999398]: mmWaveQuickConfig: Ignored blank or comment line: '% Maximum unambiguous Range(m):9.01'
[ INFO] [1520981858.509816310]: mmWaveQuickConfig: Ignored blank or comment line: '% Maximum Radial Velocity(m/s):5.06'
[ INFO] [1520981858.510520982]: mmWaveQuickConfig: Ignored blank or comment line: '% Radial velocity resolution(m/s):0.64'
[ INFO] [1520981858.518476684]: mmWaveQuickConfig: Ignored blank or comment line: '% Frame Duration(msec):33.333'
[ INFO] [1520981858.519262364]: mmWaveQuickConfig: Ignored blank or comment line: '% Range Detection Threshold (dB):9'
[ INFO] [1520981858.519957764]: mmWaveQuickConfig: Ignored blank or comment line: '% Range Peak Grouping:disabled'
[ INFO] [1520981858.520157681]: mmWaveDataHdl: data_port = /dev/ttyACM1
[ INFO] [1520981858.520252841]: mmWaveDataHdl: data_rate = 921600
[ INFO] [1520981858.520315142]: mmWaveDataHdl: max_allowed_elevation_angle_deg = 90
[ INFO] [1520981858.520375654]: mmWaveDataHdl: max_allowed_azimuth_angle_deg = 90
[ INFO] [1520981858.520943849]: mmWaveQuickConfig: Ignored blank or comment line: '% Doppler Peak Grouping:disabled'
[ INFO] [1520981858.521671945]: mmWaveQuickConfig: Ignored blank or comment line: '% Static clutter removal:disabled'
[ INFO] [1520981858.522412729]: mmWaveQuickConfig: Ignored blank or comment line: '% ***************************************************************'
[ INFO] [1520981858.523396537]: mmWaveQuickConfig: Sending command: 'sensorStop'
[ INFO] [1520981858.533674630]: mmWaveCommSrv: Sending command to sensor: 'sensorStop'
[ INFO] [1520981858.536083724]: DataUARTHandler Read Thread: Port is open
[ INFO] [1520981858.548926257]: mmWaveCommSrv: Received response from sensor: 'sensorStop
Done
mmwDemo:/>'
[ INFO] [1520981858.550875817]: mmWaveQuickConfig: Command successful (mmWave sensor responded with 'Done')
[ INFO] [1520981858.551745758]: mmWaveQuickConfig: Sending command: 'flushCfg'
[ INFO] [1520981858.559882020]: mmWaveCommSrv: Sending command to sensor: 'flushCfg'
[ INFO] [1520981858.562726084]: mmWaveCommSrv: Received response from sensor: 'flushCfg
Done
mmwDemo:/>'
[ INFO] [1520981858.564378289]: mmWaveQuickConfig: Command successful (mmWave sensor responded with 'Done')
[ INFO] [1520981858.565240748]: mmWaveQuickConfig: Sending command: 'dfeDataOutputMode 1'
[ INFO] [1520981858.573026625]: mmWaveCommSrv: Sending command to sensor: 'dfeDataOutputMode 1'
[ INFO] [1520981858.576915985]: mmWaveCommSrv: Received response from sensor: 'dfeDataOutputMode 1
Done
mmwDemo:/>'
...
mmwDemo:/>'
[ INFO] [1520981858.776118886]: mmWaveQuickConfig: Command successful (mmWave sensor responded with 'Done')
[ INFO] [1520981858.776938726]: mmWaveQuickConfig: Sending command: 'compRangeBiasAndRxChanPhase 0.0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0'
[ INFO] [1520981858.782736816]: mmWaveCommSrv: Sending command to sensor: 'compRangeBiasAndRxChanPhase 0.0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0'
[ INFO] [1520981858.792102024]: mmWaveCommSrv: Received response from sensor: 'compRangeBiasAndRxChanPhase 0.0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
Done
mmwDemo:/>'
[ INFO] [1520981858.793846462]: mmWaveQuickConfig: Command successful (mmWave sensor responded with 'Done')
[ INFO] [1520981858.794657355]: mmWaveQuickConfig: Sending command: 'measureRangeBiasAndRxChanPhase 0 1.5 0.2'
[ INFO] [1520981858.800233568]: mmWaveCommSrv: Sending command to sensor: 'measureRangeBiasAndRxChanPhase 0 1.5 0.2'
[ INFO] [1520981858.806256139]: mmWaveCommSrv: Received response from sensor: 'measureRangeBiasAndRxChanPhase 0 1.5 0.2
Done
mmwDemo:/>'
[ INFO] [1520981858.807890614]: mmWaveQuickConfig: Command successful (mmWave sensor responded with 'Done')
[ INFO] [1520981858.808687680]: mmWaveQuickConfig: Sending command: 'sensorStart'
[ INFO] [1520981858.814534734]: mmWaveCommSrv: Sending command to sensor: 'sensorStart'
[ INFO] [1520981858.822598283]: mmWaveCommSrv: Received response from sensor: 'sensorStart
Done
mmwDemo:/>'
[ INFO] [1520981858.824211611]: mmWaveQuickConfig: Command successful (mmWave sensor responded with 'Done')
[ INFO] [1520981858.824545077]: mmWaveQuickConfig: mmWaveQuickConfig will now terminate. Done configuring mmWave device using config file: /opt/ros/indigo/share/ti_mmwave_rospkg/cfg/1443_3d.cfg
[mmWaveQuickConfig-2] process has finished cleanly

3rd SSH terminal, to Sitara EVM

Bring up all ROS components for communicating with and controlling the Kobuki:

source /opt/ros/indigo/setup.bash
roslaunch kobuki_node minimal.launch

4th SSH terminal, to Sitara EVM

Start the Kobuki teleop console (remotely control Kobuki movement using the keyboard):

source /opt/ros/indigo/setup.bash
roslaunch kobuki_keyop safe_keyop.launch

        Operating kobuki from keyboard:
        Forward/back arrows : linear velocity incr/decr.
        Right/left arrows : angular velocity incr/decr.
        Spacebar : reset linear/angular velocities.
        d : disable motors.
        e : enable motors.
        q : quit.

5th SSH terminal, to Linux box

First, install RViz on Ubuntu if this has not been done before:

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list'
wget https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -O - | sudo apt-key add -
sudo apt-get update
sudo apt-get install ros-indigo-desktop-full

Set up the ROS variables on the Linux box (to enable communication with the ROS host on Sitara), then start RViz:

export ROS_MASTER_URI=http://$SITARA_IP_ADDR:11311 (IP address of Sitara EVM, modify as needed)
export ROS_IP=$LINUX_BOX_IP_ADDR (IP address of Ubuntu machine, modify as needed)
export ROS_HOSTNAME=$LINUX_BOX_IP_ADDR (IP address of Ubuntu machine, modify as needed)
source /opt/ros/indigo/setup.bash

rosrun rviz rviz

Alternatively, in order to get the Kobuki avatar on the screen, install kobuki_description on the Linux box and start RViz by launching view_model from kobuki_description:

git clone https://github.com/yujinrobot/kobuki
cd kobuki
cp -r kobuki_description /opt/ros/indigo/share
roslaunch kobuki_description view_model.launch

In RViz add point cloud from mmWave radar:

  • Click Add->PointCloud2
  • Select /mmWaveDataHdl/RScan from the Topic field dropdown for the PointCloud2 on the left hand panel
  • Increase Size to 0.03, increase Decay Time to 0.25, and set Style to “Spheres”.
  • In RViz, select “map” for Fixed Frame in Global Options.
  • If Kobuki is also started, set Reference Frame (left panel) to “map”.

You should see a point cloud image:

../_images/ros_radar_rviz.png

More information can be found in the ROS driver document, in the chapters “Visualizing the data”, “Reconfiguring the chirp profile”, and “How it works”.

Starting a GStreamer pipeline for streaming the front-view camera

It is possible to start a GStreamer pipeline on Sitara and receive the front-camera view on a Linux box or Windows PC using VLC.

gst-launch-1.0 -e v4l2src device=/dev/video1  io-mode=5 ! 'video/x-raw, \
format=(string)NV12, width=(int)640, height=(int)480, framerate=(fraction)30/1' ! ducatih264enc bitrate=1000 ! queue ! h264parse config-interval=1 ! mpegtsmux  ! udpsink host=192.168.0.100 sync=false port=5000

E.g., on a Windows PC (192.168.0.100), you can watch the stream using: “Program Files (x86)\VideoLAN\VLC\vlc.exe” udp://@192.168.0.100:5000


Multiple windows on Linux Box showing ROS RViz, front camera view and external camera view