Vision Apps User Guide
Troubleshooting build and run errors

Troubleshooting build errors

I see the error "fatal error: ion/ion.h: No such file or directory" when building vision_apps

vision_apps needs the PSDKLA target filesystem to be installed as mentioned in Step 2: Download and install PSDKLA and Step 4: Download and install additional dependencies. The tisdk-rootfs-image-j7-evm.tar.xz, when untarred at ${PSDKRA_PATH}/targetfs, will contain the file ion.h at usr/include/ion.

If you haven't installed the target filesystem you will see an error like the below,

[GCC] Compiling C99 app_mem_linux_ion.c
/ti/j7presi/workarea/vision_apps/utils/mem/src/app_mem_linux_ion.c:76:10: fatal error: ion/ion.h: No such file or directory
#include <ion/ion.h>
^~~~~~~~~~~
compilation terminated.
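
To confirm the target filesystem was extracted correctly, you can check that the header is present at the expected location (the path follows the default install location described above):

# should list the ION header provided by the PSDKLA target filesystem
ls ${PSDKRA_PATH}/targetfs/usr/include/ion/ion.h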

How do I install GCC tools for ARM A72 ?

Running the script mentioned in Step 4: Download and install additional dependencies downloads and installs the required GCC tools.

This needs the proxy to be set up as mentioned in Step 3: Proxy Setup.

After installation you should see the GCC compilers at the below paths,

${PSDKRA_PATH}/gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu
${PSDKRA_PATH}/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf
${PSDKRA_PATH}/gcc-linaro-7.2.1-2017.11-x86_64_aarch64-elf
Note
If this step is unsuccessful then you will see compile errors when compiling A72 files.

You can also manually download and install these packages by referring to the download links in setup_psdk_rtos_auto.sh.

If you installed these packages at a different path, then modify the below variables in tiovx/psdkra_tools_path.mak to point to your install folder

GCC_SYSBIOS_ARM_ROOT ?= $(PSDK_PATH)/gcc-linaro-7.2.1-2017.11-x86_64_aarch64-elf
GCC_LINUX_ARM_ROOT ?= $(PSDK_PATH)/gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu
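
A quick way to confirm the A72 Linux cross compiler got installed is to query its version as sketched below. The bin/ sub-folder and the aarch64-linux-gnu- tool prefix follow the standard GCC toolchain layout and are shown here as an assumption.

# print the cross compiler version to confirm the toolchain is usable
${PSDKRA_PATH}/gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu/bin/aarch64-linux-gnu-gcc --version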

What does setup_psdk_rtos_auto.sh do and can I skip executing this ?

It is HIGHLY recommended to NOT skip executing setup_psdk_rtos_auto.sh as mentioned in Step 4: Download and install additional dependencies.

However, some steps within it are optional depending on your use-case, and can be skipped as mentioned below.

The script setup_psdk_rtos_auto.sh does the following,

  • Installs packages on your local Linux machine using "apt-get install". These are required to build PSDKRA, especially in PC emulation mode. This needs sudo permission to install the packages.
    Note
    In case you don't want to install packages with sudo permission, pass the below argument to this script to skip this step (see the example invocation after this list).
    --skip_sudo
    This step can be skipped. Make sure to keep the flag BUILD_EMULATION_MODE=no in tiovx/build_flags.mak, and if any tool is not found during the build, install it separately using "apt-get install".
  • It extracts the PSDKLA target filesystem to the folder ${PSDKRA_PATH}/targetfs. This is needed for some files in PSDKRA to compile and link.

    Note
    This step should NOT be skipped
  • It extracts the PSDKLA boot files to the folder ${PSDKRA_PATH}/bootfs. This is a temporary staging area to hold the prebuilt Linux SPL, u-boot, and other boot related files before they are copied to the SD card.
    Note
    This step should NOT be skipped
  • Downloads and installs GCC compilers for ARM64 and ARM32 for Linux, and ARM64 for TI-RTOS (see How do I install GCC tools for ARM A72 ?)
    Note
    This step should NOT be skipped
  • Installs the "glm" and "glew" tools for PC Linux. These are needed for PC emulation mode ONLY.
    Note
    You can skip this step. Make sure to keep the flag BUILD_EMULATION_MODE=no in tiovx/build_flags.mak.
  • Installs pip3 for python3. This is needed to run the PyTIOVX and PyPSDK_RTOS_TOOLS code generation tools.
    Note
    You can skip this step. It does not affect compiling and running code.
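
For reference, a typical invocation that skips the sudo based package install is sketched below. The script is part of the PSDK RTOS installation; its location at the top level PSDK RTOS folder is an assumption and may differ in your setup.

# run the setup script, skipping the sudo based "apt-get install" step
cd ${PSDKRA_PATH}
./setup_psdk_rtos_auto.sh --skip_sudo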

Troubleshooting run time errors

I don't see any prints on the UART terminal ?

  • Make sure the processor board is connected tightly to the EVM. A loose connection between the processor board and the base board will result in the SoC failing to boot.
  • Make sure the correct UART port is used
  • Make sure the UART settings on the PC are correct (see the example below)
  • Make sure the boot pin selection is correct
  • Make sure the SD card has the boot files

Refer to Step 4: Run on EVM for EVM setup. Refer to Step 1: Prepare SD card for boot (one time only) for SD card setup.
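
For the UART settings mentioned above, TI EVM consoles typically use 115200 baud, 8 data bits, no parity, 1 stop bit. A sample terminal invocation is sketched below; the serial device node is an assumption and depends on your PC.

# open the EVM UART console at 115200 8N1 (adjust /dev/ttyUSB0 to match your PC)
sudo picocom -b 115200 /dev/ttyUSB0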

What is the purpose of the different files in the bootfs partition ?

File              Purpose
tiboot3.bin       R5F SPL, first stage bootloader; ROM boots this file on the MCU R5F (mcu1_0)
sysfw.itb         DMSC firmware; the R5F SPL boots the DMSC FW when it starts
sysfw-psdkra.itb  DMSC firmware used in PSDKRA; same as sysfw.itb except that the MSMC L3 cache is set to 0 bytes, since in PSDKRA TIDL uses MSMC RAM as data RAM instead of as L3 cache
tispl.bin         A72 SPL, which in turn loads u-boot
u-boot.img        A72 u-boot
uenv.txt          Additional configuration given to u-boot; largely this specifies the dtbo's to apply when booting the Linux kernel
uenv.txt.psdkra   The dtbo's that PSDKRA applies when it boots Linux
  • When the steps in Step 1: Prepare SD card for boot (one time only) are followed, the boot files are copied from ${PSDKRA_PATH}/bootfs to the SD card bootfs partition.
  • When make linux_fs_install_sd is called (see the example invocation after this list),
    • the default sysfw.itb is replaced by sysfw-psdkra.itb
    • and the default uenv.txt is replaced by vision_apps/apps/basic_demos/app_linux_fs_files/uenv.txt
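
For reference, the linux_fs_install_sd step referred to above is typically invoked from the host PC as sketched below; the vision_apps location under ${PSDKRA_PATH} is an assumption and depends on where it was installed.

# installs the vision_apps specific filesystem changes (firmware links, sysfw.itb, uenv.txt) to the SD card
cd ${PSDKRA_PATH}/vision_apps
make linux_fs_install_sd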

What are the filesystem changes done by vision apps on top of the default PSDKLA filesystem ?

  • See the makefile target linux_fs_install_sd in vision_apps/makerules/makefile_linux_arm.mak for the exact filesystem changes that are done
  • In summary,
    • /lib/firmware/j7-*-fw are changed to point to the vision_apps firmwares or executables for C6x, C7x and R5F
    • uenv.txt and sysfw.itb are replaced in the bootfs partition to use files specific to vision_apps
    • uenv.txt applies the dtbo's k3-j721e-auto-common.dtbo and k3-j721e-vision-apps.dtbo
      • These dtbo's adjust the memory map based on vision_apps requirements
      • k3-j721e-vision-apps.dtbo
        • disables display on A72 so that the R5F can control the display
        • disables i2c1 on A72, which is used for HDMI display control by the R5F
        • disables i2c6 on A72, which is used for CSI2RX sensor control by the R5F
        • enables the ION contiguous memory allocator and reserves a heap for ION
        • reserves memory for shared memory between the different CPUs
    • Disables vxd-dec.ko (video decode) on A72 so that the R5F can control it
    • Updates etc/security/limits.conf to increase the limit on max open files in a process. This is needed when using ION, since in ION every memory allocation is a file handle; the default open file limit is too small and would otherwise limit the max number of ION allocations. A quick sanity check of these changes is shown after this list.
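
A quick sanity check of some of these changes on a booted EVM is sketched below; the exact firmware file names depend on the PSDKRA version.

# the remote core firmware links should point to the vision_apps firmwares or executables
ls -l /lib/firmware/j7-*-fw

# the open file limit raised via limits.conf; the default limit (typically 1024) is too
# small when every ION allocation consumes a file handle
ulimit -n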

What is ION and why don't I see CMEM anymore ?

  • ION is the contiguous memory allocator used in Linux starting with PSDKRA v6.1.0.
  • It replaces CMEM.
  • ION is more modern and upstream friendly compared to CMEM.
  • See vision_apps/apps/basic_demos/app_linux_arm_mem for a simple application which uses the ION memory allocator. A quick check of the ION device node is shown after this list.
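
As a quick check that ION is available on the target, you can look for the standard ION device node as shown below (assuming the ION dtbo changes described earlier have been applied).

# ION exposes a character device; if this node is missing, ION is not enabled
ls -l /dev/ion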

How do I know the remote cores like C6x, C7x, R5F booted correctly ?

Here we assume you are able to reach the Linux login prompt on the EVM and want to know the state of the remote cores like C6x, C7x, R5F.

  • Login at the login prompt as shown below
    Arago 2019.09 j7-evm ttyS2
    j7-evm login: root
  • Run the below to see logs from the remote cores

    root@j7-evm:/opt/vision_apps# dmesg | grep rpmsg

    You should see something like the below

    [ 13.239334] virtio_rpmsg_bus virtio0: rpmsg host is online
    [ 13.272721] virtio_rpmsg_bus virtio0: creating channel rpmsg_chrdev addr 0xd
    [ 13.289588] virtio_rpmsg_bus virtio1: rpmsg host is online
    [ 13.310739] virtio_rpmsg_bus virtio1: creating channel rpmsg_chrdev addr 0xd
    [ 13.586987] virtio_rpmsg_bus virtio2: rpmsg host is online
    [ 13.605751] virtio_rpmsg_bus virtio2: creating channel rpmsg_chrdev addr 0xd
    [ 14.085643] virtio_rpmsg_bus virtio3: rpmsg host is online
    [ 14.094299] virtio_rpmsg_bus virtio3: creating channel rpmsg_chrdev addr 0xd
    [ 14.113023] virtio_rpmsg_bus virtio3: creating channel rpmsg_chrdev addr 0x15
    [ 14.128012] virtio_rpmsg_bus virtio2: creating channel rpmsg_chrdev addr 0x15
    [ 14.141000] virtio_rpmsg_bus virtio2: creating channel ti.ipc4.ping-pong addr 0xe
    [ 14.155106] virtio_rpmsg_bus virtio0: creating channel rpmsg_chrdev addr 0x15
    [ 14.167786] virtio_rpmsg_bus virtio0: creating channel ti.ipc4.ping-pong addr 0xe
    [ 14.181318] virtio_rpmsg_bus virtio1: creating channel rpmsg_chrdev addr 0x15
    [ 14.193930] virtio_rpmsg_bus virtio1: creating channel ti.ipc4.ping-pong addr 0xe
    [ 14.206737] virtio_rpmsg_bus virtio3: creating channel ti.ipc4.ping-pong addr 0xe
  • "virtio_rpmsg_bus virtioN: rpmsg host is online" means a remote core was booted. By default in vision apps you should see virtio0 to virtio3 "online", representing C6x-1, C6x-2, mcu2-1 and C7x-1 (a quick way to count these is shown after this list).
  • The below lines for each virtioN indicate that the core was able to initialize itself and establish IPC with Linux
    [ 14.094299] virtio_rpmsg_bus virtio3: creating channel rpmsg_chrdev addr 0xd
    [ 14.113023] virtio_rpmsg_bus virtio3: creating channel rpmsg_chrdev addr 0x15
    [ 14.206737] virtio_rpmsg_bus virtio3: creating channel ti.ipc4.ping-pong addr 0xe
    So if you see "rpmsg host is online" but not the "creating channel" lines, then the CPU was booted but failed somewhere during its initialization.
  • Run the below to see logs from the remote cores on the Linux A72
    cd /opt/vision_apps
    source ./vision_apps_init.sh
    • vision_apps_init.sh runs a process called vx_app_linux_arm_remote_log.out in the background.
      • This process continuously monitors a shared memory area and prints log strings from the remote cores to the Linux terminal
      • Make sure this script is invoked only once after EVM power ON.
    • You should see something as shown in these logs [TXT]
    • The main lines to look for are shown below
      [MCU2_1] 0.084681 s: IPC: Echo status: mpu1_0[x] mcu2_1[s] C66X_1[P] C66X_2[P] C7X_1[P]
      [C6x_1 ] 0.842620 s: IPC: Echo status: mpu1_0[x] mcu2_1[P] C66X_1[s] C66X_2[P] C7X_1[P]
      [C6x_2 ] 0.792484 s: IPC: Echo status: mpu1_0[x] mcu2_1[P] C66X_1[P] C66X_2[s] C7X_1[P]
      [C7x_1 ] 0.395258 s: IPC: Echo status: mpu1_0[x] mcu2_1[P] C66X_1[P] C66X_2[P] C7X_1[s]
    • The "P" next to each CPU indicates successful IPC; for example, in the [C7x_1 ] line, mcu2_1[P] C66X_1[P] C66X_2[P] means C7x_1 was able to talk to mcu2_1, C66X_1 and C66X_2.
    • You will always see mpu1_0[x], so ignore this.
    • If you don't see the "P" then IPC with that CPU has failed for some reason
  • You can also run a sample unit level IPC test to confirm Linux is able to talk to all CPUs by running the below test on the EVM.
    cd /opt/vision_apps
    ./vx_app_linux_arm_ipc.out
    You should see something as shown in these logs [TXT]
    • If you see any failures or the application hangs, then IPC with the remote core has failed; the remote core did not initialize as expected for some reason.
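
As a quick check referred to above, the number of remote cores that reported "online" can be counted directly from the kernel log; with the default vision apps firmware set this is expected to print 4 (virtio0 to virtio3).

# count the remote cores that booted and reported "rpmsg host is online"
dmesg | grep -c "rpmsg host is online"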

How can I confirm the memory carve outs for various CPUs and remote cores are applied correctly by Linux ?

You should see something like the below early in the boot log. Compare this with your dts/dtsi/dtso files.

[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000a0000000, size 1 MiB
[ 0.000000] OF: reserved mem: initialized node r5f-dma-memory@a0000000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000a0100000, size 15 MiB
[ 0.000000] OF: reserved mem: initialized node r5f-memory@a0100000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000a1000000, size 1 MiB
[ 0.000000] OF: reserved mem: initialized node r5f-dma-memory@a1000000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000a1100000, size 15 MiB
[ 0.000000] OF: reserved mem: initialized node r5f-memory@a1100000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000a2000000, size 1 MiB
[ 0.000000] OF: reserved mem: initialized node r5f-dma-memory@a2000000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000a2100000, size 31 MiB
[ 0.000000] OF: reserved mem: initialized node r5f-memory@a2100000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000a4000000, size 1 MiB
[ 0.000000] OF: reserved mem: initialized node r5f-dma-memory@a3000000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000a4100000, size 63 MiB
[ 0.000000] OF: reserved mem: initialized node r5f-memory@a3100000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000a8000000, size 1 MiB
[ 0.000000] OF: reserved mem: initialized node r5f-dma-memory@a4000000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000a8100000, size 15 MiB
[ 0.000000] OF: reserved mem: initialized node r5f-memory@a4100000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000a9000000, size 1 MiB
[ 0.000000] OF: reserved mem: initialized node r5f-dma-memory@a5000000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000a9100000, size 15 MiB
[ 0.000000] OF: reserved mem: initialized node r5f-memory@a5100000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000aa000000, size 1 MiB
[ 0.000000] OF: reserved mem: initialized node c66-dma-memory@a6000000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000aa100000, size 63 MiB
[ 0.000000] OF: reserved mem: initialized node c66-memory@a6100000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000ae000000, size 1 MiB
[ 0.000000] OF: reserved mem: initialized node c66-dma-memory@a7000000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000ae100000, size 31 MiB
[ 0.000000] OF: reserved mem: initialized node c66-memory@a7100000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000b0000000, size 1 MiB
[ 0.000000] OF: reserved mem: initialized node c71-dma-memory@a8000000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000b0100000, size 127 MiB
[ 0.000000] OF: reserved mem: initialized node c71-memory@a8100000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000b8000000, size 32 MiB
[ 0.000000] OF: reserved mem: initialized node vision_apps-dma-memory@b8000000, compatible id shared-dma-pool
[ 0.000000] Reserved memory: created DMA memory pool at 0x00000000bc000000, size 576 MiB
[ 0.000000] OF: reserved mem: initialized node vision_apps_shared-memories@bc000000, compatible id shared-dma-pool
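
If the full boot log has scrolled away, the same information can be pulled from the kernel log at any time, for example:

# list the reserved memory carve outs created during boot
dmesg | grep "Reserved memory"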

Where can I find sample logs for different applications ?

When something does not work as expected, it sometimes helps to look at sample working logs and compare them against the failing system. Sample logs from a run of vision apps are located here [FOLDER].

Note
Not all demo logs may be present here.