3.7. IPC for AM62Px
The AM62Px processors have Cortex-R5F subsystems in addition to a quad-core Cortex-A53 subsystem. Please refer to the AM62Px Technical Reference Manual for details.
This article is geared toward AM62Px users who are running Linux on the Cortex-A53 cores. The goal is to help users understand how to establish IPC communication with the R5F cores.
There are many facets to this task: building, loading, debugging, memory sharing, etc. This article intends to take incremental steps toward understanding all of those pieces.
3.7.1. Software Dependencies to Get Started
Prerequisites
Processor SDK Linux for AM62Px (see the AM62Px SDK download page).
Note
Please be sure that you have the same version number for both Processor SDK RTOS and Linux.
Please refer to the MCU+SDK documentation, section “Developer Guides” -> “Understanding inter-processor communication (IPC)” for IPC architecture and builds.
3.7.2. Typical Boot Flow on AM62Px for ARM Linux users
AM62Px SoCs have multiple processor cores: Cortex-A53 and Cortex-R5F. The A53 cores typically run an HLOS such as Linux or Android. The R5F remote cores run No-OS or RTOS (FreeRTOS, etc.) applications. In normal operation, the boot loader (U-Boot/SPL) boots and loads the A53 with the HLOS. The A53 then boots the R5F core.
The wakeup R5F firmware runs the Device Manager software (SciServer) along with the user application. It is integrated into the tispl.bin binary and is started early in the boot process by the U-Boot R5 SPL, right after DDR initialization.
3.7.3. Getting Started with IPC Linux Examples
The figure below illustrates how the Remoteproc/RPMsg driver in the ARM Linux kernel communicates with the IPC driver on a remote processor (e.g. an R5F) running RTOS.
To set up IPC on the remote cores, the SDK package provides some pre-built examples that can be run from ARM Linux.
The remoteproc driver is hard-coded to look for specific files when loading the R5F core. Here are the files it looks for on an AM62Px device:
Core Name | RemoteProc Name | Description           | Firmware File Name
----------|-----------------|-----------------------|--------------------
R5F       | 79000000.r5f    | R5F core (MCU domain) | am62p-mcu-r5f0_0-fw
Generally, on a target file system, the above files are soft-linked to the intended executable firmware files:
root@am62pxx-evm:~# ls -l /lib/firmware/
lrwxrwxrwx 1 root root 41 Dec 2 2023 am62p-mcu-r5f0_0-fw -> /usr/lib/firmware/ti-ipc/am62xx/ipc_echo_test_mcu2_0_release_strip.xer5f
To update the wakeup (DM) R5F firmware binary, tispl.bin needs to be recompiled with the new firmware binary as described below:
Go to the Linux installer and replace the existing R5F wakeup (DM) firmware binary with the new one:
host# cp <path_to_new_fw_binary>/ipc_echo_testb_freertos_mcu1_0_release.xer5f <path_to_linux_installer>/board-support/prebuilt-images/ipc_echo_testb_mcu1_0_release_strip.xer5f
Recompile u-boot to regenerate tispl.bin using the top level makefile.
host# make u-boot
Please refer to the Top-Level Makefile section for more details.
Replace the updated tispl.bin containing the new R5F firmware binary in the boot partition of the SD card and reboot:
host# sudo cp board-support/u-boot_build/a53/tispl.bin /media/$USER/boot
3.7.4. Booting Remote Cores from Linux console/User space
To reload a remote core with a new executable, follow the steps below.
First, identify the remoteproc node associated with the remote core:
root@am62pxx-evm:~# head /sys/class/remoteproc/remoteproc*/name
==> /sys/class/remoteproc/remoteproc0/name <==
79000000.r5f
==> /sys/class/remoteproc/remoteproc1/name <==
78000000.r5f
Then, use the sysfs interface to stop the remote core. For example, to stop the R5F:
root@am62pxx-evm:~# echo stop > /sys/class/remoteproc/remoteproc0/state
[ 61.497327] remoteproc remoteproc0: stopped remote processor 79000000.r5f
If needed, update the firmware symbolic link to point to a new firmware:
root@am62pxx-evm:/lib/firmware# ln -sf /lib/firmware/ti-ipc/am62pxx/ipc_echo_test_mcu2_0_release_strip.xer5f am62p-mcu-r5f0_0-fw
Finally, use the sysfs interface to start the remote core:
root@am62pxx-evm:~# echo start > /sys/class/remoteproc/remoteproc0/state
[ 1406.013847] remoteproc remoteproc0: powering up 79000000.r5f
[ 1406.020167] remoteproc remoteproc0: Booting fw image am62p-mcu-r5f0_0-fw, size 55272
[ 1406.031012] rproc-virtio rproc-virtio.0.auto: assigned reserved memory node mcu-r5fss-dma-memory-region@9b800000
[ 1406.042534] virtio_rpmsg_bus virtio0: rpmsg host is online
[ 1406.048152] virtio_rpmsg_bus virtio0: creating channel ti.ipc4.ping-pong addr 0xd
[ 1406.048836] rproc-virtio rproc-virtio.0.auto: registered virtio0 (type 7)
[ 1406.055857] virtio_rpmsg_bus virtio0: creating channel rpmsg_chrdev addr 0xe
[ 1406.063759] remoteproc remoteproc0: remote processor 79000000.r5f is now up
Note
The RemoteProc driver does not support a graceful shutdown of R5 and DSP cores in the current Linux Processor SDK. For now, it is recommended to reboot the board when loading new binaries into an R5F or DSP core.
3.7.5. DMA Memory Carveouts
System memory is carved out for each remote processor core for IPC and for the remote processor’s code/data section needs. The default memory carveouts (DMA pools) are shown below.
See the devicetree bindings documentation for more details: Documentation/devicetree/bindings/remoteproc/ti,k3-r5f-rproc.yaml
Memory Section  | Physical Address | Size | Description
----------------|------------------|------|------------------------------
R5F (MCU) Pool  | 0x9b800000       | 1MB  | IPC (Virtio/Vring buffers)
R5F (MCU) Pool  | 0x9b900000       | 15MB | R5F external code/data memory
R5F (WKUP) Pool | 0x9c800000       | 1MB  | IPC (Virtio/Vring buffers)
R5F (WKUP) Pool | 0x9c900000       | 30MB | R5F external code/data memory
root@am62pxx-evm:~# dmesg | grep reserved
[ 0.000000] OF: reserved mem: initialized node linux,cma, compatible id shared-dma-pool
[ 0.000000] OF: reserved mem: initialized node rtos-ipc-memory@9b500000, compatible id shared-dma-pool
[ 0.000000] OF: reserved mem: initialized node mcu-r5fss-dma-memory-region@9b800000, compatible id shared-dma-pool
[ 0.000000] OF: reserved mem: initialized node mcu-r5fss-memory-region@9b900000, compatible id shared-dma-pool
[ 0.000000] OF: reserved mem: initialized node r5f-dma-memory@9c800000, compatible id shared-dma-pool
[ 0.000000] OF: reserved mem: initialized node r5f-memory@9c900000, compatible id shared-dma-pool
Note
The reserved memory sizes listed above are provided for reference only and are subject to change between releases. For the latest memory reservations, please refer to the kernel device tree repository: https://git.ti.com/cgit/ti-linux-kernel/ti-linux-kernel/tree/arch/arm64/boot/dts/ti/k3-am62p5-sk.dts?h=ti-linux-6.6.y
By default, the first 1MB of each pool is used for the Virtio and Vring buffers that communicate with the remote processor core. The remaining carveout is used for the remote core's external memory (program code, data, etc.).
Note
The resource table entity (which describes the system resources needed by the remote processor) needs to be at the beginning of the remote processor external memory section.
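For illustration, remote-core firmware typically defines the resource table as a C structure placed in a dedicated linker section located at the base of that carveout. Below is a minimal, hypothetical sketch (the section name, symbol name, and empty entry list are illustrative; the MCU+ SDK sources define the actual tables used by the example firmware):

/* Hypothetical sketch: a resource table located at the start of the remote
 * core's external memory. The real tables ship with the MCU+ SDK examples. */
#include <stdint.h>

struct my_resource_table {
    /* Header layout follows struct resource_table in the Linux kernel
     * (include/linux/remoteproc.h). */
    uint32_t ver;          /* version number, currently 1 */
    uint32_t num;          /* number of resource entries */
    uint32_t reserved[2];  /* must be zero */
    /* Offsets to each resource entry, then the entries themselves
     * (e.g. the vdev/vring definitions used for RPMsg), would follow. */
};

/* The linker script places .resource_table at the base of the carveout,
 * e.g. 0x9b900000 for the MCU R5F in the default memory map above. */
__attribute__((section(".resource_table"), aligned(4096)))
const struct my_resource_table resource_table = {
    .ver = 1,
    .num = 0,              /* a real table lists its vdev resources here */
    .reserved = { 0, 0 },
};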
For details on how to adjust the sizes and locations of the remote core pool carveouts, please see section Changing the Memory Map.
3.7.6. Changing the Memory Map
The addresses and sizes of the DMA memory carveouts need to match the R5F external memory section sizes in the respective linker map files.
arch/arm64/boot/dts/ti/k3-am62p5-sk.dts
reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        wkup_r5fss0_core0_dma_memory_region: r5f-dma-memory@9c800000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0x9c800000 0x00 0x100000>;
                no-map;
        };

        wkup_r5fss0_core0_memory_region: r5f-memory@9c900000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0x9c900000 0x00 0x01e00000>;
                no-map;
        };

        mcu_r5fss0_core0_dma_memory_region: mcu-r5fss-dma-memory-region@9b800000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0x9b800000 0x00 0x100000>;
                no-map;
        };

        mcu_r5fss0_core0_memory_region: mcu-r5fss-memory-region@9b900000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0x9b900000 0x00 0x0f00000>;
                no-map;
        };
};
Warning
Be careful not to overlap carveouts!
Note
The reserved memory sizes listed above are provided for reference only and are subject to change between releases. For the latest memory reservations, please refer to the kernel device tree repository: https://git.ti.com/cgit/ti-linux-kernel/ti-linux-kernel/tree/arch/arm64/boot/dts/ti/k3-am62p5-sk.dts?h=10.01.10
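For example, to shrink the MCU R5F external code/data carveout from 15MB to 8MB, only the size cell of the corresponding node changes (illustrative values only; the R5F linker map file must be updated to match, and the freed range must not be reused by an overlapping carveout):

mcu_r5fss0_core0_memory_region: mcu-r5fss-memory-region@9b900000 {
        compatible = "shared-dma-pool";
        reg = <0x00 0x9b900000 0x00 0x00800000>; /* 8MB instead of 15MB */
        no-map;
};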
3.7.7. RPMsg Char Driver
The picture below depicts the kernel driver components and the user-space device model for using the RPMsg char driver to communicate with a remote processor.
The RPMsg char driver exposes RPMsg endpoints to user-space processes. Multiple user-space applications can share one RPMsg device, with each application using its own endpoint for its interactions with the remote service. The driver supports the creation of multiple endpoints for each probed RPMsg char device, enabling the use of the same device for different instances.
RPMsg devices
Each created endpoint device shows up as a single character device in /dev.
The RPMsg bus sits on top of the VirtIO bus. Each virtio name service announcement message creates a new RPMsg device, which is expected to bind to an RPMsg driver. RPMsg devices are created dynamically:
The remote processor announces the existence of a remote RPMsg service by sending a name service announcement message containing the name of the service (i.e. the name of the device) and the source and destination addresses. The message is handled by the RPMsg bus, which dynamically creates and registers an RPMsg device representing the remote service. As soon as a matching RPMsg driver is registered, it is immediately probed by the bus and the two sides can start exchanging messages.
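For illustration, a minimal kernel-side RPMsg driver that binds to the ti.ipc4.ping-pong channel from the boot log above might look like the following sketch (the driver name and message are hypothetical; the in-kernel rpmsg_client_sample discussed later is the reference example):

#include <linux/module.h>
#include <linux/rpmsg.h>

/* Called for every message received from the remote service. */
static int sketch_cb(struct rpmsg_device *rpdev, void *data, int len,
                     void *priv, u32 src)
{
        dev_info(&rpdev->dev, "received %d bytes from 0x%x\n", len, src);
        return 0;
}

/* Called when the name service announcement creates a matching device. */
static int sketch_probe(struct rpmsg_device *rpdev)
{
        return rpmsg_send(rpdev->ept, "hello!", 6);
}

static const struct rpmsg_device_id sketch_id_table[] = {
        { .name = "ti.ipc4.ping-pong" },  /* must match the announced name */
        { },
};
MODULE_DEVICE_TABLE(rpmsg, sketch_id_table);

static struct rpmsg_driver sketch_driver = {
        .drv.name = "rpmsg_sketch",
        .id_table = sketch_id_table,
        .probe    = sketch_probe,
        .callback = sketch_cb,
};
module_rpmsg_driver(sketch_driver);
MODULE_LICENSE("GPL v2");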
The control interface
The RPMsg char driver provides a control interface (in the form of a character device under /dev/rpmsg_ctrlX) that allows user space to export an endpoint interface for each exposed endpoint. The control interface provides a dedicated ioctl to create an endpoint device, as sketched below.
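A hypothetical sketch of that ioctl is shown here (RPMSG_CREATE_EPT_IOCTL and struct rpmsg_endpoint_info come from the kernel's linux/rpmsg.h UAPI header; the control device path, channel name, and endpoint address are illustrative):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/rpmsg.h>  /* RPMSG_CREATE_EPT_IOCTL, struct rpmsg_endpoint_info */

int create_endpoint(void)
{
        struct rpmsg_endpoint_info ept = { 0 };
        int ctrl_fd = open("/dev/rpmsg_ctrl0", O_RDWR);

        if (ctrl_fd < 0)
                return -1;

        strncpy(ept.name, "rpmsg_chrdev", sizeof(ept.name) - 1);
        ept.src = RPMSG_ADDR_ANY;  /* let the kernel pick the local address */
        ept.dst = 14;              /* remote endpoint address (illustrative) */

        /* On success, a new /dev/rpmsgN character device appears and can be
         * opened and used with plain read()/write() calls. */
        if (ioctl(ctrl_fd, RPMSG_CREATE_EPT_IOCTL, &ept) < 0) {
                close(ctrl_fd);
                return -1;
        }

        close(ctrl_fd);
        return 0;
}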
3.7.8. ti-rpmsg-char library
The ti-rpmsg-char package is located in the ti-rpmsg-char git repo: https://git.ti.com/cgit/rpmsg/ti-rpmsg-char
A thin userspace rpmsg char library is provided. The library abstracts the rpmsg char driver usage from userspace and provides an easy means to identify and open rpmsg character devices created by the kernel rpmsg-char driver.
The library supports the TI K3 family of devices, including AM62Px.
The library provides 4 basic APIs wrapping all the rpmsg char driver calls. Please check the documentation in include/ti_rpmsg_char.h for details.
- rpmsg_char_init()
This function checks that the needed kernel drivers (remoteproc, rpmsg, virtio) are installed and accessible from user space. It also checks that the SoC supports the requested remote processor.
- rpmsg_char_exit()
This function finalizes and performs all the de-initialization and cleanup for the library. It is the last function that needs to be invoked after all usage is done, as part of the application's cleanup. It only needs to be invoked once in an application; there is no reference counting. The function also needs to be invoked in an application's signal handlers to clean up stale rpmsg endpoint devices.
- rpmsg_char_open()
Function to create and access an rpmsg endpoint device for a given rpmsg device.
- rpmsg_char_close()
Function to close and delete a previously created local endpoint.
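A minimal usage sketch of these four APIs follows (one echo round-trip with the MCU R5F; the R5F_MCU0_0 constant, endpoint number, and exact rpmsg_char_open() argument list follow one version of the library headers and should be verified against include/ti_rpmsg_char.h and include/rproc_id.h in your SDK):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <linux/rpmsg.h>      /* RPMSG_ADDR_ANY */
#include <ti_rpmsg_char.h>

int main(void)
{
    char msg[64] = "hello from the A53!";
    rpmsg_char_dev_t *rcdev;

    /* Verify the kernel drivers are present and the SoC is supported. */
    if (rpmsg_char_init(NULL) < 0)
        return 1;

    /* R5F_MCU0_0 comes from include/rproc_id.h; remote endpoint 14 is the
     * default used by the echo test firmware. Older library versions omit
     * the local endpoint argument. */
    rcdev = rpmsg_char_open(R5F_MCU0_0, NULL, RPMSG_ADDR_ANY, 14,
                            "echo-ept", 0);
    if (!rcdev) {
        rpmsg_char_exit();
        return 1;
    }

    write(rcdev->fd, msg, strlen(msg));   /* send to the remote core */
    read(rcdev->fd, msg, sizeof(msg));    /* blocking read of the echo */
    printf("reply: %s\n", msg);

    rpmsg_char_close(rcdev);
    rpmsg_char_exit();
    return 0;
}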
3.7.9. RPMsg Examples
RPMsg user space example
Note
These steps were tested on Ubuntu 18.04. Later versions of Ubuntu may need different steps.
Note
rpmsg_char_simple comes prepackaged in the prebuilt SDK wic images (e.g. tisdk-default-image-am62pxx-evm.wic.xz) that come with the release, and the example below can be run directly (Step 6) if using the prebuilt wic images.
Access the source code in the ti-rpmsg-char git repo (https://git.ti.com/cgit/rpmsg/ti-rpmsg-char). rproc_id is defined in include/rproc_id.h.
Build the Linux Userspace example for Linux RPMsg by following the steps in the top-level README:
Download the git repo
Install GNU autoconf, GNU automake, GNU libtool, and an Armv8 (aarch64) cross-compiler as per the README
Perform the Build Steps as per the README (a typical cross-build sequence is sketched below)
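As a rough sketch, the autotools-based cross-build typically looks like the following (the toolchain prefix and clone URL are assumptions; the README in the repo is the authoritative reference):

host# git clone https://git.ti.com/git/rpmsg/ti-rpmsg-char.git
host# cd ti-rpmsg-char
host# autoreconf -i
host# ./configure --host=aarch64-none-linux-gnu
host# make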
Linux RPMsg can be tested with prebuilt binaries that are packaged in the SDK wic image filesystem:
Copy the Linux RPMsg Userspace application from <ti-rpmsg-char_repo>/examples/rpmsg_char_simple into the board’s Linux filesystem.
Ensure that the remote core symbolic link points to the desired binary file in /lib/firmware/ti-ipc/am62pxx/. Update the symbolic link if needed. Reference section Booting Remote Cores from Linux console/User space for more information.
Run the example on the board:
Usage: rpmsg_char_simple [-r <rproc_id>] [-n <num_msgs>] [-d <rpmsg_dev_name>] [-p <remote_endpt>]
Defaults: rproc_id: 0 num_msgs: 100 rpmsg_dev_name: NULL remote_endpt: 14
For remote proc ids, please refer to : 'https://git.ti.com/cgit/rpmsg/ti-rpmsg-char/tree/include/rproc_id.h'
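For example, assuming rproc_id 0 maps to the MCU R5F in rproc_id.h, a 10-message ping-pong against the default echo endpoint would be:

root@am62pxx-evm:~# rpmsg_char_simple -r0 -n10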
RPMsg kernel space example
The kernel space example is in the Linux Processor SDK under samples/rpmsg/rpmsg_client_sample.c
Build the kernel module rpmsg_client_sample:
Note
rpmsg_client_sample comes prepackaged in the prebuilt SDK wic images (e.g. tisdk-default-image-am62pxx-evm.wic.xz) that come with the release, and the example below can be run directly (Step 5) if using the prebuilt wic images.
Set up the kernel config to build the rpmsg client sample.
Use menuconfig to verify that Kernel hacking > Sample kernel code > Build rpmsg client sample is set to M:
$ make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux-gnu- distclean
$ make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux-gnu- defconfig ti_arm64_prune.config
$ make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux-gnu- menuconfig
Symbol: SAMPLE_RPMSG_CLIENT [=m]
│ Type : tristate
│ Defined at samples/Kconfig:116
│ Prompt: Build rpmsg client sample -- loadable modules only
│ Depends on: SAMPLES [=y] && RPMSG [=y] && m && MODULES [=y]
│ Location:
│ -> Kernel hacking
│ -> Sample kernel code (SAMPLES [=y])
│ (1) -> Build rpmsg client sample -- loadable modules only (SAMPLE_RPMSG_CLIENT [=m])
Make the kernel and modules. Multithreading with -j is optional:
$ make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux-gnu- -j8
Linux RPMsg can be tested with prebuilt binaries that are packaged in the SDK wic image filesystem:
Copy the Linux RPMsg kernel driver from <Linux_SDK>/board-support/linux-x.x.x/samples/rpmsg/rpmsg_client_sample.ko into the board’s Linux filesystem.
Ensure that the remote core symbolic link points to the desired binary file in /lib/firmware/ti-ipc/am62pxx/. Update the symbolic link if needed. Reference section Booting Remote Cores from Linux console/User space for more information.
Run the example on the board:
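For instance, after copying the module to the target, loading it binds the sample driver to the announced channel; the stock sample exchanges a fixed number of messages (100) with the remote core and logs its progress to the kernel console, visible via dmesg:

root@am62pxx-evm:~# insmod rpmsg_client_sample.ko
root@am62pxx-evm:~# dmesg | grep rpmsg_client_sample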