3.5. IPC for J721S2

The J721S2 processors have Cortex-R5F and C7x DSP subsystems in addition to a dual core Cortex-A72 subsystem. Please refer to the J721S2 Technical Reference Manual for details.

This article is geared toward J721S2 users who are running Linux on the Cortex-A72 cores. The goal is to help users understand how to establish IPC communication with the C7x DSP and R5F cores.

There are many facets to this task: building, loading, debugging, memory sharing, etc. This article intends to take incremental steps toward understanding all of those pieces.

3.5.1. Typical Boot Flow on J721S2 for ARM Linux users

J721S2 SoCs have multiple processor cores: Cortex-A72, Cortex-R5F, and C7x DSP cores.

The MCU R5F firmware runs the Device Manager software (SciServer). This DM firmware is integrated into the tispl.bin binary and is started early in the boot process, right after DDR initialization, by U-Boot SPL running on the MCU R5F.

The A72 typically runs an HLOS such as Linux or Android. The C7x and R5F remote cores run either bare-metal (No-OS) or an RTOS (FreeRTOS, etc.). In normal operation, the boot loader (U-Boot/SPL) boots and loads the A72 with the HLOS. The A72 then boots the C7x and R5F cores.

3.5.2. Getting Started with IPC Linux Examples

The figure below illustrates how the Remoteproc/RPMsg driver in the ARM Linux kernel communicates with the IPC driver on a remote processor (e.g. R5F) running an RTOS.

../_images/LinuxIPC_with_RTOS_Slave.png

To set up IPC on the remote cores, the SDK package provides some pre-built examples that can be run from ARM Linux.

The remoteproc driver is hard-coded to look for specific files when loading the R5F and C7x cores. Here are the files it looks for on a J721S2 device:

+------------------+-----------------+----------------------+-----------------------+
| Core Name        | RemoteProc Name | Description          | Firmware File Name    |
+==================+=================+======================+=======================+
| C7x              | 64800000.dsp    | C7x core             | j721s2-c71_0-fw       |
+------------------+-----------------+----------------------+-----------------------+
| C7x              | 65800000.dsp    | C7x core             | j721s2-c71_1-fw       |
+------------------+-----------------+----------------------+-----------------------+
| R5F              | 41000000.r5f    | R5F core(MCU domain) | j721s2-mcu-r5f0_0-fw  |
+------------------+-----------------+----------------------+-----------------------+
| R5F              | 41400000.r5f    | R5F core(MCU domain) | j721s2-mcu-r5f0_1-fw  |
+------------------+-----------------+----------------------+-----------------------+
| R5F              | 5c00000.r5f     | R5F core(MAIN domain)| j721s2-main-r5f0_0-fw |
+------------------+-----------------+----------------------+-----------------------+
| R5F              | 5d00000.r5f     | R5F core(MAIN domain)| j721s2-main-r5f0_1-fw |
+------------------+-----------------+----------------------+-----------------------+
| R5F              | 5e00000.r5f     | R5F core(MAIN domain)| j721s2-main-r5f1_0-fw |
+------------------+-----------------+----------------------+-----------------------+
| R5F              | 5f00000.r5f     | R5F core(MAIN domain)| j721s2-main-r5f1_1-fw |
+------------------+-----------------+----------------------+-----------------------+

Generally, on a target file system, the above files are soft links to the intended executable firmware files:

root@j721s2-evm:~# ls -l /lib/firmware/
lrwxrwxrwx  1 root root      60 Feb 24  2023 j721s2-c71_0-fw -> /lib/firmware/ti-ipc/j721s2/ipc_echo_test_c7x_1_release_strip.xe71
lrwxrwxrwx  1 root root      60 Feb 24  2023 j721s2-c71_1-fw -> /lib/firmware/ti-ipc/j721s2/ipc_echo_test_c7x_2_release_strip.xe71
lrwxrwxrwx  1 root root      62 Feb 24  2023 j721s2-main-r5f0_0-fw -> /lib/firmware/ti-ipc/j721s2/ipc_echo_test_mcu2_0_release_strip.xer5f
lrwxrwxrwx  1 root root      62 Feb 24  2023 j721s2-main-r5f0_1-fw -> /lib/firmware/ti-ipc/j721s2/ipc_echo_test_mcu2_1_release_strip.xer5f
lrwxrwxrwx  1 root root      62 Feb 24  2023 j721s2-main-r5f1_0-fw -> /lib/firmware/ti-ipc/j721s2/ipc_echo_test_mcu3_0_release_strip.xer5f
lrwxrwxrwx  1 root root      62 Feb 24  2023 j721s2-main-r5f1_1-fw -> /lib/firmware/ti-ipc/j721s2/ipc_echo_test_mcu3_1_release_strip.xer5f
lrwxrwxrwx  1 root root      63 Feb 24  2023 j721s2-mcu-r5f0_0-fw -> /lib/firmware/ti-ipc/j721s2/ipc_echo_testb_mcu1_0_release_strip.xer5f
lrwxrwxrwx  1 root root      62 Feb 24  2023 j721s2-mcu-r5f0_1-fw -> /lib/firmware/ti-ipc/j721s2/ipc_echo_test_mcu1_1_release_strip.xer5f
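The firmware name each remoteproc instance expects, along with its current state, can also be read back at runtime from the standard remoteproc sysfs attributes. The sketch below assumes the stock kernel sysfs layout shown in this article:

```python
# A small sketch that reads the standard remoteproc sysfs attributes
# (name, firmware, state) to show which firmware file each core will load.
import glob
import os

def _attr(rproc_dir, attr):
    """Read one sysfs attribute of a remoteproc instance."""
    with open(os.path.join(rproc_dir, attr)) as f:
        return f.read().strip()

def list_remoteprocs(sysfs_root="/sys/class/remoteproc"):
    """Return (instance, name, firmware, state) for every remoteproc node."""
    entries = []
    for rproc_dir in sorted(glob.glob(os.path.join(sysfs_root, "remoteproc*"))):
        entries.append((
            os.path.basename(rproc_dir),
            _attr(rproc_dir, "name"),      # e.g. 64800000.dsp
            _attr(rproc_dir, "firmware"),  # e.g. j721s2-c71_0-fw
            _attr(rproc_dir, "state"),     # offline / running
        ))
    return entries

if __name__ == "__main__":
    for entry in list_remoteprocs():
        print("{}: {} -> {} ({})".format(*entry))
```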

For updating the MCU (DM) R5F firmware binary, tispl.bin needs to be recompiled with the new firmware binary, as described below:

  1. Go to the Linux installer and replace the existing R5F MCU (DM) firmware binary with the new one:

host#  cp <path_to_new_fw_binary>/ipc_echo_testb_freertos_mcu1_0_release.xer5f <path_to_linux_installer>/board-support/prebuilt-images/ti-dm/j721s2/ipc_echo_testb_mcu1_0_release_strip.xer5f
  2. Recompile u-boot to regenerate tispl.bin using the top-level makefile:

host# make u-boot

Please refer to the Top-Level Makefile section for more details.

  3. Replace the updated tispl.bin containing the new R5F firmware binary in the boot partition of the SD card and reboot:

host# sudo cp board-support/u-boot_build/A72/tispl.bin  /media/$USER/boot

3.5.3. Booting Remote Cores from Linux console/User space

To reload a remote core with a new executable, follow the steps below.

First, identify the remoteproc node associated with the remote core:

root@j721s2-evm:~#  head /sys/class/remoteproc/remoteproc*/name
==> /sys/class/remoteproc/remoteproc0/name <==
64800000.dsp

==> /sys/class/remoteproc/remoteproc1/name <==
65800000.dsp

==> /sys/class/remoteproc/remoteproc2/name <==
41000000.r5f

==> /sys/class/remoteproc/remoteproc3/name <==
5c00000.r5f

==> /sys/class/remoteproc/remoteproc4/name <==
5d00000.r5f

==> /sys/class/remoteproc/remoteproc5/name <==
5e00000.r5f

==> /sys/class/remoteproc/remoteproc6/name <==
5f00000.r5f

Then, use the sysfs interface to stop the remote core. For example, to stop the C7x:

root@j721s2-evm:~# echo stop > /sys/class/remoteproc/remoteproc0/state
[ 1964.316965] remoteproc remoteproc0: stopped remote processor 64800000.dsp

If needed, update the firmware symbolic link to point to a new firmware:

root@j721s2-evm:/lib/firmware# ln -sf /lib/firmware/ti-ipc/j721s2/ipc_echo_test_c7x_1_release_strip.xe71 j721s2-c71_0-fw

Finally, use the sysfs interface to start the remote core:

root@j721s2-evm:/lib/firmware# echo start > /sys/class/remoteproc/remoteproc0/state
[ 2059.504473] remoteproc remoteproc0: powering up 64800000.dsp
[ 2059.517464] remoteproc remoteproc0: Booting fw image j721s2-c71_0-fw, size 10488888
[ 2059.525198] remoteproc remoteproc0: unsupported resource 65538
[ 2059.531547] k3-dsp-rproc 64800000.dsp: booting DSP core using boot addr = 0xa6e00000
[ 2059.539547]  remoteproc0#vdev0buffer: assigned reserved memory node c71-dma-memory@a6000000
[ 2059.549227] virtio_rpmsg_bus virtio0: rpmsg host is online
[ 2059.554794]  remoteproc0#vdev0buffer: registered virtio0 (type 7)
[ 2059.558812] virtio_rpmsg_bus virtio0: creating channel ti.ipc4.ping-pong addr 0xd
[ 2059.561730] remoteproc remoteproc0: remote processor 64800000.dsp is now up
[ 2059.569800] virtio_rpmsg_bus virtio0: creating channel rpmsg_chrdev addr 0xe

Note

The above process is for a graceful remote core shutdown and restart. In some cases, graceful shutdown may not work. In such cases, it is recommended to place the new firmware in /lib/firmware and reboot.
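The stop/relink/start sequence above can be automated with a minimal sketch, again assuming the stock remoteproc sysfs ABI. The firmware paths in the usage comment are examples only:

```python
# A minimal sketch of the graceful stop/relink/start sequence, using the
# stock remoteproc sysfs ABI (the state attribute accepts start/stop).
import os

SYSFS = "/sys/class/remoteproc"

def get_state(rproc, sysfs=SYSFS):
    """Read the current state (offline/running) of a remoteproc instance."""
    with open(os.path.join(sysfs, rproc, "state")) as f:
        return f.read().strip()

def set_state(rproc, verb, sysfs=SYSFS):
    """Write 'start' or 'stop' to the remoteproc state file."""
    with open(os.path.join(sysfs, rproc, "state"), "w") as f:
        f.write(verb)

def reload_core(rproc, link, target, sysfs=SYSFS):
    """Gracefully reload one remote core with a new firmware image."""
    if get_state(rproc, sysfs) == "running":
        set_state(rproc, "stop", sysfs)
    # Repoint the firmware symlink at the new executable (ln -sf equivalent).
    tmp = link + ".tmp"
    os.symlink(target, tmp)
    os.replace(tmp, link)  # atomic replacement of the old link
    set_state(rproc, "start", sysfs)

# Example (on target):
# reload_core("remoteproc0", "/lib/firmware/j721s2-c71_0-fw",
#             "/lib/firmware/ti-ipc/j721s2/ipc_echo_test_c7x_1_release_strip.xe71")
```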

3.5.4. DMA memory Carveouts

System memory is carved out for each remote processor core for IPC and for the remote processor’s code/data section needs. The default memory carveouts (DMA pools) are shown below.

See the devicetree bindings documentation for more details: Documentation/devicetree/bindings/remoteproc/ti,k3-r5f-rproc.yaml

+------------------+--------------------+---------+----------------------------+
| Memory Section   | Physical Address   | Size    | Description                |
+==================+====================+=========+============================+
| C7x Pool         | 0xa6000000         | 1MB     | IPC (Virtio/Vring buffers) |
+------------------+--------------------+---------+----------------------------+
| C7x Pool         | 0xa6100000         | 15MB    | C7x external code/data mem |
+------------------+--------------------+---------+----------------------------+
| C7x Pool         | 0xa7000000         | 1MB     | IPC (Virtio/Vring buffers) |
+------------------+--------------------+---------+----------------------------+
| C7x Pool         | 0xa7100000         | 15MB    | C7x external code/data mem |
+------------------+--------------------+---------+----------------------------+
| R5F(mcu) Pool    | 0xa0000000         | 1MB     | IPC (Virtio/Vring buffers) |
+------------------+--------------------+---------+----------------------------+
| R5F(mcu) Pool    | 0xa0100000         | 15MB    | R5F external code/data mem |
+------------------+--------------------+---------+----------------------------+
| R5F(mcu) Pool    | 0xa1000000         | 1MB     | IPC (Virtio/Vring buffers) |
+------------------+--------------------+---------+----------------------------+
| R5F(mcu) Pool    | 0xa1100000         | 15MB    | R5F external code/data mem |
+------------------+--------------------+---------+----------------------------+
| R5F(main) Pool   | 0xa2000000         | 1MB     | IPC (Virtio/Vring buffers) |
+------------------+--------------------+---------+----------------------------+
| R5F(main) Pool   | 0xa2100000         | 15MB    | R5F external code/data mem |
+------------------+--------------------+---------+----------------------------+
| R5F(main) Pool   | 0xa3000000         | 1MB     | IPC (Virtio/Vring buffers) |
+------------------+--------------------+---------+----------------------------+
| R5F(main) Pool   | 0xa3100000         | 15MB    | R5F external code/data mem |
+------------------+--------------------+---------+----------------------------+
| R5F(main) Pool   | 0xa4000000         | 1MB     | IPC (Virtio/Vring buffers) |
+------------------+--------------------+---------+----------------------------+
| R5F(main) Pool   | 0xa4100000         | 15MB    | R5F external code/data mem |
+------------------+--------------------+---------+----------------------------+
| R5F(main) Pool   | 0xa5000000         | 1MB     | IPC (Virtio/Vring buffers) |
+------------------+--------------------+---------+----------------------------+
| R5F(main) Pool   | 0xa5100000         | 15MB    | R5F external code/data mem |
+------------------+--------------------+---------+----------------------------+


root@j721s2-evm:~# dmesg | grep Reserved
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a0000000, size 1 MiB
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a0100000, size 15 MiB
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a1000000, size 1 MiB
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a1100000, size 15 MiB
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a2000000, size 1 MiB
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a2100000, size 15 MiB
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a3000000, size 1 MiB
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a3100000, size 15 MiB
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a4000000, size 1 MiB
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a4100000, size 15 MiB
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a5000000, size 1 MiB
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a5100000, size 15 MiB
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a6000000, size 1 MiB
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a6100000, size 15 MiB
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a7000000, size 1 MiB
[    0.000000] Reserved memory: created DMA memory pool at 0x00000000a7100000, size 15 MiB
[    0.000000] cma: Reserved 512 MiB at 0x00000000dfc00000

Note

The reserved memory sizes listed above are provided as a reference only and are subject to change between releases. For the latest memory reservations, please refer to the kernel device tree repository: https://git.ti.com/cgit/ti-linux-kernel/ti-linux-kernel/tree/arch/arm64/boot/dts/ti/k3-j721s2-som-p0.dtsi?h=ti-linux-6.1.y

By default the first 1MB of each pool is used for the Virtio and Vring buffers used to communicate with the remote processor core. The remaining carveout is used for the remote core external memory (program code, data, etc).

Note

The resource table entity (which describes the system resources needed by the remote processor) needs to be at the beginning of the remote processor external memory section.

Sizes and locations for the DMA memory carveouts might need to be updated, e.g. when custom firmware is used. For details on how to adjust the sizes and locations of the remote core pool carveouts, please see section Changing the Memory Map.

3.5.5. Changing the Memory Map

The addresses and sizes of the DMA memory carveouts need to match the C7x and R5F external memory section addresses and sizes in their respective linker map files.

arch/arm64/boot/dts/ti/k3-j721s2-som-p0.dtsi

reserved_memory: reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        secure_ddr: optee@9e800000 {
                reg = <0x00 0x9e800000 0x00 0x01800000>;
                alignment = <0x1000>;
                no-map;
        };

        mcu_r5fss0_core0_dma_memory_region: r5f-dma-memory@a0000000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa0000000 0x00 0x100000>;
                no-map;
        };

        mcu_r5fss0_core0_memory_region: r5f-memory@a0100000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa0100000 0x00 0xf00000>;
                no-map;
        };

        mcu_r5fss0_core1_dma_memory_region: r5f-dma-memory@a1000000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa1000000 0x00 0x100000>;
                no-map;
        };

        mcu_r5fss0_core1_memory_region: r5f-memory@a1100000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa1100000 0x00 0xf00000>;
                no-map;
        };

        main_r5fss0_core0_dma_memory_region: r5f-dma-memory@a2000000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa2000000 0x00 0x100000>;
                no-map;
        };

        main_r5fss0_core0_memory_region: r5f-memory@a2100000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa2100000 0x00 0xf00000>;
                no-map;
        };

        main_r5fss0_core1_dma_memory_region: r5f-dma-memory@a3000000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa3000000 0x00 0x100000>;
                no-map;
        };

        main_r5fss0_core1_memory_region: r5f-memory@a3100000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa3100000 0x00 0xf00000>;
                no-map;
        };

        main_r5fss1_core0_dma_memory_region: r5f-dma-memory@a4000000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa4000000 0x00 0x100000>;
                no-map;
        };

        main_r5fss1_core0_memory_region: r5f-memory@a4100000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa4100000 0x00 0xf00000>;
                no-map;
        };

        main_r5fss1_core1_dma_memory_region: r5f-dma-memory@a5000000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa5000000 0x00 0x100000>;
                no-map;
        };

        main_r5fss1_core1_memory_region: r5f-memory@a5100000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa5100000 0x00 0xf00000>;
                no-map;
        };

        c71_0_dma_memory_region: c71-dma-memory@a6000000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa6000000 0x00 0x100000>;
                no-map;
        };

        c71_0_memory_region: c71-memory@a6100000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa6100000 0x00 0xf00000>;
                no-map;
        };

        c71_1_dma_memory_region: c71-dma-memory@a7000000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa7000000 0x00 0x100000>;
                no-map;
        };

        c71_1_memory_region: c71-memory@a7100000 {
                compatible = "shared-dma-pool";
                reg = <0x00 0xa7100000 0x00 0xf00000>;
                no-map;
        };

        rtos_ipc_memory_region: ipc-memories@a8000000 {
                reg = <0x00 0xa8000000 0x00 0x01c00000>;
                alignment = <0x1000>;
                no-map;
        };
};

Warning

Be careful not to overlap carveouts!

Note

The reserved memory sizes listed above are provided as a reference only and are subject to change between releases. For the latest memory reservations, please refer to the kernel device tree repository: https://git.ti.com/cgit/ti-linux-kernel/ti-linux-kernel/tree/arch/arm64/boot/dts/ti/k3-j721s2-som-p0.dtsi?h=ti-linux-6.1.y

3.5.6. RPMsg Char Driver

The picture below depicts the kernel driver components and the user-space device model for using the RPMsg char driver to communicate with the remote processor.

../_images/RPMsgstack-linux.png

The RPMsg char driver exposes RPMsg endpoints to user-space processes. Multiple user-space applications can share one RPMsg device, each requesting its own interaction with the remote service. The RPMsg char driver supports the creation of multiple endpoints for each probed RPMsg char device, enabling the same device to be used by different instances.

RPMsg devices

Each created endpoint device shows up as a single character device in /dev.

The RPMsg bus sits on top of the VirtIO bus. Each virtio name service announcement message creates a new RPMsg device, which is expected to bind to an RPMsg driver. RPMsg devices are created dynamically:

The remote processor announces the existence of a remote RPMsg service by sending a name service announcement message containing the name of the service (i.e. name of the device), source and destination addresses. The message is handled by the RPMsg bus, which dynamically creates and registers an RPMsg device which represents the remote service. As soon as a relevant RPMsg driver is registered, it is immediately probed by the bus and the two sides can start exchanging messages.
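The announcement payload described above can be sketched concretely. This is a hedged illustration matching struct rpmsg_ns_msg in the Linux kernel sources: a 32-byte service name followed by 32-bit address and flags fields.

```python
# Sketch of the RPMsg name service announcement payload, per the kernel's
# struct rpmsg_ns_msg: char name[32], then 32-bit address and flags fields.
import struct

RPMSG_NAME_SIZE = 32
RPMSG_NS_CREATE = 0   # flags: announce a new service
RPMSG_NS_DESTROY = 1  # flags: service going away

def pack_ns_announcement(name, src_addr, flags=RPMSG_NS_CREATE):
    """Pack the 40-byte payload a remote core sends to announce a service."""
    return struct.pack("<32sII", name.encode(), src_addr, flags)

# e.g. the ti.ipc4.ping-pong service at address 0xd seen in the logs:
# pack_ns_announcement("ti.ipc4.ping-pong", src_addr=0xd)
```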

The control interface

The RPMsg char driver provides a control interface (in the form of a character device under /dev/rpmsg_ctrlX) that allows user space to export an endpoint interface for each exposed endpoint. The control interface provides a dedicated ioctl to create an endpoint device.

3.5.7. ti-rpmsg-char library

The ti-rpmsg-char package is located in the ti-rpmsg-char git repository: https://git.ti.com/cgit/rpmsg/ti-rpmsg-char

A thin userspace rpmsg char library is provided. The library abstracts the rpmsg char driver usage from userspace. This library provides an easy means to identify and open rpmsg character devices created by the kernel rpmsg-char driver.

This library supports the TI K3 family of devices (i.e. AM65x, AM64x, AM62x, AM62Ax, J784S4, J721S2, J721E, and J7200 SoCs).

The library provides 4 basic APIs wrapping all the rpmsg char driver calls. Please check the documentation in 'include/ti_rpmsg_char.h' for details.

rpmsg_char_init()

This function checks that the needed kernel drivers (remoteproc, rpmsg, virtio) are installed and accessible from user space. It also checks that the SoC supports the requested remote processor.

rpmsg_char_exit()

This function finalizes the library, performing all de-initialization and cleanup. It is the last function to be invoked after all usage is done, as part of the application's cleanup. It only needs to be invoked once in an application; there is no reference counting. The function also needs to be invoked in any application signal handlers to perform the necessary cleanup of stale rpmsg endpoint devices.

rpmsg_char_open()

Function to create and access an rpmsg endpoint device for a given rpmsg device.

rpmsg_char_close()

Function to close and delete a previously created local endpoint.

All remote proc ids are defined in rproc_id.h.

The table below lists the device enumerations as defined in the rpmsg_char library. The validity of each enumeration with respect to J721S2 is also specified.

+------------------+--------------------+---------+-----------------------------------+
| Enumeration ID   | Device Name        | Valid   | Description                       |
+==================+====================+=========+===================================+
| R5F_MAIN0_0      | 5c00000.r5f        | Yes     | R5F core in Main Domain           |
+------------------+--------------------+---------+-----------------------------------+
| R5F_MAIN0_1      | 5d00000.r5f        | Yes     | R5F core in Main Domain           |
+------------------+--------------------+---------+-----------------------------------+
| R5F_MAIN1_0      | 5e00000.r5f        | Yes     | R5F core in Main Domain           |
+------------------+--------------------+---------+-----------------------------------+
| R5F_MAIN1_1      | 5f00000.r5f        | Yes     | R5F core in Main Domain           |
+------------------+--------------------+---------+-----------------------------------+
| R5F_MCU0_0       | 41000000.r5f       | Yes     | R5F core in MCU Domain            |
+------------------+--------------------+---------+-----------------------------------+
| R5F_MCU0_1       | 41400000.r5f       | Yes     | R5F core in MCU Domain            |
+------------------+--------------------+---------+-----------------------------------+
| DSP_c71_0        | 64800000.dsp       | Yes     | DSP core in Main Domain           |
+------------------+--------------------+---------+-----------------------------------+
| DSP_c71_1        | 65800000.dsp       | Yes     | DSP core in Main Domain           |
+------------------+--------------------+---------+-----------------------------------+

3.5.8. RPMsg examples:

RPMsg user space example

Note

These steps were tested on Ubuntu 18.04. Later versions of Ubuntu may need different steps.

Note

rpmsg_char_simple comes prepackaged in the prebuilt SDK wic images (e.g. tisdk-default-image-j721s2-evm.wic.xz) shipped with the release, so the example below can be run directly (Step 6) when using the prebuilt wic images.

Access the source code in the ti-rpmsg-char git repository; rproc_id is defined at include/rproc_id.h.

Build the Linux Userspace example for Linux RPMsg by following the steps in the top-level README:

  1. Download the git repo

  2. Install GNU autoconf, GNU automake, GNU libtool, and v8 compiler as per the README

  3. Perform the Build Steps as per the README

Linux RPMsg can be tested with prebuilt binaries that are packaged in the SDK wic image filesystem:

  4. Copy the Linux RPMsg Userspace application from <ti-rpmsg-char_repo>/examples/rpmsg_char_simple into the board’s Linux filesystem.

  5. Ensure that the remote core symbolic link points to the desired binary file in /lib/firmware/ti-ipc/j721s2/. Update the symbolic link if needed. Reference section Booting Remote Cores from Linux console/User space for more information.

  6. Run the example on the board:

Usage: rpmsg_char_simple [-r <rproc_id>] [-n <num_msgs>] [-d <rpmsg_dev_name>] [-p <remote_endpt>]
    Defaults: rproc_id: 0 num_msgs: 100 rpmsg_dev_name: NULL remote_endpt: 14

For remote proc ids, please refer to : 'https://git.ti.com/cgit/rpmsg/ti-rpmsg-char/tree/include/rproc_id.h'
# MCU R5F<->A72_0 IPC
root@j721s2-evm:~# rpmsg_char_simple -r0 -n10
Created endpt device rpmsg-char-0-1100, fd = 3 port = 1024
Exchanging 10 messages with rpmsg device ti.ipc4.ping-pong on rproc id 0 ...

Sending message #0: hello there 0!
Receiving message #0: hello there 0!
Sending message #1: hello there 1!
Receiving message #1: hello there 1!
Sending message #2: hello there 2!
Receiving message #2: hello there 2!
Sending message #3: hello there 3!
Receiving message #3: hello there 3!
Sending message #4: hello there 4!
Receiving message #4: hello there 4!
Sending message #5: hello there 5!
Receiving message #5: hello there 5!
Sending message #6: hello there 6!
Receiving message #6: hello there 6!
Sending message #7: hello there 7!
Receiving message #7: hello there 7!
Sending message #8: hello there 8!
Receiving message #8: hello there 8!
Sending message #9: hello there 9!
Receiving message #9: hello there 9!

Communicated 10 messages successfully on rpmsg-char-0-1100

TEST STATUS: PASSED

# C7x<->A72_0 IPC
root@j721s2-evm:~# rpmsg_char_simple -r8 -n10
Created endpt device rpmsg-char-8-1107, fd = 3 port = 1024
Exchanging 10 messages with rpmsg device ti.ipc4.ping-pong on rproc id 8 ...

Sending message #0: hello there 0!
Receiving message #0: hello there 0!
Sending message #1: hello there 1!
Receiving message #1: hello there 1!
Sending message #2: hello there 2!
Receiving message #2: hello there 2!
Sending message #3: hello there 3!
Receiving message #3: hello there 3!
Sending message #4: hello there 4!
Receiving message #4: hello there 4!
Sending message #5: hello there 5!
Receiving message #5: hello there 5!
Sending message #6: hello there 6!
Receiving message #6: hello there 6!
Sending message #7: hello there 7!
Receiving message #7: hello there 7!
Sending message #8: hello there 8!
Receiving message #8: hello there 8!
Sending message #9: hello there 9!
Receiving message #9: hello there 9!

Communicated 10 messages successfully on rpmsg-char-8-1107

TEST STATUS: PASSED

RPMsg kernel space example

The kernel space example is in the Linux Processor SDK under samples/rpmsg/rpmsg_client_sample.c

Build the kernel module rpmsg_client_sample:

Note

rpmsg_client_sample comes prepackaged in the prebuilt SDK wic images (e.g. tisdk-default-image-j721s2-evm.wic.xz) shipped with the release, so the example below can be run directly (Step 5) when using the prebuilt wic images.

  1. Set up the kernel config to build the rpmsg client sample.

Use menuconfig to verify Kernel hacking > Sample kernel code > Build rpmsg client sample is M:

$ make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux-gnu- distclean
$ make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux-gnu- defconfig ti_arm64_prune.config
$ make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux-gnu- menuconfig
Symbol: SAMPLE_RPMSG_CLIENT [=m]
  │ Type  : tristate
  │ Defined at samples/Kconfig:116
  │   Prompt: Build rpmsg client sample -- loadable modules only
  │   Depends on: SAMPLES [=y] && RPMSG [=y] && m && MODULES [=y]
  │   Location:
  │     -> Kernel hacking
  │       -> Sample kernel code (SAMPLES [=y])
  │ (1)     -> Build rpmsg client sample -- loadable modules only (SAMPLE_RPMSG_CLIENT [=m])
  2. Make the kernel and modules. Multithreading with -j is optional:

$ make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux-gnu- -j8

Linux RPMsg can be tested with prebuilt binaries that are packaged in the SDK wic image filesystem:

  3. Copy the Linux RPMsg kernel driver from <Linux_SDK>/board-support/linux-x.x.x/samples/rpmsg/rpmsg_client_sample.ko into the board’s Linux filesystem.

  4. Ensure that the remote core symbolic link points to the desired binary file in /lib/firmware/ti-ipc/j721s2/. Update the symbolic link if needed. Reference section Booting Remote Cores from Linux console/User space for more information.

  5. Run the example on the board:

root@j721s2-evm:~# modprobe rpmsg_client_sample count=10
[ 4736.351359] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: new channel: 0x400 -> 0xd!
[ 4736.359820] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 1 (src: 0xd)
[ 4736.363653] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: new channel: 0x400 -> 0xd!
[ 4736.369308] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 2 (src: 0xd)
[ 4736.377884] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: new channel: 0x400 -> 0xd!
[ 4736.385918] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 1 (src: 0xd)
[ 4736.394413] rpmsg_client_sample virtio4.ti.ipc4.ping-pong.-1.13: new channel: 0x400 -> 0xd!
[ 4736.402221] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 3 (src: 0xd)
[ 4736.411169] rpmsg_client_sample virtio5.ti.ipc4.ping-pong.-1.13: new channel: 0x400 -> 0xd!
[ 4736.418692] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 1 (src: 0xd)
[ 4736.427660] rpmsg_client_sample virtio6.ti.ipc4.ping-pong.-1.13: new channel: 0x400 -> 0xd!
[ 4736.435380] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 2 (src: 0xd)
[ 4736.444215] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: new channel: 0x400 -> 0xd!
[ 4736.451872] rpmsg_client_sample virtio4.ti.ipc4.ping-pong.-1.13: incoming msg 1 (src: 0xd)
[ 4736.468492] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 4 (src: 0xd)
[ 4736.477922] rpmsg_client_sample virtio5.ti.ipc4.ping-pong.-1.13: incoming msg 1 (src: 0xd)
[ 4736.486199] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 2 (src: 0xd)
[ 4736.494466] rpmsg_client_sample virtio6.ti.ipc4.ping-pong.-1.13: incoming msg 1 (src: 0xd)
[ 4736.502735] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 3 (src: 0xd)
[ 4736.511006] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 1 (src: 0xd)
[ 4736.519275] rpmsg_client_sample virtio4.ti.ipc4.ping-pong.-1.13: incoming msg 2 (src: 0xd)
[ 4736.527548] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 5 (src: 0xd)
[ 4736.535812] rpmsg_client_sample virtio5.ti.ipc4.ping-pong.-1.13: incoming msg 2 (src: 0xd)
[ 4736.544072] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 3 (src: 0xd)
[ 4736.552335] rpmsg_client_sample virtio6.ti.ipc4.ping-pong.-1.13: incoming msg 2 (src: 0xd)
[ 4736.560605] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 4 (src: 0xd)
[ 4736.568869] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 2 (src: 0xd)
[ 4736.577130] rpmsg_client_sample virtio4.ti.ipc4.ping-pong.-1.13: incoming msg 3 (src: 0xd)
[ 4736.585401] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 6 (src: 0xd)
[ 4736.593670] rpmsg_client_sample virtio5.ti.ipc4.ping-pong.-1.13: incoming msg 3 (src: 0xd)
[ 4736.601934] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 4 (src: 0xd)
[ 4736.610196] rpmsg_client_sample virtio6.ti.ipc4.ping-pong.-1.13: incoming msg 3 (src: 0xd)
[ 4736.618461] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 5 (src: 0xd)
[ 4736.626721] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 3 (src: 0xd)
[ 4736.634985] rpmsg_client_sample virtio4.ti.ipc4.ping-pong.-1.13: incoming msg 4 (src: 0xd)
[ 4736.643279] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 7 (src: 0xd)
[ 4736.651569] rpmsg_client_sample virtio5.ti.ipc4.ping-pong.-1.13: incoming msg 4 (src: 0xd)
[ 4736.659839] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 5 (src: 0xd)
[ 4736.668110] rpmsg_client_sample virtio6.ti.ipc4.ping-pong.-1.13: incoming msg 4 (src: 0xd)
[ 4736.676376] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 6 (src: 0xd)
[ 4736.684643] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 4 (src: 0xd)
[ 4736.692907] rpmsg_client_sample virtio4.ti.ipc4.ping-pong.-1.13: incoming msg 5 (src: 0xd)
[ 4736.701173] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 8 (src: 0xd)
[ 4736.709439] rpmsg_client_sample virtio5.ti.ipc4.ping-pong.-1.13: incoming msg 5 (src: 0xd)
[ 4736.717702] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 6 (src: 0xd)
[ 4736.725964] rpmsg_client_sample virtio6.ti.ipc4.ping-pong.-1.13: incoming msg 5 (src: 0xd)
[ 4736.734228] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 7 (src: 0xd)
[ 4736.742488] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 5 (src: 0xd)
[ 4736.750753] rpmsg_client_sample virtio4.ti.ipc4.ping-pong.-1.13: incoming msg 6 (src: 0xd)
[ 4736.759015] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 9 (src: 0xd)
[ 4736.767284] rpmsg_client_sample virtio5.ti.ipc4.ping-pong.-1.13: incoming msg 6 (src: 0xd)
[ 4736.775553] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 7 (src: 0xd)
[ 4736.783820] rpmsg_client_sample virtio6.ti.ipc4.ping-pong.-1.13: incoming msg 6 (src: 0xd)
[ 4736.792092] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 8 (src: 0xd)
[ 4736.800356] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 6 (src: 0xd)
[ 4736.808615] rpmsg_client_sample virtio4.ti.ipc4.ping-pong.-1.13: incoming msg 7 (src: 0xd)
[ 4736.816879] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 10 (src: 0xd)
[ 4736.825218] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: goodbye!
[ 4736.831999] rpmsg_client_sample virtio5.ti.ipc4.ping-pong.-1.13: incoming msg 7 (src: 0xd)
[ 4736.840267] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 8 (src: 0xd)
[ 4736.848538] rpmsg_client_sample virtio6.ti.ipc4.ping-pong.-1.13: incoming msg 7 (src: 0xd)
[ 4736.856803] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 9 (src: 0xd)
[ 4736.865068] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 7 (src: 0xd)
[ 4736.873331] rpmsg_client_sample virtio4.ti.ipc4.ping-pong.-1.13: incoming msg 8 (src: 0xd)
[ 4736.881595] rpmsg_client_sample virtio5.ti.ipc4.ping-pong.-1.13: incoming msg 8 (src: 0xd)
[ 4736.889855] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 9 (src: 0xd)
[ 4736.898121] rpmsg_client_sample virtio6.ti.ipc4.ping-pong.-1.13: incoming msg 8 (src: 0xd)
[ 4736.906382] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 10 (src: 0xd)
[ 4736.914723] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: goodbye!
[ 4736.921503] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 8 (src: 0xd)
[ 4736.929764] rpmsg_client_sample virtio4.ti.ipc4.ping-pong.-1.13: incoming msg 9 (src: 0xd)
[ 4736.938024] rpmsg_client_sample virtio5.ti.ipc4.ping-pong.-1.13: incoming msg 9 (src: 0xd)
[ 4736.946289] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 10 (src: 0xd)
[ 4736.954635] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: goodbye!
[ 4736.961422] rpmsg_client_sample virtio6.ti.ipc4.ping-pong.-1.13: incoming msg 9 (src: 0xd)
[ 4736.969680] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 9 (src: 0xd)
[ 4736.977942] rpmsg_client_sample virtio4.ti.ipc4.ping-pong.-1.13: incoming msg 10 (src: 0xd)
[ 4736.986279] rpmsg_client_sample virtio4.ti.ipc4.ping-pong.-1.13: goodbye!
[ 4736.993058] rpmsg_client_sample virtio5.ti.ipc4.ping-pong.-1.13: incoming msg 10 (src: 0xd)
[ 4737.001392] rpmsg_client_sample virtio5.ti.ipc4.ping-pong.-1.13: goodbye!
[ 4737.008180] rpmsg_client_sample virtio6.ti.ipc4.ping-pong.-1.13: incoming msg 10 (src: 0xd)
[ 4737.016515] rpmsg_client_sample virtio6.ti.ipc4.ping-pong.-1.13: goodbye!
[ 4737.023296] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 10 (src: 0xd)
[ 4737.031630] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: goodbye!