AM62x MCU+ SDK  08.05.00
Understanding inter-processor communication (IPC)

Note
Currently, only IPC between the A53 running Linux and the MCU M4F running RTOS/NORTOS is supported.

Introduction

The AM62x SoC has multiple CPUs on which distinct applications run. These applications need to communicate with each other to realize the larger system-level application. This means of communication is called Inter-Processor Communication (IPC).

This section describes the following details related to IPC

  • IPC SW architecture as it spans across different CPUs and operating systems.
  • Steps to enable IPC in your applications running RTOS, NORTOS or Linux


IPC SW Architecture

Shown below is a block diagram of the SW modules involved in IPC,

IPC SW Block Diagram

IPC involves synchronizing SW running across multiple CPUs. This is achieved by exchanging messages between the CPUs.

IPC Notify and IPC RP Message

There are two APIs to exchange messages between the CPUs

  • IPC RP Message,
    • Here a CPU can send messages as packet buffers to a logical end point or port on another CPU
    • The packet buffers themselves are kept in a "shared memory" which is visible to both the participating CPUs
    • When a packet is put into the shared memory, a CPU needs to interrupt or "notify" the other CPU that there is a new packet to process. This is done using a HW interrupt mechanism.
    • Message packet size
      • When Linux is at one end, the packet size is fixed at 512 bytes. This is fixed in the Linux kernel by default.
      • The minimum packet size when RTOS/NORTOS is at both ends is 4 bytes.
      • The max packet size when RTOS/NORTOS is at both ends can be defined by the end user, though 512 bytes is the recommended maximum.
      • Larger packets mean more shared memory is needed to hold the messages.
    • The number of logical end points is limited to RPMESSAGE_MAX_LOCAL_ENDPT.
  • IPC Notify
    • Here a CPU simply interrupts or notifies the other CPU using a low-level HW interrupt mechanism
    • This allows IPC Notify to have extremely low latency, at the cost of some of the flexibility offered by RP Message
    • Internally, the RTOS/NORTOS implementation of IPC RP Message uses the IPC Notify API underneath; a minimal usage sketch is shown after this list
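
Below is a minimal sketch of the IPC Notify send and receive flow between two RTOS/NORTOS CPUs (recall that when Linux is one end, RP Message must be used instead). The client ID, remote core ID and message value are assumptions for illustration; refer to the IPC Notify API reference for the exact callback signature and the limits on the message value.

#include <drivers/ipc_notify.h>
#include <kernel/dpl/SystemP.h>

/* Assumed free client ID, must be the same on both CPUs */
#define MY_CLIENT_ID  (4u)

/* Called (in interrupt context) when the remote CPU sends a notify message */
void myNotifyCallback(uint32_t remoteCoreId, uint16_t localClientId,
                      uint32_t msgValue, void *args)
{
    /* msgValue is a small payload, e.g. a command ID or buffer index */
}

void ipcNotifySketch(void)
{
    /* register to receive notify messages sent to MY_CLIENT_ID on this CPU */
    IpcNotify_registerClient(MY_CLIENT_ID, myNotifyCallback, NULL);

    /* send the value 0x1234 to MY_CLIENT_ID on the remote CPU,
     * waiting if the underlying HW FIFO is full;
     * CSL_CORE_ID_M4FSS0_0 (the M4F core) is an assumed destination */
    IpcNotify_sendMsg(CSL_CORE_ID_M4FSS0_0, MY_CLIENT_ID, 0x1234, 1);
}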

When using Linux

When using Linux,

  • On the Linux side, IPC RP Message is implemented inside the Linux kernel on top of the HW mailbox driver
  • Applications in user space can access this RP Message kernel module using the rpmsg_char character driver.
  • Processor SDK Linux provides a user space rpmsg_char library which gives simplified APIs to send and receive messages to other CPUs using RP Message; a minimal sketch of the underlying character device interface is shown below.
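
For illustration, the sketch below talks to the rpmsg_char character driver directly instead of going through the rpmsg_char library: it creates an endpoint with the RPMSG_CREATE_EPT_IOCTL ioctl from <linux/rpmsg.h> and then exchanges packets with plain read()/write(). The device paths, endpoint numbers and service name are assumptions; check which devices actually appear on your board.

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/rpmsg.h> /* struct rpmsg_endpoint_info, RPMSG_CREATE_EPT_IOCTL */

int main(void)
{
    /* control device for the remote CPU's RP Message bus;
     * the index (rpmsg_ctrl0) is an assumption */
    int ctrl = open("/dev/rpmsg_ctrl0", O_RDWR);
    if (ctrl < 0) { perror("open rpmsg_ctrl"); return 1; }

    /* bind a local endpoint (src, assumed 13) to the remote endpoint
     * (dst, assumed 14) created by the RTOS/NORTOS side */
    struct rpmsg_endpoint_info ept = { .name = "my-service", .src = 13, .dst = 14 };
    if (ioctl(ctrl, RPMSG_CREATE_EPT_IOCTL, &ept) < 0) { perror("create ept"); return 1; }

    /* the ioctl creates a /dev/rpmsgN device for the new endpoint;
     * N (here 0) depends on what already exists on the system */
    int fd = open("/dev/rpmsg0", O_RDWR);
    if (fd < 0) { perror("open endpoint"); return 1; }

    /* send a packet (max 512 bytes when Linux is one end) and wait for a reply */
    char buf[512] = "hello from Linux";
    write(fd, buf, strlen(buf) + 1);
    int n = read(fd, buf, sizeof(buf));
    if (n > 0) printf("received %d bytes back\n", n);

    close(fd);
    close(ctrl);
    return 0;
}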

Important usage points

Below are some important points to take note of regarding IPC,

  • When Linux is one end of the IPC message exchange, only IPC RP Message can be used.
  • When Linux is one end of the IPC message exchange, the max RP Message packet or buffer size is 512 bytes.

IPC design pattern

Using the basic send and receive IPC APIs, an application writer can design IPC for their application in many different ways. The final choice depends on the end application requirements.

Given below is a typical "design pattern" of using IPC RP Message in "client server" mode,

  • A server CPU typically offers some service, say doing some computation or reading some sensor,
    • The server creates an RP Message end point
    • An end point is any 16-bit number; however, in our implementation it is constrained to RPMESSAGE_MAX_LOCAL_ENDPT, to keep the memory footprint low while still being performance efficient.
    • An end point is somewhat similar to a port in UDP and CPU ID is somewhat similar to an IP address.
    • Thus given a CPU ID and end point on that CPU, any other CPU can send messages or packets to it.
    • This end point value is known upfront to all CPUs that wish to communicate with it, and they also know the nature of the service that is offered.
    • The server then waits to receive messages at this end point
    • When it gets a message, the message packet indicates the action to do, typically via a command ID that is part of the packet.
    • The packet also contains command-specific parameters
    • The parameters need to fit within the packet buffer. If the number of parameters is large, or a parameter itself is a large amount of data, then the parameter inside the packet buffer should instead point to another, larger shared memory area which holds the actual data or additional parameters.
    • As part of the received message, the server also gets to know the sender CPU ID and sender reply end point
    • After the message is processed, the server can then send an "ack" back to the sender, including results from the processing.
    • The "ack" itself is simply another message packet and it in turn can have command status and return parameters.
  • A client CPU can send messages to this server end point, as below
    • It creates an RP Message end point to receive "acks". This end point can be any value and need not match the server end point.
    • It calls the send API with the server CPU ID, server end point ID and reply end point ID.
    • The send API includes the packet to send, which is filled with the command to execute and parameters for the command.
    • After sending the packet, it waits for a reply
    • After getting the reply, it processes the reply status and results
  • A server CPU can create multiple end points each offering a logically different service.
  • On the server side, using separate RTOS tasks to wait for received messages on a given end point is a very common design choice. Though, if carefully designed, NORTOS mode can also be used, with a tight processing loop in the main thread.
  • On the sender side, it is common to wait for the "ack", however the sender can choose to do other work while waiting. The "ack" itself can be optional for some commands, and is usually agreed between the client and server. A sketch of the client side of this pattern is shown below.
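
Below is a minimal sketch of the client side of this pattern using the RTOS/NORTOS RP Message APIs (RPMessage_construct(), RPMessage_send(), RPMessage_recv()). The core ID, end point numbers and command packet layout are assumptions for illustration.

#include <drivers/ipc_rpmsg.h>
#include <kernel/dpl/SystemP.h>

#define SERVER_CORE_ID       (CSL_CORE_ID_M4FSS0_0) /* assumed server CPU */
#define SERVER_END_PT        (14u) /* assumed, known upfront to all clients */
#define CLIENT_REPLY_END_PT  (13u) /* assumed local end point for "acks" */

/* assumed application-defined command packet layout */
typedef struct {
    uint32_t cmdId;  /* indicates the action the server should do */
    uint32_t param;  /* command-specific parameter */
} MyCmdPacket;

RPMessage_Object gReplyMsgObj;

void clientSendCommand(void)
{
    RPMessage_CreateParams createParams;
    MyCmdPacket cmd = { .cmdId = 1u, .param = 42u };
    char ack[64];
    uint16_t ackSize = sizeof(ack), serverCoreId, serverEndPt;

    /* create the local end point on which "acks" are received */
    RPMessage_CreateParams_init(&createParams);
    createParams.localEndPt = CLIENT_REPLY_END_PT;
    RPMessage_construct(&gReplyMsgObj, &createParams);

    /* send the command packet to the server end point,
     * passing our reply end point along with it */
    RPMessage_send(&cmd, sizeof(cmd),
                   SERVER_CORE_ID, SERVER_END_PT, CLIENT_REPLY_END_PT,
                   SystemP_WAIT_FOREVER);

    /* block until the server sends back its "ack" with status/results */
    RPMessage_recv(&gReplyMsgObj, ack, &ackSize,
                   &serverCoreId, &serverEndPt, SystemP_WAIT_FOREVER);
}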

Enabling IPC in applications

Below is a summary of the steps an application writer on RTOS/NORTOS needs to follow to enable IPC for their applications

  • Step 1: Enable IPC RPMessage in SysConfig for the CPUs of interest.
  • Step 2: When IPC with Linux is enabled, sync with Linux during the system initialization phase.
  • Step 3: Start using the IPC message passing APIs

We use the IPC RP Message Linux Echo example as a reference to go through each step in detail. It is recommended to open these projects in CCS and refer to the SysConfig UI for these projects as you read through the instructions below.

Enable IPC in SysConfig

  • Enable IPC via SysConfig, by selecting IPC under TI DRIVERS in the left pane in SysConfig.

IPC SysConfig
  • As only IPC between the A53 running Linux and the MCU M4F is currently supported, Linux A53 IPC RP Message is enabled by default after adding the IPC module. This cannot be disabled.

Update linker command file

  • The section .resource_table MUST be placed in the SECTIONS directive of the linker command file, aligned to 4 KB.
GROUP {
    /* This is the resource table used by Linux to know where the IPC "VRINGs" are located */
    .resource_table: {} palign(4096)
    ...
} > DDR

Update MMU/MPU for the CPU

  • The shared memory sections that are placed in the linker command file need to be mapped as NON-CACHE on the RTOS/NORTOS CPUs.
  • This can be done via SysConfig, by adding additional MPU entries using the MPU module under TI DRIVER PORTING LAYER in SysConfig.

Sync with CPUs

  • Sometimes it's useful for the RTOS/NORTOS CPUs to sync with each other and be at a common or well-defined point in their initialization sequence. The below API can be used for this:
/* wait for all cores to be ready */
IpcNotify_syncAll(SystemP_WAIT_FOREVER);

Start using the APIs

  • Now you can start sending messages between the enabled CPUs using the APIs defined in IPC RPMessage. A minimal receive-and-echo sketch is shown below.
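
Below is a minimal sketch of an echo-style receive loop on the RTOS/NORTOS side, in the spirit of the Linux Echo example referenced above. The end point number is an assumption; with Linux on the other end, the packet buffer should be 512 bytes.

#include <drivers/ipc_rpmsg.h>
#include <kernel/dpl/SystemP.h>

#define ECHO_END_PT  (14u) /* assumed end point, must match what senders use */

RPMessage_Object gEchoMsgObj;

void echoServerLoop(void)
{
    RPMessage_CreateParams createParams;
    char msgBuf[512]; /* 512 bytes, the max packet size when Linux is one end */
    uint16_t msgSize, remoteCoreId, remoteEndPt;

    /* create the end point on which messages are received */
    RPMessage_CreateParams_init(&createParams);
    createParams.localEndPt = ECHO_END_PT;
    RPMessage_construct(&gEchoMsgObj, &createParams);

    while (1)
    {
        msgSize = sizeof(msgBuf);

        /* block until a message arrives; this also returns the sender's
         * CPU ID and reply end point */
        RPMessage_recv(&gEchoMsgObj, msgBuf, &msgSize,
                       &remoteCoreId, &remoteEndPt, SystemP_WAIT_FOREVER);

        /* echo the same packet back to the sender's reply end point */
        RPMessage_send(msgBuf, msgSize,
                       remoteCoreId, remoteEndPt, ECHO_END_PT,
                       SystemP_WAIT_FOREVER);
    }
}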