3 Stack

This chapter describes the stack functions.

3.1 Packet Buffer Manager: PBM.C

The Packet Buffer Manager (PBM) is charged with managing all the packet buffers in the system. Packet buffers are used by the NDK and device drivers to carry networking packet data. The PBM programming abstraction is discussed in the NDK Programmer’s Reference Guide. This section discusses the implementation provided in the NDK.

3.1.1 Packet Buffer Pool

See the networking examples in your SDK for references on configuring PBM buffers in your system. In some SDKs this is done using SysConfig, in others it’s done by declaring static buffer arrays in a C source file. You can set the number of frames, the frame buffer size, and the memory section where the buffers will be created.

Note that for systems with cache, when the PBM memory is declared, it is placed on a cache aligned boundary. Also note that each packet buffer must be an even number of cache lines in size so that it can be reliably flushed without the risk of conflicting with other buffers.
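For example, a statically declared buffer pool might look like the following sketch. The names, sizes, and memory section shown are illustrative only, and the alignment pragmas are specific to the TI compiler; SysConfig or your SDK's examples may use different ones.

/* Example only: statically declared PBM buffer memory. The names, sizes,
 * and section used here are illustrative. */
#define EXAMPLE_CACHE_LINESZ   128   /* cache line size in bytes            */
#define EXAMPLE_NUM_FRAMEBUF   192   /* number of packet buffers            */
#define EXAMPLE_SIZE_FRAMEBUF  1536  /* per-buffer size; a whole multiple   */
                                     /* of the cache line size              */

/* Place the pool in its own section on a cache aligned boundary
   (TI compiler pragmas shown; other toolchains use different syntax) */
#pragma DATA_SECTION(examplePktBufMem, ".bss:NDK_PACKETMEM")
#pragma DATA_ALIGN(examplePktBufMem, EXAMPLE_CACHE_LINESZ)
unsigned char examplePktBufMem[EXAMPLE_NUM_FRAMEBUF * EXAMPLE_SIZE_FRAMEBUF];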

3.1.2 Packet Buffer Allocation Method

The basic method of buffer allocation is the buffer pool. Buffers are allocated when the PBM_alloc() function is called. This function can be called at interrupt time, so only non-blocking calls may be made as a result. However, only device drivers call PBM_alloc() from an ISR, and device drivers never ask for a buffer larger than PKT_SIZE_FRAMEBUF. Therefore, the fallback method for allocating larger buffers could technically make blocking calls, although the implementation included in the NDK does not make blocking calls under any circumstance.

The allocation method begins by checking the requested size. When the size is less than or equal to PKT_SIZE_FRAMEBUF, the packet buffer is taken from the free queue. If there are no free packet buffers on the queue, the function returns NULL. Note that the PBM module could be modified to grow the free pool or to fall back on memory allocation, but any buffer supplied for a request of PKT_SIZE_FRAMEBUF bytes or less must adhere to the cache line restrictions outlined in the previous section.

For packet buffers larger than PKT_SIZE_FRAMEBUF, standard memory allocation can be used. These allocation requests are only made when re-assembling large IP packets. The resulting packet cannot be submitted to a hardware device without first being fragmented, so the packet buffer does not need to be suitable for hardware transmission.
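Putting the two cases together, the allocation policy can be sketched as follows. This is a simplified illustration of the behavior just described, not the NDK source, and the helper functions are hypothetical:

/* Simplified sketch of the allocation policy described above.
 * The helper functions are hypothetical, not part of the NDK. */
PBM_Handle PBM_alloc( unsigned int Size )
{
    if( Size <= PKT_SIZE_FRAMEBUF )
    {
        /* Small request: pop a fixed-size, cache aligned buffer from
           the free queue. Returns NULL when the queue is empty. This
           path must be safe to call from an ISR. */
        return example_dequeue_free_frame();
    }

    /* Large request (IP re-assembly only): use standard memory. The
       resulting buffer is never handed directly to a hardware device. */
    return example_alloc_large_buffer( Size );
}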

3.1.3 Referenced Route Handles

One of the fields in the PBM structure is a referenced handle to a route used to route a packet to its final destination. The PBM module must be aware of this handle when freeing a packet buffer or copying a packet buffer.

When a packet buffer is freed by calling PBM_free(), the PBM module must check for a route handle held by the packet buffer and dereference the handle if it exists. For example:

/* If the packet holds a referenced route, release the reference */
if( pPkt->hRoute )
{
    RtDeRef( pPkt->hRoute );
    pPkt->hRoute = 0;
}

As noted in the source code to PBM.C, the function RtDeRef() can only be called from kernel mode. However, instead of defining two versions of the PBM_free() function, the PBM module relies on the fact that device drivers are never given packet buffers containing routes. Therefore, any call to PBM_free() where the buffer contains a route must have been made from within kernel mode, and it is safe to call RtDeRef().

When a packet buffer is copied with PBM_copy(), all the information about the packet is also copied. This information may include a referenced route handle. If a route handle is copied along with the packet buffer, a reference to that handle must also be added by calling the RtRef() function. The PBM module does not need to worry about kernel mode here for the same reason as with PBM_free().
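For example, after the packet fields have been duplicated, the copy logic performs something along these lines (pPktCopy is a hypothetical name for the new packet):

/* If a route handle was copied along with the packet, add a reference
   to it so the route is not destroyed while either packet exists */
if( pPktCopy->hRoute )
    RtRef( pPktCopy->hRoute );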

3.2 Network Interface Management Unit (NIMU)

The Network Interface Management Unit (NIMU) layer interfaces with the NDK core stack and enables the stack to control network devices at run time. The layer itself is platform-independent and therefore portable across platforms.

A network interface in this context is analogous to a Linux network interface.

3.2.1 NIMU Device Table

The NIMU Device Table is a user-defined table enumerating all the Network Interfaces to be used with the NDK. You must create this table in your application code; otherwise, the application will fail to link against the NDK stack library.

NOTE: If you configure the NDK using the SysConfig tool, the NIMU Device Table will be generated for you by the “NDK Interfaces” module.

The declaration for the table can be found in src/ti/ndk/inc/stack/inc/nimuif.h:

/*********************************************************************
 * DESCRIPTION   :
 *  The NIMUDeviceTable is a NULL terminated array of driver
 *  initialization functions which is called by the NDK Network
 *  Interface Management functions during the NDK Core Initialization.
 *  The table needs to be populated by the driver authors for each
 *  platform to have a list of all driver initialization functions.
 *********************************************************************/
extern NIMU_DEVICE_TABLE_ENTRY  NIMUDeviceTable[];

An example of a NIMU Device Table is shown below:

NIMU_DEVICE_TABLE_ENTRY NIMUDeviceTable[3] =
{
    {
        .init = emac0_init
    },
    {
        .init = emac1_init
    },
    {NULL}
};

Each entry in the table defines the initialization function for a NIMU driver. This initialization function is called by the stack at startup time. The .init member is the only recognized member for NIMU Device Table entries.

Many NDK APIs reference a Network Interface by its Interface ID, which is the interface's index in the Device Table plus 1. So the interface with the init function emac0_init in the example above has an Interface ID of 1.
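For example, when adding a static IP address to the configuration, the Interface ID is passed as the item number of the CFGTAG_IPNET entry. The snippet below is illustrative only; check the CI_IPNET structure and CfgAddEntry() usage in your NDK release:

/* Illustrative: bind an IP address to the interface with Interface ID 1
   (emac0_init in the table above). hCfg is an existing configuration
   handle created elsewhere in the application. */
CI_IPNET NA;

memset( &NA, 0, sizeof(NA) );
NA.IPAddr = inet_addr("192.168.1.4");
NA.IPMask = inet_addr("255.255.255.0");
strcpy( NA.Domain, "example.net" );

CfgAddEntry( hCfg, CFGTAG_IPNET, 1, 0,
             sizeof(CI_IPNET), (unsigned char *)&NA, 0 );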

3.2.2 NIMU Driver Development

In most cases, a NIMU driver is already included in your SDK. If you are interested in writing your own NIMU driver, refer to the NIMU section in the NDK API Reference Guide or the NDK Driver Design Guide. A rough sketch of a driver initialization function is shown below.
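As a rough orientation only, a NIMU driver init function typically allocates a NETIF_DEVICE object, fills in the interface properties and driver callbacks, and registers it with NIMURegister(). The field names and callback prototypes below should be verified against the NIMU headers in your NDK release:

/* Sketch only: a NIMU driver init function. Verify the NETIF_DEVICE
   fields and callback prototypes against the NIMU headers in your
   NDK release before using. */
static int  myEmacStart( NETIF_DEVICE *dev );
static int  myEmacStop ( NETIF_DEVICE *dev );
static void myEmacPoll ( NETIF_DEVICE *dev, uint32_t timer_tick );
static int  myEmacSend ( NETIF_DEVICE *dev, PBM_Handle hPkt );

int myEmacInit( STKEVENT_Handle hEvent )
{
    NETIF_DEVICE *dev;

    /* Allocate and clear the interface object */
    dev = mmAlloc( sizeof(NETIF_DEVICE) );
    if( !dev )
        return -1;
    mmZeroInit( dev, sizeof(NETIF_DEVICE) );

    /* Interface properties and driver callbacks */
    strcpy( dev->name, "eth0" );
    dev->mtu   = 1500;          /* standard Ethernet payload */
    dev->start = myEmacStart;
    dev->stop  = myEmacStop;
    dev->poll  = myEmacPoll;
    dev->send  = myEmacSend;

    /* Hand the interface over to the NIMU layer */
    return NIMURegister( dev );
}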