TIOVX User Guide
TIOVX Safety Recommendations

TIOVX Initialization in Safety Systems

TI provides sample host-side application code along with sample target-side firmware within the "vision_apps" project (included with the PSDK RTOS). This serves as an example of how the initializations should be done within a system integrating TIOVX. Several modules within the "app_utils" project require initialization in order for TIOVX to work properly, including shared memory, IPC, Sciclient, etc.

For host-side initialization of TIOVX, the appInit function within vision_apps/utils/app_init shall be called. This function invokes appCommonInit, tivxInit and tivxHostInit. The appCommonInit call performs the necessary host-side initializations outside of TIOVX, including the setup of shared memory, IPC and timers, as well as some optional initializations such as logging, performance measurement, etc. This function is supported on both Linux and QNX, with the only difference between the two being the resource table usage on Linux (not supported on QNX). The API also ensures that these initializations are performed only once per system in the case of multi-threaded or multi-process designs.

For target-side initialization of TIOVX, the example firmware initialization occurs within vision_apps/platform/<soc>/rtos/common/app_init.c. This file is common across all remote cores of the system and has modules enabled/disabled based on the settings in vision_apps/platform/<soc>/rtos/common/app_cfg*.h. In order for TIOVX to be properly enabled, the ENABLE_TIOVX, ENABLE_IPC and ENABLE_SCICLIENT macros must be set. Additionally, depending on the kernels enabled in the system, certain memory sections may also be required if the respective kernels depend on them.

Within the ENABLE_TIOVX macro, the key initialization call in order for TIOVX to be enabled properly is tivxInit.

Furthermore, specific memory region carveouts are required in order for TIOVX to work properly. More details can be found in the "Understanding and updating SDK memory map for <SOC>" document in the PSDK RTOS top level developer notes.

One specific API of interest in the setup of these memory regions is the tivxPlatformResetObjDescTableInfo function, which resets the object descriptor table at init time. It is called solely on the remote core side, since the RTOS cores initialize prior to the A-core. This function shall not be called from the A-core host side, nor at any time other than initialization; otherwise, undefined behavior could result.

To facilitate adherence to the alignment and size requirements of these regions in the case of custom memory maps, build asserts have been added to the TIOVX code to catch issues at the build stage of TIOVX.

The vision_apps project also provides the "vision_apps_init.sh" script, which enables remote core logging on the firmware. This allows a developer to see whether there were any initialization or run-time error logs from the firmware. Furthermore, the vision_apps firmware performs a ping test once all of the required firmware has been initialized successfully, helping to identify issues during initialization of the remote firmware. If this ping test does not succeed, further processing is not allowed on that remote core, as it may be in a bad state. Refer to the vision_apps user guide for more information about these diagnostics for help identifying issues in the remote firmware.

TIOVX Usage Recommendations for Safety

When utilizing TIOVX within a safety system, there are a few things to note regarding the implementation of various APIs.

Delay Object Support

The vx_delay objects have limited support within TIOVX, particularly when using them within pipelining. Thus, it is not recommended to use vx_delay objects with pipelining. The section Pipelining with Delay Objects provides more details on this limitation and alternative means by which to implement similar functionality.

Composite Object Pipelining Support

Similarly, composite objects such as vx_object_array and vx_pyramid have certain limitations with pipelining. The section Pipelining with Composite Objects provides more details on this limitation and alternative means by which to implement similar functionality.

Safety Implications of TIOVX Map and Copy API Implementation

Per the section Map and Copy API Usage in TIOVX, there are some safety implications of the map and copy API implementations within TIOVX. Given that these APIs allocate memory prior to graph verification, the application developer must be cognizant of this fact when creating the TIOVX-based application. In particular, if data objects are used as exemplars or as other elements disconnected from the graph, the available memory of the system can be exhausted prior to the call to vxVerifyGraph; this should be taken into account when designing the system.

Nested Node Safety Usage Information

Nested nodes provide the ability to use the functionality of a vx_graph within a vx_node. However, graph event handling within a nested node is not yet supported. Therefore, in the meantime, it is recommended that the application register a nested node with a given timeout and that, if that timeout is exceeded, the cores used within the nested node be rebooted.

Further information

For full details, please reference the TIOVX usage sections under TIOVX Usage.

TIOVX Memory Management for Safety

A critical component of safety SW systems is the memory management scheme. Please reference the section Memory Management in TIOVX for details on how memory management is handled in TIOVX and how it facilitates safety.

Requirements for Application Usage of OpenVX data objects

OpenVX has a clear “initialization” phase, “run-time” phase and “deinit” phase for each graph. The initialization phase for a graph ends when the call to vxVerifyGraph returns. With one specific exception listed below, all of the resources are initialized by this point, and at no point during the run-time phase do the framework or kernels allocate memory. (Note that the application itself may still allocate memory, as in the case of the control callback objects discussed below.)

The exception is the case where the application wants to call control commands which require it to create additional OpenVX references that are not already node parameters and thus are not allocated as part of the verify graph call. For this reason, an application should ideally create, map and unmap these data references prior to the call to vxVerifyGraph. This allows the data references to be allocated so that the full system memory is considered when graph verification returns its status.

If an application initializes all of the graphs it expects to run prior to running them, it is guaranteed to have all of its memory resources reserved and will not run into an out-of-memory condition or memory fragmentation; this is the primary rationale for this rule. In the case of multiple graphs spanning multiple threads or processes, the creation and verification of all graphs in the system shall be performed prior to the process phase of any graph in the system. This is the recommended approach when using OpenVX.

In TIOVX, all framework and data objects “created” in the init phase reserve statically allocated slots in a specific global memory array of objects, and the max values that define the length of this array are defined statically at build time in the respective SoC tivx_config.h. During development, if a use case exceeds the max allocation from these lists, a run-time error is printed to the terminal stating which value in this file to increase, and the user can rebuild and run again. To ensure no memory is wasted, a feature is provided whereby the user can initialize the application as it will be used in production and run a function which creates an updated version of this file, setting all the max values to the peak usage at the time the function was called. The user can then recompile the framework using this new header file and allocate only the memory required for this application use case.

For data buffers, there is a specific contiguous shared memory carveout from which data buffers are allocated during the vxVerifyGraph function call. Since these allocations are all performed at initialization time, once the vxVerifyGraph function returns, that graph will never experience an out-of-memory or memory fragmentation issue.

When designing applications, the application shall not selectively delete graphs or memory associated with OpenVX objects. Rather, these objects should persist throughout the duration of the application. The reason for this is that selective deletion and re-creation of various OpenVX objects can lead to memory fragmentation.

Requirement for object descriptor table

As mentioned above and in the Memory Management in TIOVX documentation, TIOVX uses a table of object descriptors in non-cached memory which are exchanged across nodes in order to access data buffers. Upon firmware boot, this table is reset by the remote core firmware and not on the host side. Furthermore, this table is modified by the framework when new object descriptors are populated or removed from the table.

From the application side, applications must avoid writing into this memory, as doing so will corrupt the object descriptors and could therefore result in invalid reads or writes from remote cores.

Note about Physical Addresses

As a note to application developers, physical addresses are used in multiple places within user space (for instance, with the API tivxMemTranslateVirtAddr). This is important to note, as a misbehaving application could corrupt this value and cause crashes on remote cores. Care should be taken to avoid corrupting these values within the application.

Requirements for Memory Allocated Outside of TIOVX Framework

Certain APIs allow an OpenVX data object to be associated with memory allocated from outside of the framework. There are several important constraints on such memory. The APIs in question are explained further below.

Requirements of vxCreateImageFromHandle and vxSwapImageHandle

Both the vxCreateImageFromHandle and vxSwapImageHandle APIs allow the importing of memory, which may or may not have been allocated using the TIOVX framework, into a vx_image object. To avoid errors, the memory being imported into these objects is required to have been allocated using the tivxMemAlloc API, even though there are no explicit error checks in the framework for this requirement.

Requirements of tivxReferenceImportHandle

The tivxReferenceImportHandle API has several important restrictions on how it is to be used within TIOVX. The API guide gives details on the requirements of the imported handle, which must be adhered to. In particular, a few important aspects of the imported handles are reviewed below:

  • OpenVX data type requirements. Only the data types in the provided list are valid.
  • Memory region requirements. An error is thrown if the memory was not created from the specified region.
  • Memory alignment requirements. While there is no check for this, given that it is simply a memory address, the API required to be used for memory allocation will automatically align the memory to the required alignment.
  • Number of entries requirements. If the total number of memory pointers is not equal to the number of pointers required for the reference, an error will be thrown.
  • Subimages of a given image object will not be imported to the subsequent imported image object.

For more information about how to use this API, please refer to the Producer/Consumer application within vision_apps, as well as the test cases found at tiovx/conformance_tests/test_tiovx/test_tivxMem.c.

TIOVX Resource Teardown

For applications created using TIOVX as the middleware, resource teardown shall be considered in the development of the applications. While TIOVX provides APIs to release references that have previously been created, this logic must also be called in the case that an event is received at the application level which causes an abort. Signal handler logic should handle the teardown of OpenVX data objects that were previously created. The implementation of this logic will depend on the OS used by the OpenVX host core.

In order to understand how this shall be implemented within an OpenVX-based application with the OpenVX host running on a POSIX-based OS, please reference the "File Descriptor Exchange across Processes" application found within the vision apps package of the PSDK RTOS. This application registers a signal handler which is executed upon a Ctrl-C signal. This is done using the code snippet below:

signal(SIGINT, App_intSigHandler);

Within the App_intSigHandler, the application calls App_deInit, which calls the teardown logic associated with all of the OpenVX objects contained within the application.

For information about ensuring that all resources have been freed appropriately, please reference TIOVX Safety Tooling.

TIOVX Spinlock Usage and Recommendations

There are a few different scenarios in which a spinlock is required by TIOVX in order to provide exclusive access amongst the multiple cores which may require access to a given piece of information. The three scenarios are listed below along with the spinlock ID used for each scenario:

There is no resource manager for spinlocks within the SDK. Therefore, it is important for an application developer to guarantee that no other piece of software assumes access to these locks. If other software components are using these locks, it will cause significant delays in execution of TIOVX.

TIOVX IPC Implementation

The remote core IPC utils use a locally set endpoint number to communicate amongst the HLOS and other RTOS cores. Therefore, the RTOS remote cores trust the HLOS to use the proper endpoint; otherwise, the communication may be sent to the wrong endpoint. If an application uses the default IPC utils, this is already taken care of, but if an application uses some other means of establishing IPC across cores, issues could arise if this fact is not considered.