TIOVX User Guide
#include <TI/tivx.h>
#include <tivx_utils_file_rd_wr.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <assert.h>
#include <float.h>
#include <math.h>
#include "../../../common/xdais_types.h"
#include "sTIDL_IOBufDesc.h"
#include "tivx_tidl_utils.h"
#include "itidl_ti.h"
#include "vx_tutorial_tidl.h"
#include "test_engine/test_utils.h"
Executes the inference of a deep-learning network. It first reads the configuration file 'tidl/tidl_infer.cfg', located in the test_data directory, which contains the following information:
processing_core_mode: Specifies how the network is processed when multiple processing cores exist in the system:
- 0 (default): all cores can be utilized, according to each layer's group ID. A layer with group ID 1 runs on EVE1; a layer with group ID 2 runs on DSP1.
- 1: the entire network runs on EVE1, including layers with group ID 2 (DSP layers).
- 2: the entire network runs on DSP1, including layers with group ID 1 (EVE layers).
All paths are relative to the test_data folder.
Using the parameters from the configuration file, vx_tutorial_tidl() then applies the network model to the input data and displays the result, the top-5 classification candidates, on the console window.
In this tutorial, we learn the following concepts:
- To use the OpenVX interfaces, include the file <TI/tivx.h> (listed above).
- Follow the comments in the function vx_tutorial_tidl() to understand this tutorial.
Definition in file vx_tutorial_tidl.c.