Vision Apps User Guide
Valet Parking Application

Introduction

This application shows a multi-modal approach for generating occupancy grid maps indicating free space versus occupied space around the vehicle. 3D data from Structure from Motion (SfM) on a fisheye camera, from a Velodyne Lidar, and from a radar is used to generate one sensor-specific map for each of the three sensor modalities. A map fusion algorithm generates the final combined map, and a parking free-space detection algorithm extracts, from this fused map, free space large enough to fit a car for perpendicular parking. Localization of the vehicle in the maps is provided by an Inertial Navigation System (INS) combining GPS (Global Positioning System) and IMU (Inertial Measurement Unit) measurements. The demonstrated functions are necessary requirements for automated/valet parking of a vehicle.

The resulting maps can either be visualized in real-time or saved to file.
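To make the data flow concrete, below is a minimal, self-contained C sketch of how per-cell fusion of the three single-sensor grids and a simple free-space check could look. This is an illustration only and does not use the PTK/OpenVX applib APIs of this SDK; the grid dimensions, cell representation, max-based fusion rule, and free-space threshold are all assumptions made for the example.

    /* Illustrative sketch only -- not the PTK/TIOVX applib API. Grid size,
     * the per-cell probability representation, the fusion rule and the
     * free-space threshold are all assumptions made for this example. */

    #define GRID_W 128   /* assumed grid width, in cells  */
    #define GRID_H 128   /* assumed grid height, in cells */

    /* Assumed per-cell occupancy probability in [0, 1] for one sensor map. */
    typedef struct { float cell[GRID_H][GRID_W]; } OgMap;

    /* Combine the three single-sensor maps into one fused map by taking the
     * most pessimistic (largest) occupancy value per cell. The actual fusion
     * applib can use a more principled rule such as Dempster-Shafer. */
    static void fuse_maps(const OgMap *sfm, const OgMap *lidar,
                          const OgMap *radar, OgMap *fused)
    {
        for (int y = 0; y < GRID_H; y++) {
            for (int x = 0; x < GRID_W; x++) {
                float p = sfm->cell[y][x];
                if (lidar->cell[y][x] > p) p = lidar->cell[y][x];
                if (radar->cell[y][x] > p) p = radar->cell[y][x];
                fused->cell[y][x] = p;
            }
        }
    }

    /* Return 1 if a w x h block of cells anchored at (x0, y0) is free enough
     * to hold a car for perpendicular parking, 0 otherwise. */
    static int block_is_free(const OgMap *m, int x0, int y0, int w, int h)
    {
        for (int y = y0; y < y0 + h; y++)
            for (int x = x0; x < x0 + w; x++)
                if (m->cell[y][x] > 0.2f)  /* assumed "free" threshold */
                    return 0;
        return 1;
    }

The real application runs these stages on the device inside OpenVX graphs; the sketch only shows the shape of the computation: one grid per sensor, a per-cell combination step, and a search for a contiguous block of free cells.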

The application makes use of the following application libraries:

  1. Applib for OGMap Generation using Surround SFM
  2. Applib for OGMap Generation using Surround Radar
  3. Applib for OGMap Generation using Lidar
  4. Applib for OGMap Generation using Fusion

The input data for this application can be generated using the following additional application:

  1. Structure from Motion (SfM) on Fish-eye Camera Application

Supported platforms

Platform    Linux x86_64    Linux+RTOS mode    QNX+RTOS mode    SoC
Support     NO              YES                NO               J721e

Data flow

vx_app_valet_parking_data_flow.png

Steps to run the application

  1. Boot the EVM up.
  2. Run the following commands:
    source /opt/vision_apps/vision_apps_init.sh
    /opt/vision_apps/run_ptk_demo.sh
    Pick option 6 for the stationary OG map, or option 7 for the Dempster-Shafer OG map (see the sketch after this list for what the Dempster-Shafer combination computes per cell).
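As background for option 7, the sketch below shows, for a single grid cell with the frame of discernment {free, occupied}, how Dempster's rule of combination merges two independent evidence sources. The struct, function name, and mass values are illustrative assumptions, not the applib's data types or API.

    /* Illustrative per-cell Dempster-Shafer combination; not the applib API. */
    #include <stdio.h>

    typedef struct {
        float m_free;   /* belief mass assigned to "free"         */
        float m_occ;    /* belief mass assigned to "occupied"     */
        float m_unk;    /* mass on the whole frame, i.e. unknown  */
    } CellMass;

    /* Combine two independent evidence sources for the same cell.
     * Total conflict (norm == 0) is not handled in this sketch. */
    static CellMass ds_combine(CellMass a, CellMass b)
    {
        /* Conflict: one source says free while the other says occupied. */
        float k    = a.m_free * b.m_occ + a.m_occ * b.m_free;
        float norm = 1.0f - k;

        CellMass out;
        out.m_free = (a.m_free * b.m_free + a.m_free * b.m_unk
                      + a.m_unk * b.m_free) / norm;
        out.m_occ  = (a.m_occ * b.m_occ + a.m_occ * b.m_unk
                      + a.m_unk * b.m_occ) / norm;
        out.m_unk  = (a.m_unk * b.m_unk) / norm;
        return out;
    }

    int main(void)
    {
        /* Example: lidar strongly indicates "occupied", radar is uncertain. */
        CellMass lidar = { 0.05f, 0.80f, 0.15f };
        CellMass radar = { 0.30f, 0.30f, 0.40f };
        CellMass fused = ds_combine(lidar, radar);
        printf("free=%.3f occupied=%.3f unknown=%.3f\n",
               fused.m_free, fused.m_occ, fused.m_unk);
        return 0;
    }

For these example masses the fused cell comes out strongly "occupied" with a small residual "unknown" mass, which is the behavior the Dempster-Shafer map relies on when sensors disagree or provide weak evidence.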

Sample Output

Shown below is an example output visualization showing the radar-only, lidar-only, camera-only, and fused maps.

vx_app_valet_parking_out.png