A Sim2Real Deep Learning Approach for the Transformation of Images from Multiple Vehicle-Mounted Cameras to a Semantically Segmented Image in Bird's Eye View

05/08/2020
by Lennart Reiher, et al.

Accurate environment perception is essential for automated driving. When using monocular cameras, estimating the distance of elements in the environment poses a major challenge. Distances can be estimated more easily when the camera perspective is transformed to a bird's eye view (BEV). For flat surfaces, Inverse Perspective Mapping (IPM) can accurately transform images to a BEV. Three-dimensional objects such as vehicles and vulnerable road users are distorted by this transformation, making it difficult to estimate their position relative to the sensor. This paper describes a methodology to obtain a corrected 360° BEV image given images from multiple vehicle-mounted cameras. The corrected BEV image is segmented into semantic classes and includes a prediction of occluded areas. The neural network approach does not rely on manually labeled data but is trained on a synthetic dataset in such a way that it generalizes well to real-world data. By using semantically segmented images as input, we reduce the reality gap between simulated and real-world data and show that our method can be successfully applied in the real world. Extensive experiments conducted on synthetic data demonstrate the superiority of our approach over IPM. Source code and datasets are available at https://github.com/ika-rwth-aachen/Cam2BEV
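To illustrate the classical baseline the paper improves upon, the following is a minimal NumPy sketch of Inverse Perspective Mapping: given known camera intrinsics `K` and extrinsics `(R, t)`, points on the flat ground plane (z = 0) relate to image pixels through a 3×3 homography, which can be sampled to produce a BEV image. This is an illustrative assumption-laden sketch (nearest-neighbor sampling, calibration assumed known), not the paper's neural network approach; as the abstract notes, anything above the ground plane gets smeared by this mapping.

```python
import numpy as np

def ipm_homography(K, R, t):
    """Homography mapping ground-plane (z=0) world coordinates to image pixels.

    For points with z=0, the projection K [R | t] [x, y, 0, 1]^T reduces to
    K [r1 r2 t] [x, y, 1]^T, i.e. a 3x3 homography.
    """
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def warp_to_bev(image, H, bev_shape, scale, origin):
    """Sample image intensities onto a metric BEV grid (nearest neighbor).

    scale:  meters per BEV pixel; origin: BEV pixel of the world origin.
    """
    h_bev, w_bev = bev_shape
    bev = np.zeros((h_bev, w_bev) + image.shape[2:], dtype=image.dtype)
    for v in range(h_bev):
        for u in range(w_bev):
            # BEV pixel -> metric ground-plane coordinate
            x = (u - origin[0]) * scale
            y = (v - origin[1]) * scale
            # Project the ground point into the camera image
            p = H @ np.array([x, y, 1.0])
            px, py = p[:2] / p[2]
            xi, yi = int(round(px)), int(round(py))
            if 0 <= yi < image.shape[0] and 0 <= xi < image.shape[1]:
                bev[v, u] = image[yi, xi]
    return bev
```

A full 360° IPM baseline would run this warp once per vehicle-mounted camera and stitch the results; the learned approach instead corrects the distortions that this flat-world assumption introduces for 3D objects.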


Related research

03/29/2021
Monocular 3D Vehicle Detection Using Uncalibrated Traffic Cameras through Homography
This paper proposes a method to extract the position and pose of vehicle...

12/03/2018
The Right (Angled) Perspective: Improving the Understanding of Road Scenes using Boosted Inverse Perspective Mapping
Many tasks performed by autonomous vehicles such as road marking detecti...

11/15/2022
Data-Driven Occupancy Grid Mapping using Synthetic and Real-World Data
In perception tasks of automated vehicles (AVs), data-driven methods have often o...

04/21/2021
FIERY: Future Instance Prediction in Bird's-Eye View from Surround Monocular Cameras
Driving requires interacting with road agents and predicting their futur...

11/15/2021
Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation
Autonomous driving relies on a huge volume of real-world data to be labe...

06/26/2017
Learning to Map Vehicles into Bird's Eye View
Awareness of the road scene is an essential component for both autonomou...

09/07/2023
FisheyePP4AV: A privacy-preserving method for autonomous vehicles on fisheye camera images
In many parts of the world, the use of vast amounts of data collected on...
