Fusion of LiDAR and Camera Sensor Data for Environment Sensing in Driverless Vehicles

10/17/2017
by Varuna De Silva, et al.

Driverless vehicles operate by sensing and perceiving their surrounding environment in order to make accurate driving decisions. A combination of several different sensors, such as LiDAR, radar, ultrasonic sensors, and cameras, is used to sense the surrounding environment of driverless vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing must be exploited for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically, and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor. The outputs of the LiDAR scanner and the image sensor have different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression based resolution-matching algorithm that interpolates the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of a typical free-space detection algorithm.
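The following is a minimal sketch of the two fusion steps the abstract describes: projecting LiDAR returns onto the image plane with a pinhole camera model, then using Gaussian Process regression (here via scikit-learn) to interpolate a dense depth map with per-pixel uncertainty. The calibration matrices, kernel choice, and synthetic scan below are illustrative assumptions, not the paper's actual model or parameters.

    # Sketch of (1) geometric alignment of LiDAR points with the image
    # plane and (2) GP regression to upsample the resulting sparse depth.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # --- Step 1: spatial alignment via a pinhole projection model ---
    K = np.array([[700.0, 0.0, 320.0],     # assumed camera intrinsics
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)                          # assumed LiDAR-to-camera rotation
    t = np.array([0.0, -0.1, 0.0])         # assumed translation (metres)

    def project_to_image(points_lidar):
        """Project Nx3 LiDAR points into pixel coordinates and depths."""
        cam = points_lidar @ R.T + t       # transform into the camera frame
        cam = cam[cam[:, 2] > 0]           # keep points in front of the camera
        uvw = cam @ K.T
        uv = uvw[:, :2] / uvw[:, 2:3]      # perspective division
        return uv, cam[:, 2]               # pixel locations and depths

    # --- Step 2: GP regression to interpolate missing depth values ---
    rng = np.random.default_rng(0)
    scan = rng.uniform([-5, -1, 4], [5, 1, 30], size=(200, 3))  # synthetic scan
    uv, depth = project_to_image(scan)

    # An RBF kernel with an additive noise term; hyperparameters are
    # refined by scikit-learn via marginal-likelihood maximisation.
    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=20.0) + WhiteKernel(noise_level=0.1),
        normalize_y=True,
    )
    gp.fit(uv, depth)

    # Query a dense pixel grid; the GP returns both an interpolated depth
    # and a standard deviation, i.e. the quantifiable uncertainty the
    # abstract refers to.
    grid_u, grid_v = np.meshgrid(np.arange(0, 640, 8), np.arange(0, 480, 8))
    grid = np.column_stack([grid_u.ravel(), grid_v.ravel()])
    dense_depth, dense_std = gp.predict(grid, return_std=True)

In practice the resolution gap is large (a LiDAR scan covers only a small fraction of the image pixels), which is why an interpolation scheme that also reports its own uncertainty, rather than plain nearest-neighbour upsampling, is attractive for downstream perception such as free-space detection.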


