Camera-Lidar Integration: Probabilistic sensor fusion for semantic mapping

07/09/2020
by Julie Stephany Berrio, et al.

An automated vehicle operating in an urban environment must be able to perceive and recognise objects and obstacles in a three-dimensional world while navigating a constantly changing environment. To plan and execute accurate, sophisticated driving manoeuvres, a high-level contextual understanding of the surroundings is essential. Thanks to recent progress in image processing, it is now possible to obtain high-definition semantic information in 2D from monocular cameras, though cameras cannot reliably provide the highly accurate 3D information that lidar does. Fusing these two sensor modalities can overcome the shortcomings of each individual sensor, but a number of important challenges need to be addressed in a probabilistic manner. In this paper, we address the common yet challenging lidar/camera/semantic fusion problems, which are seldom approached in a wholly probabilistic manner. Our approach uses a multi-sensor platform to build a three-dimensional semantic voxelised map that accounts for the uncertainty of all of the processes involved. We present a probabilistic pipeline that incorporates uncertainties from the sensor readings (cameras, lidar, IMU and wheel encoders), compensation for the motion of the vehicle, and heuristic label probabilities for the semantic images. We also present a novel and efficient viewpoint-validation algorithm that checks for occlusions from the camera frames. A probabilistic projection is performed from the camera images onto the lidar point cloud. Each labelled lidar scan then feeds into an octree map-building algorithm that updates the class probabilities of the map voxels whenever a new observation is available. We validate our approach with a set of qualitative and quantitative experiments on the USyd Dataset.
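The pipeline the abstract outlines is concrete enough to illustrate in code. Below is a minimal sketch (Python with NumPy, not the authors' implementation) of two of its steps: projecting lidar points into a semantically segmented image to attach per-class label probabilities, and recursively updating the class distribution stored in each map voxel as new labelled scans arrive. The intrinsic matrix K, the extrinsic transform T_cam_lidar, the class count, the voxel size, and the flat dictionary standing in for the paper's octree are all illustrative assumptions rather than details taken from the paper.

import numpy as np

N_CLASSES = 12      # assumed number of semantic classes
VOXEL_SIZE = 0.2    # assumed voxel edge length in metres

def project_and_label(points_lidar, prob_image, K, T_cam_lidar):
    """Project lidar points into the camera and read per-class probabilities.

    points_lidar : (N, 3) points in the lidar frame
    prob_image   : (H, W, N_CLASSES) per-pixel class probabilities from the
                   semantic segmentation network
    K            : (3, 3) camera intrinsic matrix (assumed known)
    T_cam_lidar  : (4, 4) lidar-to-camera extrinsic transform (assumed known)

    Returns the points that fall in front of the camera and inside the image,
    with their (M, N_CLASSES) label probability vectors. The paper's occlusion
    check, motion compensation and uncertainty propagation are omitted here.
    """
    homog = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ homog.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.0              # drop points behind the camera
    pts_cam = pts_cam[in_front]
    pix = (K @ pts_cam.T).T
    u = (pix[:, 0] / pix[:, 2]).astype(int)     # pixel column
    v = (pix[:, 1] / pix[:, 2]).astype(int)     # pixel row
    h, w = prob_image.shape[:2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return points_lidar[in_front][inside], prob_image[v[inside], u[inside]]

class SemanticVoxelMap:
    """Flat stand-in for the paper's octree: one class distribution per voxel."""

    def __init__(self):
        self.voxels = {}    # voxel index -> (N_CLASSES,) probability vector

    def update(self, points_world, label_probs):
        # points_world are assumed already transformed into the world frame
        # using the vehicle pose (transform omitted for brevity).
        for p, probs in zip(points_world, label_probs):
            key = tuple((p // VOXEL_SIZE).astype(int))
            prior = self.voxels.get(key, np.full(N_CLASSES, 1.0 / N_CLASSES))
            posterior = prior * probs           # recursive Bayes update
            self.voxels[key] = posterior / posterior.sum()

The per-voxel step is a standard recursive Bayesian update: the stored distribution is multiplied element-wise by the label probabilities of the new observation and renormalised, so repeated consistent observations sharpen a voxel's class estimate while conflicting ones keep it uncertain.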

