Sensor Fusion for Joint 3D Object Detection and Semantic Segmentation

04/25/2019
by Gregory P. Meyer, et al.

In this paper, we present an extension to LaserNet, an efficient and state-of-the-art LiDAR-based 3D object detector. We propose a method for fusing image data with the LiDAR data and show that this sensor fusion improves the detection performance of the model, especially at long range. The addition of image data is straightforward and does not require image labels. Furthermore, we expand the capabilities of the model to perform 3D semantic segmentation in addition to 3D object detection. On a large benchmark dataset, we demonstrate that our approach achieves state-of-the-art performance on both object detection and semantic segmentation while maintaining a low runtime.
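To make the fusion idea concrete, below is a minimal sketch of the common camera-LiDAR fusion pattern the abstract alludes to: project each LiDAR point into the camera image, sample a per-point feature vector from an image CNN's feature map, and concatenate it with the point's LiDAR features. This is an illustrative sketch, not the authors' implementation; the function names, calibration inputs (`lidar_to_cam`, `cam_intrinsics`), and nearest-neighbor sampling are all assumptions.

```python
import numpy as np

def project_points_to_image(points_xyz, lidar_to_cam, cam_intrinsics):
    """Project LiDAR points (N, 3) into pixel coordinates (N, 2).

    `lidar_to_cam` is a 4x4 extrinsic transform and `cam_intrinsics` a
    3x3 camera matrix; both stand in for the real sensor calibration.
    """
    # Homogeneous coordinates, transformed into the camera frame.
    ones = np.ones((points_xyz.shape[0], 1))
    pts_cam = (lidar_to_cam @ np.hstack([points_xyz, ones]).T).T[:, :3]
    # Perspective projection onto the image plane.
    uvw = (cam_intrinsics @ pts_cam.T).T
    uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)
    in_front = pts_cam[:, 2] > 0  # ignore points behind the camera
    return uv, in_front

def gather_image_features(uv, valid, image_feature_map):
    """Sample a (H, W, C) image feature map at projected point locations.

    Points falling outside the image (or behind the camera) receive zero
    features, so those points are effectively handled by LiDAR data alone.
    """
    h, w, c = image_feature_map.shape
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inside = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    feats = np.zeros((uv.shape[0], c), dtype=image_feature_map.dtype)
    feats[inside] = image_feature_map[v[inside], u[inside]]
    return feats

# Fused per-point input: LiDAR features concatenated with sampled image
# features; `lidar_feats`, `points`, the calibration, and the CNN feature
# map would come from the real detection pipeline.
# fused = np.concatenate(
#     [lidar_feats, gather_image_features(*project_points_to_image(
#         points, lidar_to_cam, cam_intrinsics), cnn_feature_map)], axis=1)
```

Because the image branch only supplies features that are supervised through the 3D detection and segmentation losses, a scheme like this needs no image-level labels, which is consistent with the abstract's claim that adding image data is straightforward.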


