Multimodal End-to-End Autonomous Driving

06/07/2019
by Yi Xiao, et al.

Autonomous vehicles (AVs) are key to the intelligent mobility of the future. A crucial component of an AV is the artificial intelligence (AI) able to drive towards a desired destination. Today, there are different paradigms addressing the development of AI drivers. On the one hand, we find modular pipelines, which divide the driving task into sub-tasks such as perception (object detection, semantic segmentation, depth estimation, tracking) and maneuver control (local path planning and control). On the other hand, we find end-to-end driving approaches that try to learn a direct mapping from raw sensor data to vehicle control signals (e.g., the steering angle). The latter are relatively less studied, but are gaining popularity since they are less demanding in terms of sensor data annotation. This paper focuses on end-to-end autonomous driving. So far, most proposals relying on this paradigm assume RGB images as input sensor data. However, AVs will not be equipped only with cameras, but also with active sensors providing accurate depth information (traditional LiDARs, or new solid-state ones). Accordingly, this paper analyses whether RGB and depth data (RGBD) can act as complementary information in a multimodal end-to-end driving approach, producing a better AI driver. Using the CARLA simulator functionalities, its standard benchmark, and conditional imitation learning (CIL), we show how RGBD does indeed give rise to more successful end-to-end AI drivers. We compare the use of RGBD information by means of early, mid and late fusion schemes, in both multisensory and single-sensor (monocular depth estimation) settings.
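The three fusion schemes the paper compares differ only in where the RGB and depth streams are combined. The following NumPy sketch illustrates that structural difference with toy stand-in functions; the shapes, the pooling-based "feature extractor", and the linear "control head" are illustrative assumptions, not the paper's CIL architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs: an H x W RGB image and an aligned single-channel depth map.
H, W = 8, 8
rgb = rng.random((H, W, 3))
depth = rng.random((H, W, 1))

def features(x, dim=4):
    """Stand-in feature extractor: global average pool per channel,
    then a fixed linear map to a `dim`-sized feature vector."""
    pooled = x.mean(axis=(0, 1))                  # (C,)
    w_map = np.ones((pooled.size, dim)) / pooled.size
    return pooled @ w_map                         # (dim,)

def controller(feat):
    """Stand-in control head: map features to (steer, throttle)."""
    w_ctl = np.ones((feat.size, 2)) / feat.size
    return feat @ w_ctl                           # (2,)

# Early fusion: concatenate raw channels so one network sees RGBD jointly.
early = controller(features(np.concatenate([rgb, depth], axis=-1)))

# Mid fusion: separate feature extractors per modality, fuse the features.
mid = controller(np.concatenate([features(rgb), features(depth)]))

# Late fusion: independent networks per modality, fuse their outputs.
late = 0.5 * (controller(features(rgb)) + controller(features(depth)))

for name, out in [("early", early), ("mid", mid), ("late", late)]:
    print(name, out.shape)
```

Each scheme ends in the same two-dimensional control output; what changes is how much of the network is shared across modalities, which is exactly the design axis the paper's experiments vary.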


Related research

- Penalty-Based Imitation Learning With Cross Semantics Generation Sensor Fusion for Autonomous Driving (03/21/2023): With the rapid development of Pattern Recognition and Computer Vision te...
- Multi-modal Sensor Fusion-Based Deep Neural Network for End-to-end Autonomous Driving with Scene Understanding (05/19/2020): This study aims to improve the control performance and generalization ca...
- End-to-end driving simulation via angle branched network (05/19/2018): Imitation learning for end-to-end autonomous driving has drawn attention...
- End-to-End Driving via Self-Supervised Imitation Learning Using Camera and LiDAR Data (08/28/2023): In autonomous driving, the end-to-end (E2E) driving approach that predic...
- Action-Based Representation Learning for Autonomous Driving (08/21/2020): Human drivers produce a vast amount of data which could, in principle, b...
- DriveAdapter: Breaking the Coupling Barrier of Perception and Planning in End-to-End Autonomous Driving (08/01/2023): End-to-end autonomous driving aims to build a fully differentiable syste...
- Multi-Modal Fusion for Sensorimotor Coordination in Steering Angle Prediction (02/11/2022): Imitation learning is employed to learn sensorimotor coordination for st...
