Unsupervised Real-to-Virtual Domain Unification for End-to-End Highway Driving

01/10/2018
by Luona Yang, et al.

In the spectrum of vision-based autonomous driving, vanilla end-to-end models are neither interpretable nor optimal in performance, while mediated perception models require additional intermediate representations such as segmentation masks or detection bounding boxes, whose annotation can be prohibitively expensive at scale. Raw images and existing intermediate representations are also loaded with nuisance details that are irrelevant to the prediction of vehicle commands, e.g. the style of the car in front or the view beyond the road boundaries. More critically, prior work fails to address the domain shift that arises when merging data collected from different sources, which greatly hinders model generalization. In this work, we address the above limitations by taking advantage of virtual data collected from driving simulators, and present DU-drive, an unsupervised real-to-virtual domain unification framework for end-to-end driving. It transforms real driving data into its canonical representation in the virtual domain, from which vehicle control commands are predicted. Our framework has several advantages: 1) it maps driving data collected from different source distributions into a unified domain, 2) it takes advantage of annotated virtual data, which is free to obtain, 3) it learns an interpretable, canonical representation of driving images that is specialized for vehicle command prediction. Extensive experiments on two public highway driving datasets clearly demonstrate the superior performance and interpretability of DU-drive.
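The two-stage data flow described above (unify the domain first, then predict the command) can be sketched as follows. This is a minimal illustration, not DU-drive's actual architecture: the real system uses convolutional generator and predictor networks trained adversarially, whereas here hypothetical linear maps `G` and `P` stand in for those modules to show only the shape of the pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for DU-drive's two modules (names are illustrative):
#   G: maps a real image to its canonical virtual-domain counterpart
#   P: maps that virtual-domain representation to a steering command
H, W = 8, 8                    # toy image size
D = H * W

G = rng.normal(scale=0.1, size=(D, D))  # "generator" weights (real -> virtual)
P = rng.normal(scale=0.1, size=D)       # "predictor" weights (virtual -> steering)

def du_drive_forward(real_image):
    """Two-stage inference: domain unification, then command prediction."""
    x = real_image.reshape(-1)          # flatten the H x W input frame
    virtual = np.tanh(G @ x)            # canonical virtual-domain representation
    steering = float(virtual @ P)       # scalar steering-command prediction
    return virtual, steering

real = rng.random((H, W))               # stand-in for a real dashcam frame
virtual, angle = du_drive_forward(real)
print(virtual.shape, angle)
```

The key design point the sketch preserves is that the predictor only ever sees the unified virtual-domain representation, so data from any number of real source distributions is funneled through one canonical input space before command prediction.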

