DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving

05/01/2015
by Chenyi Chen, et al.

Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action with a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road/traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene that enables a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recordings of 12 hours of human driving in a video game and show that our model can drive a car well in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.
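To illustrate the idea of a simple controller driving from compact affordance indicators, here is a minimal hypothetical sketch. The indicator names (heading angle, distance to lane center, distance to the preceding car) and the gains are illustrative assumptions for this example, not the paper's exact outputs or control law:

```python
# Hypothetical sketch of a direct-perception controller: a CNN would
# estimate a handful of affordance indicators from the image, and a
# simple hand-designed controller turns them into driving commands.

def steering_command(angle, dist_to_center, road_width,
                     k_angle=1.0, k_center=0.5):
    """Steer proportionally to heading error and lateral offset.

    angle: estimated angle between car heading and road tangent (rad).
    dist_to_center: estimated lateral offset from the lane center (m).
    road_width: estimated road width (m), used to normalize the offset.
    """
    lateral_error = dist_to_center / road_width
    return k_angle * angle - k_center * lateral_error


def speed_command(dist_to_preceding_car, desired_gap=30.0, max_speed=20.0):
    """Slow down linearly when the preceding car is closer than the gap."""
    if dist_to_preceding_car >= desired_gap:
        return max_speed
    return max_speed * max(dist_to_preceding_car, 0.0) / desired_gap
```

The point of the sketch is the level of abstraction: because the perception module outputs a few interpretable quantities rather than a full scene parse or a raw steering angle, the control logic stays this simple.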


Related research

10/26/2019 · Deep Learning and Control Algorithms of Direct Perception for Autonomous Driving
Based on the direct perception paradigm of autonomous driving, we invest...

05/05/2015 · In Defense of the Direct Perception of Affordances
The field of functional recognition or affordance estimation from images...

03/20/2019 · Affordance Learning In Direct Perception for Autonomous Driving
Recent development in autonomous driving involves high-level computer vi...

01/10/2018 · Unsupervised Real-to-Virtual Domain Unification for End-to-End Highway Driving
In the spectrum of vision-based autonomous driving, vanilla end-to-end m...

12/05/2018 · Visual Attention for Behavioral Cloning in Autonomous Driving
The goal of our work is to use visual attention to enhance autonomous dr...

12/07/2018 · ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst
Our goal is to train a policy for autonomous driving via imitation learn...

11/15/2021 · An Embarrassingly Pragmatic Introduction to Vision-based Autonomous Robots
Autonomous robots are currently one of the most popular Artificial Intel...
