Learning Driving Models with a Surround-View Camera System and a Route Planner

03/27/2018
by   Simon Hecker, et al.

For human drivers, rear-view and side-view mirrors are vital for safe driving: they provide a better view of what happens around the car. Human drivers also rely heavily on a mental map for navigation. Nonetheless, several methods have been published that learn driving models with only a front-facing camera and without a route planner. This lack of information renders the self-driving task quite intractable. Hence, we investigate the problem in a more realistic setting, which consists of a surround-view camera system with eight cameras, a route planner, and a CAN bus reader. In particular, we develop a sensor setup that provides data for a 360-degree view of the area surrounding the vehicle, the driving route to the destination, and the low-level driving maneuvers (e.g. steering angle and speed) of human drivers. With this sensor setup we collect a new driving dataset covering diverse driving scenarios and varying weather/illumination conditions. Finally, we learn a novel driving model by integrating information from the surround-view cameras and the route planner. Two route planners are exploited: one based on OpenStreetMap and the other on TomTom Maps. The route planners are used in two ways: 1) by representing the planned routes as a stack of GPS coordinates, and 2) by rendering the planned routes on a map and recording the progression as a video. Our experiments show that: 1) 360-degree surround-view cameras help avoid failures made with a single front-view camera for the driving task; and 2) a route planner helps the driving task significantly. We acknowledge that our method is not the best-ever driving model, but that is not our focus. Rather, it provides a strong basis for further academic research, especially on driving-relevant tasks that integrate information from street-view images and the planned driving routes. Code and data will be made available.
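To make the setup concrete, below is a minimal sketch (PyTorch) of how the two input modalities described in the abstract, surround-view camera frames and a planned route given as a stack of GPS coordinates, could be fused to predict low-level maneuvers such as steering angle and speed. All module names, layer sizes, and tensor shapes here are illustrative assumptions, not the authors' released architecture or code.

import torch
import torch.nn as nn


class DrivingModelSketch(nn.Module):
    """Predicts (steering angle, speed) from surround-view images and a planned route."""

    def __init__(self, num_cameras=8, route_len=50, feat_dim=128):
        super().__init__()
        # Shared per-camera image encoder (a small CNN standing in for a
        # pretrained backbone such as a ResNet).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Route representation 1) from the abstract: the planned route as a
        # stack of GPS coordinates (e.g. lat/lon offsets from the current pose).
        self.route_encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(route_len * 2, feat_dim), nn.ReLU(),
        )
        # Fusion head: concatenated camera and route features -> (steering, speed).
        self.head = nn.Sequential(
            nn.Linear(num_cameras * feat_dim + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 2),
        )

    def forward(self, images, route):
        # images: (batch, num_cameras, 3, H, W); route: (batch, route_len, 2)
        b = images.shape[0]
        cams = images.flatten(0, 1)                       # (batch * cams, 3, H, W)
        cam_feats = self.image_encoder(cams).view(b, -1)  # (batch, cams * feat_dim)
        route_feats = self.route_encoder(route)           # (batch, feat_dim)
        return self.head(torch.cat([cam_feats, route_feats], dim=1))


if __name__ == "__main__":
    model = DrivingModelSketch()
    imgs = torch.randn(2, 8, 3, 128, 128)  # dummy surround-view frames
    route = torch.randn(2, 50, 2)          # dummy planned-route coordinates
    out = model(imgs, route)
    print(out.shape)                       # torch.Size([2, 2])

The second route representation mentioned in the abstract, rendering the planned route on a map and recording its progression as a video, could be handled analogously by replacing the coordinate encoder with a convolutional (or recurrent) encoder over the rendered map frames.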


