The Stixel world: A medium-level representation of traffic scenes

04/02/2017
by Marius Cordts, et al.

Recent progress in advanced driver assistance systems and the race towards autonomous vehicles are mainly driven by two factors: (1) increasingly sophisticated algorithms that interpret the environment around the vehicle and react accordingly, and (2) continuous improvements of the sensor technology itself. For cameras, these improvements typically mean higher spatial resolution, which in turn requires more data to be processed. The trend towards multiple cameras covering the vehicle's entire surroundings only adds to this load. At the same time, an increasing number of special-purpose algorithms need access to the sensor input data to correctly interpret the various complex situations that can occur, particularly in urban traffic. These trends make clear that a key challenge for vision architectures in intelligent vehicles is to share computational resources. We believe this challenge should be addressed by introducing a representation of the sensory data that provides compressed and structured access to all relevant visual content of the scene. The Stixel World discussed in this paper is such a representation: a medium-level model of the environment that is specifically designed to compress information about obstacles by leveraging the typical layout of outdoor traffic scenes. It has proven useful for a multitude of automotive vision applications, including object detection, tracking, segmentation, and mapping. In this paper, we summarize the ideas behind the model and generalize it to take into account multiple dense input streams: the image itself, stereo depth maps, and semantic class probability maps that can be generated, e.g., by CNNs. Our generalization is embedded into a novel mathematical formulation for the Stixel model. We further sketch how the free parameters of the model can be learned using structured SVMs.
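To make the compression idea more concrete, the following minimal sketch shows one possible way to represent a Stixel World as a data structure: each image column is summarized by a small number of stick-like segments, each carrying its vertical extent, a representative stereo disparity, and semantic class probabilities, matching the three input streams named in the abstract. The field names and types are illustrative assumptions, not the data structure used in the paper.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Stixel:
    """One stick-like segment of an image column (illustrative sketch, not the paper's API)."""
    column: int                     # image column index u
    v_top: int                      # top image row of the segment
    v_bottom: int                   # bottom image row of the segment (v_bottom >= v_top)
    disparity: float                # representative stereo disparity (inverse depth)
    class_probs: Dict[str, float]   # e.g. {"ground": 0.1, "object": 0.85, "sky": 0.05}

    @property
    def label(self) -> str:
        """Most likely semantic class of this segment."""
        return max(self.class_probs, key=self.class_probs.get)


# A "Stixel World" under this sketch: a handful of segments per column
# instead of hundreds of thousands of pixels -- compressed, yet structured.
StixelWorld = List[List[Stixel]]  # outer index: image column
```

Under such a representation, a downstream application like object detection, tracking, or mapping would iterate over a few hundred Stixels per frame rather than over every pixel, which is the kind of compressed, structured access to the scene that the abstract refers to.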
