On Assessing Driver Awareness of Situational Criticalities: Multi-modal Bio-sensing and Vision-based Analysis, Evaluations and Insights

09/30/2019
by Siddharth, et al.

Automobiles on our roadways increasingly incorporate advanced driver assistance systems. These changes require us to develop novel perception systems, not only to accurately understand the situation and context of these vehicles, but also to understand the driver's awareness in differentiating between safe and critical situations. The research presented in this paper focuses on this specific problem. Even with the development of wearable and compact multi-modal bio-sensing systems in recent years, their application in the driver-awareness context has been scarcely explored. The capability of simultaneously recording different kinds of bio-sensing data, in addition to traditionally used computer-vision systems, provides exciting opportunities to explore the limitations of these modalities. In this work, we explore the application of three bio-sensing modalities, namely electroencephalogram (EEG), photoplethysmogram (PPG), and galvanic skin response (GSR), along with a camera-based vision system, in the driver-awareness context. We assess the information from these sensors independently and together, using both signal-processing and deep-learning tools. We show that our methods outperform previously reported studies in monitoring driver awareness and detecting hazardous/non-hazardous situations over short time scales of two seconds. We verify our methods by collecting data from twelve subjects on two real-world driving datasets: one publicly available (the KITTI dataset) and one we collected ourselves (the LISA dataset), with the vehicle driven in autonomous mode. This work presents an exhaustive evaluation of multiple sensor modalities on two different datasets for attention monitoring and hazardous-event classification.
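
The abstract describes segmenting synchronized EEG, PPG, and GSR recordings (alongside camera data) into two-second windows and classifying each window as hazardous or non-hazardous. The sketch below is a minimal illustration of that kind of window-level, feature-fused binary classification; the sampling rates, placeholder statistics, synthetic labels, and logistic-regression classifier are assumptions for illustration only and do not reproduce the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): segment synchronized EEG/PPG/GSR
# streams into two-second windows, extract simple placeholder features, and
# train a binary hazardous/non-hazardous classifier. Sampling rates, feature
# choices, labels, and the classifier are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = {"eeg": 256, "ppg": 64, "gsr": 32}   # assumed sampling rates (Hz)
WINDOW_S = 2.0                            # two-second analysis window

def window(signal, fs, window_s=WINDOW_S):
    """Split a 1-D signal into non-overlapping windows of window_s seconds."""
    n = int(fs * window_s)
    n_win = signal.shape[-1] // n
    return signal[..., : n_win * n].reshape(*signal.shape[:-1], n_win, n)

def simple_features(win):
    """Placeholder per-window statistics standing in for modality-specific features."""
    return np.stack([win.mean(-1), win.std(-1), np.ptp(win, axis=-1)], axis=-1)

# Synthetic stand-in recordings: ten minutes per modality, single channel each.
rng = np.random.default_rng(0)
streams = {m: rng.standard_normal(int(fs * 600)) for m, fs in FS.items()}

# Per-modality features, concatenated per window (simple feature-level fusion).
feats = [simple_features(window(s, FS[m])) for m, s in streams.items()]
X = np.concatenate(feats, axis=-1)       # shape: (n_windows, n_features)
y = rng.integers(0, 2, size=X.shape[0])  # stand-in hazardous/non-hazardous labels

clf = LogisticRegression(max_iter=1000)
print("Cross-validated accuracy (chance level on synthetic data):",
      cross_val_score(clf, X, y, cv=5).mean())
```

In the paper's setting, the placeholder statistics would be replaced by modality-specific features (typical choices are EEG band powers, heart-rate variability measures from PPG, and skin-conductance responses from GSR) together with CNN-derived image features from the camera, with each modality evaluated independently and in combination.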
