MAAD: A Model and Dataset for "Attended Awareness" in Driving

10/16/2021
by Deepak Gopinath, et al.

We propose a computational model to estimate a person's attended awareness of their environment. We define attended awareness to be those parts of a potentially dynamic scene which a person has attended to in recent history and which they are still likely to be physically aware of. Our model takes as input scene information in the form of a video and noisy gaze estimates, and outputs visual saliency, a refined gaze estimate, and an estimate of the person's attended awareness. In order to test our model, we capture a new dataset with a high-precision gaze tracker including 24.5 hours of gaze sequences from 23 subjects attending to videos of driving scenes. The dataset also contains third-party annotations of the subjects' attended awareness based on observations of their scan path. Our results show that our model is able to reasonably estimate attended awareness in a controlled setting, and in the future could potentially be extended to real egocentric driving data to help enable more effective ahead-of-time warnings in safety systems and thereby augment driver performance. We also demonstrate our model's effectiveness on the tasks of saliency, gaze calibration, and denoising, using both our dataset and an existing saliency dataset. We make our model and dataset available at https://github.com/ToyotaResearchInstitute/att-aware/.
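To make the abstract's input/output contract concrete, below is a minimal, hypothetical sketch in PyTorch: a shared video encoder feeding three heads that produce a saliency map, a refined gaze estimate, and an attended-awareness map from a video clip plus noisy gaze. This is not the authors' architecture; all module and tensor names are assumptions, and the actual MAAD code is available at the linked repository.

```python
# Minimal sketch of the model's interface, assuming a PyTorch implementation.
# Not the authors' MAAD code (see https://github.com/ToyotaResearchInstitute/att-aware/).
import torch
import torch.nn as nn


class AttendedAwarenessModel(nn.Module):
    """Toy stand-in: one shared spatiotemporal encoder, three output heads."""

    def __init__(self, hidden_channels: int = 16):
        super().__init__()
        # Shared encoder over the video clip (B, 3, T, H, W).
        self.encoder = nn.Sequential(
            nn.Conv3d(3, hidden_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Per-pixel heads: visual saliency and attended awareness.
        self.saliency_head = nn.Conv3d(hidden_channels, 1, kernel_size=1)
        self.awareness_head = nn.Conv3d(hidden_channels, 1, kernel_size=1)
        # Gaze head: pooled scene features + noisy gaze -> refined gaze point.
        self.gaze_head = nn.Linear(hidden_channels + 2, 2)

    def forward(self, frames: torch.Tensor, noisy_gaze: torch.Tensor):
        # frames: (B, 3, T, H, W); noisy_gaze: (B, T, 2) in normalized image coords.
        feats = self.encoder(frames)                                # (B, C, T, H, W)
        saliency = torch.sigmoid(self.saliency_head(feats))         # (B, 1, T, H, W)
        awareness = torch.sigmoid(self.awareness_head(feats))       # (B, 1, T, H, W)
        pooled = feats.mean(dim=(3, 4)).permute(0, 2, 1)            # (B, T, C)
        refined_gaze = self.gaze_head(torch.cat([pooled, noisy_gaze], dim=-1))
        return saliency, refined_gaze, awareness


if __name__ == "__main__":
    model = AttendedAwarenessModel()
    clip = torch.rand(1, 3, 8, 64, 64)   # one 8-frame RGB clip
    gaze = torch.rand(1, 8, 2)           # noisy per-frame gaze estimates
    sal, gaze_refined, aware = model(clip, gaze)
    print(sal.shape, gaze_refined.shape, aware.shape)
```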


Related research

04/16/2021 · Noise-Aware Saliency Prediction for Videos with Incomplete Gaze Data
Deep-learning-based algorithms have led to impressive results in visual-...

06/06/2023 · Human-Object Interaction Prediction in Videos through Gaze Following
Understanding the human-object interactions (HOIs) from a video is essen...

12/09/2016 · Following Gaze Across Views
Following the gaze of people inside videos is an important signal for un...

07/18/2023 · Object-aware Gaze Target Detection
Gaze target detection aims to predict the image location where the perso...

10/23/2019 · SalGaze: Personalizing Gaze Estimation Using Visual Saliency
Traditional gaze estimation methods typically require explicit user cali...

09/01/2023 · Taken out of context: On measuring situational awareness in LLMs
We aim to better understand the emergence of `situational awareness' in ...

11/24/2019 · "Looking at the right stuff" – Guided semantic-gaze for autonomous driving
In recent years, predicting driver's focus of attention has been a very ...
