Cognitive Accident Prediction in Driving Scenes: A Multimodality Benchmark

12/19/2022
by Jianwu Fang et al.

Traffic accident prediction in driving videos aims to provide an early warning of accident occurrence and to support the decision making of safe driving systems. Previous works usually concentrate on the spatial-temporal correlation of object-level context, but they do not fit the inherent long-tailed data distribution well and are vulnerable to severe environmental changes. In this work, we propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition, in the form of text descriptions of the visual observations and driver attention, to facilitate model training. In particular, the text description provides dense semantic guidance for the primary context of the traffic scene, while the driver attention provides traction toward the critical regions closely correlated with safe driving. CAP is formulated by an attentive text-to-vision shift fusion module, an attentive scene context transfer module, and a driver-attention-guided accident prediction module. We leverage the attention mechanism in these modules to explore the core semantic cues for accident prediction. To train CAP, we extend the existing self-collected DADA-2000 dataset (with driver attention annotated for each frame) with factual text descriptions of the visual observations before each accident. In addition, we construct a new large-scale benchmark, CAP-DATA, consisting of 11,727 in-the-wild accident videos with over 2.19 million frames, labeled with fact-effect-reason-introspection descriptions and temporal accident frame labels. Extensive experiments validate the superiority of CAP over state-of-the-art approaches. The code, CAP-DATA, and all results will be released at <https://github.com/JWFanggit/LOTVS-CAP>.
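To make the module roles concrete, the sketch below shows one plausible way an attentive text-to-vision fusion step and a driver-attention-guided prediction head could be wired together. It is a minimal illustration only: the class names (`TextToVisionFusion`, `AccidentHead`), dimensions, and the attention-weighted pooling scheme are assumptions for exposition, not the authors' implementation, which will be released in the linked repository.

```python
# Hypothetical sketch of cross-modal attentive fusion for accident prediction.
# All module names and shapes are assumptions; the actual CAP architecture
# is defined in the paper and the LOTVS-CAP repository.
import torch
import torch.nn as nn

class TextToVisionFusion(nn.Module):
    """Cross-attention: per-frame visual tokens attend to text-description tokens."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vis_tokens, text_tokens):
        # vis_tokens:  (B, Nv, dim) visual region features for one frame
        # text_tokens: (B, Nt, dim) encoded description tokens
        fused, _ = self.cross_attn(query=vis_tokens, key=text_tokens, value=text_tokens)
        return self.norm(vis_tokens + fused)  # residual fusion of text cues into vision

class AccidentHead(nn.Module):
    """Driver-attention-weighted pooling followed by a per-frame accident score."""
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, fused_tokens, driver_attn):
        # driver_attn: (B, Nv) normalized driver-attention weight per visual region
        pooled = torch.einsum('bn,bnd->bd', driver_attn, fused_tokens)
        return torch.sigmoid(self.score(pooled))  # P(accident) for this frame

# Toy usage with random tensors
B, Nv, Nt, D = 2, 49, 20, 256
vis, txt = torch.randn(B, Nv, D), torch.randn(B, Nt, D)
attn = torch.softmax(torch.randn(B, Nv), dim=-1)
prob = AccidentHead(D)(TextToVisionFusion(D)(vis, txt), attn)
print(prob.shape)  # torch.Size([2, 1])
```

In the full method, a scene context transfer module and temporal modeling across frames would sit on top of such per-frame fusion; those details are specified in the paper rather than in this sketch.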

Related research

DADA: A Large-scale Benchmark and Model for Driver Attention Prediction in Accidental Scenarios (12/18/2019)
Driver attention prediction has recently absorbed increasing attention i...

DADA-2000: Can Driving Accident be Predicted by Driver Attention? Analyzed by A Benchmark (04/23/2019)
Driver attention prediction is currently becoming the focus in safe driv...

A Dynamic Spatial-temporal Attention Network for Early Anticipation of Traffic Accidents (06/18/2021)
Recently, autonomous vehicles and those equipped with an Advanced Driver...

DRIVE: Deep Reinforced Accident Anticipation with Visual Explanation (07/21/2021)
Traffic accident anticipation aims to accurately and promptly predict th...

Uncertainty-based Traffic Accident Anticipation with Spatio-Temporal Relational Learning (08/01/2020)
Traffic accident anticipation aims to predict accidents from dashcam vid...

RoadText-1K: Text Detection & Recognition Dataset for Driving Videos (05/19/2020)
Perceiving text is crucial to understand semantics of outdoor scenes and...

Subjective Annotations for Vision-Based Attention Level Estimation (12/12/2018)
Attention level estimation systems have a high potential in many use cas...
