Commonsense Visual Sensemaking for Autonomous Driving: On Generalised Neurosymbolic Online Abduction Integrating Vision and Semantics

12/28/2020
by Jakob Suchan, et al.

We demonstrate the need and potential of systematically integrated vision and semantics solutions for visual sensemaking in the backdrop of autonomous driving. A general neurosymbolic method for online visual sensemaking using answer set programming (ASP) is systematically formalised and fully implemented. The method integrates the state of the art in visual computing, and is developed as a modular framework that is generally usable within hybrid architectures for real-time perception and control. We evaluate and demonstrate with the community-established benchmarks KITTIMOD, MOT-2017, and MOT-2020. As a use-case, we focus on the significance of human-centred visual sensemaking – e.g., involving semantic representation and explainability, question-answering, commonsense interpolation – in safety-critical autonomous driving situations. The developed neurosymbolic framework is domain-independent, with the case of autonomous driving designed to serve as an exemplar for online visual sensemaking in diverse cognitive interaction settings in the backdrop of select human-centred AI technology design considerations.

Keywords: Cognitive Vision, Deep Semantics, Declarative Spatial Reasoning, Knowledge Representation and Reasoning, Commonsense Reasoning, Visual Abduction, Answer Set Programming, Autonomous Driving, Human-Centred Computing and Design, Standardisation in Driving Technology, Spatial Cognition and AI.
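
To make the pattern of ASP-based online abduction concrete, the following is a minimal sketch in clingo-style ASP syntax. It is an illustrative toy encoding, not the encoding used in the paper: the predicates det/2, expected/2, occludes/3 and the two toy objects (car1, truck1) are assumptions introduced purely for this example. The idea is that detections from a vision pipeline are asserted as facts, and a missing detection of a tracked object is explained by abducing a minimal set of occlusion hypotheses.

% Minimal, hypothetical clingo-style sketch (not the paper's actual encoding):
% abduce an occlusion hypothesis to explain a tracked object whose detection
% disappears from the video stream.

time(1..3).

% Toy observations: car1 is detected at frames 1-2 but missing at frame 3;
% truck1 is detected throughout.
det(car1, 1).   det(car1, 2).
det(truck1, 1). det(truck1, 2). det(truck1, 3).

% A detected object is expected to persist into the next frame.
expected(O, T+1) :- det(O, T), time(T+1).

% Abducible hypothesis: some other detected object occludes O at frame T.
{ occludes(O2, O, T) : det(O2, T), O2 != O } 1 :- expected(O, T), not det(O, T).

% Every expected-but-missing detection must be explained by an abduced occlusion.
explained(O, T) :- occludes(_, O, T).
:- expected(O, T), not det(O, T), not explained(O, T).

% Prefer answer sets with as few abduced hypotheses as possible.
#minimize { 1,O2,O,T : occludes(O2, O, T) }.

#show occludes/3.

Solving this with clingo yields a single answer set containing occludes(truck1,car1,3): the missing detection of car1 at frame 3 is explained by occlusion behind truck1. The framework described in the paper operates over the output of state-of-the-art object detection and tracking and considers a broader space of commonsense hypotheses; the snippet above only illustrates the general pattern of ASP-based abduction.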


research · 05/31/2019
Out of Sight But Not Out of Mind: An Answer Set Programming Based Online Abduction Framework for Visual Sensemaking in Autonomous Driving
We demonstrate the need and potential of systematically integrated visio...

research · 10/17/2021
AUTO-DISCERN: Autonomous Driving Using Common Sense Reasoning
Driving an automobile involves the tasks of observing surroundings, then...

research · 12/03/2017
Visual Explanation by High-Level Abduction: On Answer-Set Programming Driven Reasoning about Moving Objects
We propose a hybrid architecture for systematically computing robust vis...

research · 07/19/2023
Explaining Autonomous Driving Actions with Visual Question Answering
The end-to-end learning ability of self-driving vehicles has achieved si...

research · 02/22/2018
Teaching Autonomous Driving Using a Modular and Integrated Approach
Autonomous driving is not one single technology but rather a complex sys...

research · 05/29/2020
Towards a Human-Centred Cognitive Model of Visuospatial Complexity in Everyday Driving
We develop a human-centred, cognitive model of visuospatial complexity i...

research · 10/10/2017
Deep Semantic Abstractions of Everyday Human Activities: On Commonsense Representations of Human Interactions
We propose a deep semantic characterization of space and motion categori...
