A Safety Assurable Human-Inspired Perception Architecture

05/10/2022
by Rick Salay, et al.

Although artificial intelligence-based perception (AIP) using deep neural networks (DNNs) has achieved near human-level performance, its well-known limitations are obstacles to the safety assurance needed in autonomous applications. These include vulnerability to adversarial inputs, inability to handle novel inputs, and non-interpretability. While research into addressing these limitations is active, in this paper we argue that a fundamentally different approach is needed. Inspired by dual-process models of human cognition, in which Type 1 thinking is fast and non-conscious while Type 2 thinking is slow and based on conscious reasoning, we propose a dual-process architecture for safe AIP. We review research on how humans address the simplest non-trivial perception problem, image classification, and sketch a corresponding AIP architecture for this task. We argue that this architecture provides a systematic way of addressing the limitations of DNN-based AIP and an approach to assuring human-level performance and beyond. We conclude by discussing which components of the architecture may already be addressed by existing work and what remains future work.
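To make the dual-process idea concrete, the following is a minimal sketch of how such an architecture could be wired together: a fast Type 1 path (standing in for a DNN classifier) answers directly when confident, and otherwise defers to a slow Type 2 path that checks the candidate with explicit reasoning. All class names, the confidence threshold, and the stub component behaviors are illustrative assumptions, not the paper's actual design.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float

class Type1Classifier:
    """Fast, non-conscious path: stands in for a trained DNN."""
    def predict(self, image) -> Prediction:
        # A real system would run a DNN here; we return a fixed stub
        # with deliberately low confidence to exercise the Type 2 path.
        return Prediction(label="cat", confidence=0.55)

class Type2Reasoner:
    """Slow, deliberative path: checks the Type 1 answer with explicit reasoning."""
    def verify(self, image, candidate: Prediction) -> Prediction:
        # A real system would apply interpretable checks (e.g. part-based
        # consistency); this stub simply confirms the candidate.
        return Prediction(label=candidate.label, confidence=1.0)

class DualProcessPerception:
    def __init__(self, threshold: float = 0.9):
        self.type1 = Type1Classifier()
        self.type2 = Type2Reasoner()
        self.threshold = threshold  # below this, defer to Type 2

    def classify(self, image) -> Prediction:
        fast = self.type1.predict(image)
        if fast.confidence >= self.threshold:
            return fast  # confident fast answer, no deliberation needed
        return self.type2.verify(image, fast)

result = DualProcessPerception().classify(image=None)
print(result.label, result.confidence)
```

The design choice worth noting is that the threshold acts as the hand-off point between the two processes: safety assurance arguments can then be split between the (empirically validated) fast path and the (interpretable, analyzable) slow path.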


Related research

09/11/2023
Divergences in Color Perception between Deep Neural Networks and Humans
Deep neural networks (DNNs) are increasingly proposed as models of human...

01/03/2023
Explainability and Robustness of Deep Visual Classification Models
In the computer vision community, Convolutional Neural Networks (CNNs), ...

12/18/2018
Safety and Trustworthiness of Deep Neural Networks: A Survey
In the past few years, significant progress has been made on deep neural...

07/21/2021
Risk-Based Safety Envelopes for Autonomous Vehicles Under Perception Uncertainty
Ensuring the safety of autonomous vehicles, given the uncertainty in sen...

02/06/2023
Closed-loop Analysis of Vision-based Autonomous Systems: A Case Study
Deep neural networks (DNNs) are increasingly used in safety-critical aut...

09/13/2020
Towards the Quantification of Safety Risks in Deep Neural Networks
Safety concerns on the deep neural networks (DNNs) have been raised when...

08/03/2023
Assessing Systematic Weaknesses of DNNs using Counterfactuals
With the advancement of DNNs into safety-critical applications, testing ...
