Explainable AI: A Neurally-Inspired Decision Stack Framework

08/27/2019
by J. L. Olds et al.

European law now requires AI systems to be explainable when they make adverse decisions affecting European Union (EU) citizens. At the same time, AI failures are expected to become more frequent as these systems operate on imperfect data. This paper puts forward a neurally-inspired framework called decision stacks that offers a way forward for research on explainable AI. Leveraging findings from memory systems in biological brains, the decision stack framework operationalizes the definition of explainability and proposes a test that can potentially reveal how a given AI system reached its decision.
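The paper describes the framework only conceptually and publishes no implementation. As a minimal sketch, assuming a decision stack is a layered pipeline that records a human-readable rationale and an intermediate result at each level, it might be modeled as follows; every name here (DecisionStack, Layer, decide, explain) is hypothetical, not from the paper.

```python
# Purely illustrative sketch: the paper does not publish code, so all
# names and structure here are hypothetical. The idea: each layer of a
# "decision stack" records its rationale and intermediate output, so the
# final decision can be traced back level by level.
from dataclasses import dataclass, field
from typing import Any, Callable, List, Tuple


@dataclass
class Layer:
    """One level of the stack: a transformation plus a rationale for it."""
    name: str
    transform: Callable[[Any], Any]
    rationale: str


@dataclass
class DecisionStack:
    """Layered pipeline that keeps a trace of every intermediate result."""
    layers: List[Layer]
    trace: List[Tuple[str, str, Any]] = field(default_factory=list)

    def decide(self, x: Any) -> Any:
        """Run the input through all layers, recording each step."""
        self.trace.clear()
        for layer in self.layers:
            x = layer.transform(x)
            self.trace.append((layer.name, layer.rationale, x))
        return x

    def explain(self) -> str:
        """Replay the recorded trace as a step-by-step explanation."""
        return "\n".join(
            f"{name}: {rationale} -> {value!r}"
            for name, rationale, value in self.trace
        )


if __name__ == "__main__":
    # Toy loan-approval example: each level remains inspectable afterwards.
    stack = DecisionStack(layers=[
        Layer("normalize", lambda income: income / 100_000,
              "scale income to roughly [0, 1]"),
        Layer("score", lambda z: min(1.0, z * 0.8),
              "apply a fixed risk weighting"),
        Layer("threshold", lambda s: s >= 0.5,
              "approve if the risk score is at least 0.5"),
    ])
    print("decision:", stack.decide(85_000))
    print(stack.explain())
```

A trace of this kind is one way to operationalize the test the abstract mentions: querying the stack after the fact shows which level contributed what to the final decision.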

Related research

- 02/21/2023: Aligning Explainable AI and the Law: The European Perspective
  The European Union has proposed the Artificial Intelligence Act intendin...
- 02/28/2023: Expanding Explainability: From Explainable Artificial Intelligence to Explainable Hardware
  The increasing opaqueness of AI and its growing influence on our digital...
- 05/09/2023: Logic for Explainable AI
  A central quest in explainable AI relates to understanding the decisions...
- 07/18/2021: Desiderata for Explainable AI in statistical production systems of the European Central Bank
  Explainable AI constitutes a fundamental step towards establishing fairn...
- 11/25/2022: The European AI Liability Directives – Critique of a Half-Hearted Approach and Lessons for the Future
  The optimal liability framework for AI systems remains an unsolved probl...
- 07/05/2021: An Explainable AI System for the Diagnosis of High Dimensional Biomedical Data
  Typical state-of-the-art flow cytometry data samples consist of measure...
- 07/13/2023: Is Task-Agnostic Explainable AI a Myth?
  Our work serves as a framework for unifying the challenges of contempora...
