Solving the Black Box Problem: A General-Purpose Recipe for Explainable Artificial Intelligence

03/03/2019
by Carlos Zednik, et al.

Many of the computing systems developed using machine learning are opaque: it is difficult to explain why they do what they do, or how they work. The Explainable AI research program aims to develop analytic techniques for rendering such systems transparent, but lacks a general understanding of what it actually takes to do so. The aim of this discussion is to provide a general-purpose recipe for Explainable AI: A series of steps that should be taken to render an opaque computing system transparent. After analyzing the dual notions of 'opacity' and 'transparency', this recipe invokes David Marr's influential levels of analysis framework to characterize the different questions that should be asked about an opaque computing system, as well as the different ways in which these questions should be answered by different agents. By applying this recipe to recent techniques such as input heatmapping, feature-detector identification, and diagnostic classification, it will be possible to determine the extent to which Explainable AI can already solve the so-called Black Box Problem, as well as the extent to which more sophisticated techniques will be needed.
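As a rough illustration of the first technique the abstract mentions, the sketch below computes an input heatmap for a toy model by perturbing one input feature at a time and measuring the change in the model's output. The two-layer network, its random weights, and the finite-difference sensitivity measure are illustrative assumptions introduced here, not the paper's own method; in practice, gradient-based or relevance-propagation heatmapping is applied to trained networks in the same spirit.

import numpy as np

rng = np.random.default_rng(0)

# A toy stand-in for an opaque learned model: a fixed two-layer network
# mapping 8 input features to a single score.
W1, b1 = rng.normal(size=(8, 16)), rng.normal(size=16)
W2, b2 = rng.normal(size=16), rng.normal()

def model(x):
    """Return a scalar score for an 8-dimensional input."""
    h = np.tanh(x @ W1 + b1)
    return float(h @ W2 + b2)

def input_heatmap(x, eps=1e-4):
    """Estimate each feature's local contribution to the score by
    perturbing one feature at a time (finite-difference sensitivity)."""
    base = model(x)
    heat = np.zeros_like(x)
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += eps
        heat[i] = (model(x_pert) - base) / eps  # sensitivity to feature i
    return heat

x = rng.normal(size=8)
print("score:  ", round(model(x), 3))
print("heatmap:", np.round(input_heatmap(x), 3))

Features with large positive or negative heatmap values are the ones the opaque model is most sensitive to at this particular input, which is the kind of "why did it do that here?" answer the paper evaluates input heatmapping against.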
