Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences

12/02/2017
by Tim Miller, et al.

In his seminal book 'The Inmates are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity' (Sams, Indianapolis, IN, USA, 2004), Alan Cooper argues that a major reason software is often poorly designed (from a user perspective) is that programmers, rather than interaction designers, are in charge of design decisions. As a result, programmers design software for themselves rather than for their target audience, a phenomenon he calls the 'inmates running the asylum'. This paper argues that explainable AI risks a similar fate. While the re-emergence of explainable AI is positive, most of us as AI researchers are building explanatory agents for ourselves rather than for the intended users. Explainable AI is more likely to succeed if researchers and practitioners understand, adopt, implement, and improve models from the vast and valuable bodies of research in philosophy, psychology, and cognitive science, and if evaluation of these models focuses more on people than on technology. From a light scan of the literature, we demonstrate that there is considerable scope to infuse more results from the social and behavioural sciences into explainable AI, and we present some key results from these fields that are relevant to explainable AI.


