Non-Asimov Explanations: Regulating AI through Transparency

11/25/2021
by Chris Reed et al.

An important part of law and regulation is demanding explanations for actual and potential failures. We ask questions like: What happened (or might happen) to cause this failure? And why did (or might) it happen? These are disguised normative questions: they really ask what ought to have happened, and how the humans involved ought to have behaved. To answer the normative questions, law and regulation seek a narrative explanation, a story. At present, we seek these kinds of narrative explanations from AI technology, because as humans we seek to understand technology's workings by constructing a story to explain them. Our cultural history makes this inevitable: authors like Asimov, writing narratives about future AI technologies such as intelligent robots, have told us that those technologies act in ways explainable by the narrative logic we use to explain human actions, and so can be explained to us in the same terms. This is, at least currently, not true. This work argues that we can only solve this problem by working from both sides. Technologists will need to find ways to tell us stories which law and regulation can use. But law and regulation will also need to accept new kinds of narratives, ones that tell stories about fundamental legal and regulatory concepts like fairness and reasonableness which differ from those we are used to.

