Value Alignment, Fair Play, and the Rights of Service Robots

03/07/2018
by Daniel Estrada, et al.

Ethics and safety research in artificial intelligence is increasingly framed in terms of "alignment" with human values and interests. I argue that Turing's call for "fair play for machines" is an early and often overlooked contribution to the alignment literature. Turing's appeal to fair play suggests a need to correct human behavior to accommodate our machines, a surprising inversion of how value alignment is treated today. Reflections on "fair play" motivate a novel interpretation of Turing's notorious "imitation game" as a condition not of intelligence but instead of value alignment: a machine demonstrates a minimal degree of alignment (with the norms of conversation, for instance) if it can go undetected under interrogation by a human. I carefully distinguish this interpretation from the Moral Turing Test, which is not motivated by a principle of fair play but instead depends on imitation of human moral behavior. Finally, I consider how the framework of fair play can be used to situate the debate over robot rights within the alignment literature. I argue that extending rights to service robots operating in public spaces is "fair" in precisely the sense that it encourages an alignment of interests between humans and machines.


