Turing's Red Flag

10/30/2015
by Toby Walsh

Sometime in the future we will have to deal with the impact of AIs being mistaken for humans. For this reason, I propose that any autonomous system should be designed so that it is unlikely to be mistaken for anything besides an autonomous system, and should identify itself at the start of any interaction with another agent.

Related research

07/11/2018  Gnirut: The Trouble With Being Born Human In An Autonomous World
What if we delegated so much to autonomous AI and intelligent machines t...

03/12/2018  Taking Turing by Surprise? Designing Digital Computers for morally-loaded contexts
There is much to learn from what Turing hastily dismissed as Lady Lovela...

03/30/2017  Enter the Matrix: A Virtual World Approach to Safely Interruptable Autonomous Systems
Robots and autonomous systems that operate around humans will likely alw...

02/20/2019  Empathic Autonomous Agents
Identifying and resolving conflicts of interests is a key challenge when...

05/31/2023  Human or Not? A Gamified Approach to the Turing Test
We present "Human or Not?", an online game inspired by the Turing test, ...

06/27/2016  Can Turing machine be curious about its Turing test results? Three informal lectures on physics of intelligence
What is the nature of curiosity? Is there any scientific way to understa...

04/30/2021  Human-Machine Interaction in the Light of Turing and Wittgenstein
We propose a study of the constitution of meaning in human-computer inte...
