Don't Fear the Reaper: Refuting Bostrom's Superintelligence Argument

02/27/2017
by Sebastian Benthall, et al.

In recent years, prominent intellectuals have raised ethical concerns about the consequences of artificial intelligence. One concern is that an autonomous agent might modify itself to become "superintelligent" and, in supremely effective pursuit of poorly specified goals, destroy all of humanity. This paper considers and rejects the possibility of this outcome. We argue that this scenario depends on an agent's capacity to rapidly improve its ability to predict its environment through self-modification. Using a Bayesian model of a reasoning agent, we show that there are important limitations to how an agent may improve its predictive ability through self-modification alone. We conclude that concern about this artificial intelligence outcome is misplaced and better directed at policy questions around data access and storage.
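As a rough illustration of the abstract's claim, consider a minimal Bayesian sketch; the notation below (hypothesis space \mathcal{H}, prior P(h), observed data D, next observation x) is introduced here for exposition and is not taken from the paper's own formalism. The agent's predictive distribution is

\[
P(x \mid D) = \sum_{h \in \mathcal{H}} P(x \mid h)\, P(h \mid D),
\qquad
P(h \mid D) = \frac{P(D \mid h)\, P(h)}{\sum_{h' \in \mathcal{H}} P(D \mid h')\, P(h')}.
\]

Under this sketch, self-modification can change the prior P(h), the hypothesis space \mathcal{H}, or the efficiency with which the posterior is computed, but it leaves D untouched: the likelihood terms P(D \mid h) are fixed by what the agent has actually observed. Any gain in predictive accuracy from self-modification alone is therefore bounded by the information carried by D, which is why the conclusion points toward policy on data access and storage rather than runaway self-improvement.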

