Superintelligence cannot be contained: Lessons from Computability Theory

07/04/2016
by Manuel Alfonseca et al.

Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potential catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that such containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) infeasible.
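The infeasibility claim ultimately rests on the undecidability of the halting problem: any general procedure that could decide whether an arbitrary program (run on arbitrary input) behaves harmfully could also decide whether it halts, which is impossible. A minimal sketch of the classic diagonalization argument, using a hypothetical `halts` oracle (not from the paper itself):

```python
def halts(program, data):
    """Hypothetical oracle: returns True iff program(data) terminates.
    Assumed to exist only for the sake of contradiction."""
    raise NotImplementedError("no such total decider can exist")

def paradox(program):
    # Do the opposite of what the oracle predicts about self-application.
    if halts(program, program):
        while True:   # loop forever if the oracle says we would halt
            pass
    return            # halt if the oracle says we would loop

# paradox(paradox) halts iff halts(paradox, paradox) returns False,
# i.e. iff paradox(paradox) does NOT halt -- a contradiction. Hence no
# total halting decider, and so no general containment check that must
# predict a superintelligence's behavior on arbitrary input, can exist.
```

The same diagonal construction underlies the paper's stronger point: a "harm-checking" containment procedure would have to decide a non-trivial semantic property of programs, which Rice's theorem rules out in general.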

