Political economy of superhuman AI

09/25/2022
by Mehmet S. Ismail, et al.

In this note, I study the institutions and game-theoretic assumptions that would prevent the emergence of "superhuman-level" artificial general intelligence, denoted by AI*. These assumptions are (i) the "Freedom of the Mind," (ii) open-source "access" to AI*, and (iii) rationality of the representative human agent who competes against AI*. I prove that under these three assumptions it is impossible for an AI* to exist. This result gives rise to two immediate recommendations for public policy. First, digitally "cloning" the human brain should be strictly regulated, and a hypothetical AI*'s access to the human brain should be prohibited. Second, AI* research should be made widely, if not publicly, accessible.
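One illustrative way to read the impossibility claim, offered only as a sketch under assumed notation (the strategy sets $S_H$ and $S_{A^*}$, the payoff $u$, and the symmetric zero-sum modeling of the competition are not given in the abstract and are labeled here as assumptions):

\[
S_{A^*} \subseteq S_H \qquad \text{(open access: the rational human can run any strategy available to AI*)}.
\]

If the human-vs-AI* competition is modeled as a symmetric zero-sum game with payoff $u$ to AI*, then for every strategy $s_A \in S_{A^*}$ the human may simply mirror it, so
\[
\min_{s_H \in S_H} u(s_A, s_H) \;\le\; u(s_A, s_A) \;=\; 0 .
\]

Under this reading, AI* cannot guarantee strictly outperforming the rational human, contradicting the defining property of a "superhuman" agent, while the Freedom of the Mind assumption prevents AI* from escaping the bound by reading or cloning the human's strategy.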


