Taxonomy of Pathways to Dangerous AI

11/10/2015
by Roman V. Yampolskiy, et al.

In order to properly handle a dangerous Artificially Intelligent (AI) system, it is important to understand how the system came to be in such a state. In popular culture (science fiction movies and books), AIs/robots become self-aware, rebel against humanity, and decide to destroy it. While this is one possible scenario, it is probably the least likely path to the appearance of dangerous AI. In this work, we survey, classify, and analyze a number of circumstances that might lead to the arrival of malicious AI. To the best of our knowledge, this is the first attempt to systematically classify types of pathways leading to malevolent AI. Previous relevant work has either surveyed specific goals/meta-rules that might lead to malevolent behavior in AIs (Özkural, 2014) or reviewed specific undesirable behaviors AGIs can exhibit at different stages of their development (Turchin, 2015).

