Godseed: Benevolent or Malevolent?

02/01/2014
by Eray Özkural, et al.

Some thinkers hypothesize that benign-looking AI objectives may give rise to powerful AI drives that pose an existential risk to human society. We analyze this scenario and find its underlying assumptions unlikely. We then examine the alternative scenario in which universal goals that are not human-centric are used for designing AI agents. Following a design approach that tries to exclude malevolent motivations from AI agents, we find that even objectives that seem benevolent may pose significant risk. We consider the following meta-rules: preserve and pervade life and culture, maximize the number of free minds, maximize intelligence, maximize wisdom, maximize energy production, behave like humans, seek pleasure, accelerate evolution, survive, maximize control, and maximize capital. We also discuss various approaches to achieving benevolent behavior, including selfless goals, hybrid designs, Darwinism, universal constraints, semi-autonomy, and generalizations of robot laws. A "prime directive" for AI may help formulate an encompassing constraint against malicious behavior. We hypothesize that social instincts for autonomous robots, such as attachment learning, may be effective. We describe several beneficial near-future scenarios for an advanced semi-autonomous AGI agent, including space exploration and the automation of industries, state functions, and cities. We conclude that a beneficial AI agent with intelligence beyond the human level is possible and has many practical use cases.


