Ethical Artificial Intelligence

11/05/2014
by Bill Hibbard, et al.

This book-length article combines several peer-reviewed papers and new material to analyze the issues of ethical artificial intelligence (AI). The behavior of future AI systems can be described by mathematical equations, which are adapted to analyze possible unintended AI behaviors and ways that AI designs can avoid them. This article makes the case for utility-maximizing agents and for avoiding infinite sets in agent definitions. It shows how to avoid agent self-delusion using model-based utility functions, and how to avoid agents that corrupt their reward generators (sometimes called "perverse instantiation") using utility functions that evaluate outcomes at one point in time from the perspective of humans at a different point in time. It argues that agents can avoid unintended instrumental actions (sometimes called "basic AI drives" or "instrumental goals") by accurately learning human values. This article defines a self-modeling agent framework and shows how it can avoid problems of resource limits, being predicted by other agents, and inconsistency between the agent's utility function and its definition (one version of this problem is sometimes called "motivated value selection"). This article also discusses how future AI will differ from current AI, the politics of AI, and the ultimate use of AI to help understand the nature of the universe and our place in it.
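The abstract's central object is the utility-maximizing agent: an agent that uses a model of its environment to predict the outcomes of candidate actions and selects the action with the highest expected utility. As a minimal sketch only (not Hibbard's formalism; all names, the toy model, and the integer state space are illustrative assumptions), such an agent can be written as:

```python
# Minimal sketch of a finite utility-maximizing agent: pick the action
# whose expected utility, under a predictive model, is highest.
# Everything here is illustrative, not the article's actual definitions.

def expected_utility(model, utility, state, action):
    """Sum the utility of each predicted next state, weighted by the
    model's probability for that state given (state, action)."""
    return sum(p * utility(next_state)
               for next_state, p in model(state, action).items())

def choose_action(model, utility, state, actions):
    """Return the action in `actions` maximizing expected utility."""
    return max(actions,
               key=lambda a: expected_utility(model, utility, state, a))

# Toy environment: states are integers; "up" usually increments the
# state, "down" usually decrements it.
def toy_model(state, action):
    if action == "up":
        return {state + 1: 0.9, state: 0.1}
    return {state - 1: 0.9, state: 0.1}

# With utility(s) = s, the agent prefers "up" from state 0.
best = choose_action(toy_model, lambda s: s, 0, ["up", "down"])
```

A model-based utility function, in the article's sense, evaluates the agent's internal model of the world rather than its raw observations, which is what blocks the self-delusion failure mode: an agent that tampers with its own percepts does not thereby improve the modeled state that its utility function actually scores.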
