Ethical Artificial Intelligence

by Bill Hibbard, et al.

This book-length article combines several peer reviewed papers and new material to analyze the issues of ethical artificial intelligence (AI). The behavior of future AI systems can be described by mathematical equations, which are adapted to analyze possible unintended AI behaviors and ways that AI designs can avoid them. This article makes the case for utility-maximizing agents and for avoiding infinite sets in agent definitions. It shows how to avoid agent self-delusion using model-based utility functions and how to avoid agents that corrupt their reward generators (sometimes called "perverse instantiation") using utility functions that evaluate outcomes at one point in time from the perspective of humans at a different point in time. It argues that agents can avoid unintended instrumental actions (sometimes called "basic AI drives" or "instrumental goals") by accurately learning human values. This article defines a self-modeling agent framework and shows how it can avoid problems of resource limits, being predicted by other agents, and inconsistency between the agent's utility function and its definition (one version of this problem is sometimes called "motivated value selection"). This article also discusses how future AI will differ from current AI, the politics of AI, and the ultimate use of AI to help understand the nature of the universe and our place in it.
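As a minimal illustration of the utility-maximizing agents the article analyzes, the sketch below shows an agent that scores each available action by the expected utility of outcomes sampled from its environment model. All names here (`model`, `utility`, `choose_action`) are hypothetical and not drawn from the article's formalism; this is a toy Monte Carlo estimate, not the article's definition.

```python
import random

def expected_utility(action, model, utility, n_samples=200):
    """Estimate the expected utility of an action by sampling
    outcomes from the agent's (stochastic) environment model."""
    return sum(utility(model(action)) for _ in range(n_samples)) / n_samples

def choose_action(actions, model, utility):
    """A utility-maximizing agent selects the action with the
    highest estimated expected utility."""
    return max(actions, key=lambda a: expected_utility(a, model, utility))

# Toy environment model: action "b" tends to yield better outcomes.
def model(action):
    return {"a": random.gauss(0.3, 0.1), "b": random.gauss(0.7, 0.1)}[action]

# Utility function over outcomes; here, the outcome value itself.
utility = lambda outcome: outcome

print(choose_action(["a", "b"], model, utility))
```

Model-based utility functions, as the abstract uses the term, would apply `utility` to the agent's internal model of the world state rather than to raw observations, which is what blocks the self-delusion problem.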




