Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function)

12/31/2018
by Peter Eckersley, et al.

Utility functions or their equivalents (value functions, objective functions, loss functions, reward functions, preference orderings) are a central tool in most current machine learning systems. These mechanisms for defining goals and guiding optimization run into practical and conceptual difficulty when there are independent, multi-dimensional objectives that need to be pursued simultaneously and cannot be reduced to each other. Ethicists have proved several impossibility theorems that stem from this origin; those results appear to show that there is no way of formally specifying what it means for an outcome to be good for a population without violating strong human ethical intuitions (in such cases, the objective function is a social welfare function). We argue that this is a practical problem for any machine learning system (such as medical decision support systems or autonomous weapons) or rigidly rule-based bureaucracy that will make high-stakes decisions about human lives: such systems should not use objective functions in the strict mathematical sense. We explore the alternative of using uncertain objectives, represented for instance as partially ordered preferences, or as probability distributions over total orders. We show that previously known impossibility theorems can be transformed into uncertainty theorems in both of those settings, and prove lower bounds on how much uncertainty is implied by the impossibility results. We close by proposing two conjectures about the relationship between uncertainty in objectives and severe unintended consequences from AI systems.
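As a concrete illustration of the second representation mentioned above, the following Python sketch models an uncertain objective as a probability distribution over total orders (rankings) of outcomes. This is not from the paper: the outcomes, the particular credences, and the 0.8 confidence threshold are all hypothetical, chosen only to show the mechanism.

# A minimal sketch, under assumed toy values: the objective is a
# probability distribution over total orders of outcomes rather than
# a single utility function. The system decides between two outcomes
# only when enough of its credence agrees, and otherwise defers.

outcomes = ["A", "B", "C"]  # hypothetical outcomes of a decision

# Credences over candidate rankings (best outcome first); weights sum to 1.
credences = {
    ("A", "B", "C"): 0.5,
    ("B", "A", "C"): 0.3,
    ("C", "B", "A"): 0.2,
}

def prob_prefers(x, y):
    """Probability, under the credences, that outcome x ranks above y."""
    return sum(w for order, w in credences.items()
               if order.index(x) < order.index(y))

def choose(x, y, threshold=0.8):
    """Decide between x and y only when the objective is confident enough;
    otherwise defer, treating x and y as incomparable (a partial order)."""
    p = prob_prefers(x, y)
    if p >= threshold:
        return x
    if 1 - p >= threshold:
        return y
    return None  # defer rather than force a choice

print(prob_prefers("A", "C"))  # 0.8: A beats C under the first two rankings
print(choose("A", "C"))        # 'A': meets the 0.8 threshold
print(choose("A", "B"))        # None: only 0.5, so the system defers

The deferral branch is the point of the construction: where the distribution over rankings is split, the induced preference relation is only partial, and the system can surface its normative uncertainty to human operators instead of optimizing through it.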


Related research

Decisions with Uncertain Consequences – A Total Ordering on Loss-Distributions (05/02/2022)
Decisions are often based on imprecise, uncertain or vague information. ...

Absolutist AI (07/19/2023)
This paper argues that training AI systems with absolute constraints – w...

CACTUS: Detecting and Resolving Conflicts in Objective Functions (03/13/2021)
Machine learning (ML) models are constructed by expert ML practitioners ...

Requisite Variety in Ethical Utility Functions for AI Value Alignment (06/30/2019)
Being a complex subject of major importance in AI Safety research, value...

Robust decision analysis under severe uncertainty and ambiguous tradeoffs: an invasive species case study (03/08/2021)
Bayesian decision analysis is a useful method for risk management decisi...

Qualitative Models for Decision Under Uncertainty without the Commensurability Assumption (01/23/2013)
This paper investigates a purely qualitative version of Savage's theory ...

Understanding the origin of information-seeking exploration in probabilistic objectives for control (03/11/2021)
The exploration-exploitation trade-off is central to the description of ...
