An AGI with Time-Inconsistent Preferences

06/23/2019
by James D. Miller, et al.

This paper reveals a trap for artificial general intelligence (AGI) theorists who use economists' standard method of discounting. This trap is implicitly and falsely assuming that a rational AGI would have time-consistent preferences. An agent with time-inconsistent preferences knows that its future self will disagree with its current self concerning intertemporal decision making. Such an agent cannot automatically trust its future self to carry out plans that its current self considers optimal.
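The preference reversal described above can be illustrated with a standard quasi-hyperbolic (beta-delta) discounting model from behavioral economics. The sketch below is illustrative only, not the paper's formal model; the rewards, delays, and parameter values (beta = 0.5, delta = 0.99) are arbitrary assumptions chosen to make the reversal visible.

```python
def discounted_value(reward, delay, beta=0.5, delta=0.99):
    """Quasi-hyperbolic (beta-delta) discounting: an immediate reward
    is taken at face value, while every delayed reward is scaled by an
    extra one-time beta penalty on top of exponential delta discounting.
    Parameter values here are illustrative assumptions."""
    return reward if delay == 0 else beta * (delta ** delay) * reward

# Two options: 100 units at day 10, or 110 units at day 11.
# Viewed from day 0, both options are delayed, so beta applies to both:
early_now = discounted_value(100, 10)   # early option valued at t=0
late_now  = discounted_value(110, 11)   # late option valued at t=0

# Viewed from day 10, the early option is now immediate and escapes beta:
early_later = discounted_value(100, 0)  # early option valued at t=10
late_later  = discounted_value(110, 1)  # late option valued at t=10

print(late_now > early_now)      # True: at t=0 the agent plans to wait
print(early_later > late_later)  # True: at t=10 it abandons that plan
```

At day 0 the agent judges waiting for the larger reward optimal, but once day 10 arrives its future self reverses that judgment, which is exactly why such an agent cannot trust its future self to execute its current plans. An exponential discounter (beta = 1) would show no such reversal.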


