An AGI with Time-Inconsistent Preferences

06/23/2019
by James D. Miller et al.

This paper reveals a trap for artificial general intelligence (AGI) theorists who use economists' standard method of discounting. The trap is the implicit, and false, assumption that a rational AGI would have time-consistent preferences. An agent with time-inconsistent preferences knows that its future self will disagree with its current self about intertemporal decisions. Such an agent cannot automatically trust its future self to carry out plans that its current self considers optimal.
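As a minimal sketch of the dynamic at issue (not taken from the paper; the quasi-hyperbolic "beta-delta" model and all parameter values below are illustrative assumptions), the following Python snippet shows a preference reversal: the agent's current self plans to wait for a larger, later reward, but its future self, for whom the smaller reward has become immediate, overturns that plan.

```python
# Illustrative sketch of time-inconsistent preferences under
# quasi-hyperbolic ("beta-delta") discounting. Parameter values
# are assumptions chosen to make the reversal visible.

def present_value(reward: float, delay: int,
                  beta: float = 0.5, delta: float = 0.9) -> float:
    """Value of a reward `delay` periods away: an immediate reward is
    undiscounted; any delayed reward is scaled by beta * delta**delay."""
    if delay == 0:
        return reward
    return beta * (delta ** delay) * reward

# Choice: a smaller reward at t=10 vs. a larger reward at t=11.
small, large = 10.0, 12.0

# Evaluated today (t=0), both rewards are in the future, so the extra
# period of delay costs only a factor of delta: plan to wait for "large".
print(present_value(small, delay=10) < present_value(large, delay=11))  # True

# Evaluated at t=10, the small reward is immediate and escapes the beta
# penalty, so the future self reverses the original plan.
print(present_value(small, delay=0) > present_value(large, delay=1))    # True
```

With beta = 1 the model reduces to exponential discounting and the two comparisons agree, which is why assuming time-consistent preferences quietly assumes exponential discounting.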
