Causal Campbell-Goodhart's law and Reinforcement Learning

11/02/2020
by Hal Ashton, et al.

Campbell-Goodhart's law relates to the causal inference error whereby decision-making agents aim to influence variables that are correlated with their goal objective but do not reliably cause it. This is a well-known error in Economics and Political Science, but it is not widely labelled in Artificial Intelligence research. Through a simple example, we show that off-the-shelf deep Reinforcement Learning (RL) algorithms are not necessarily immune to this cognitive error: the off-policy learning method is tricked, whilst the on-policy method is not. The practical implication is that naive application of RL to complex real-life problems can produce the same types of policy errors that humans make. Great care should be taken in understanding the causal model that underpins any solution derived from Reinforcement Learning.
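The error the abstract describes can be illustrated with a minimal causal sketch (not the paper's actual experiment, which uses deep RL): a hidden cause C drives both a proxy variable B and the reward R, so B and R are strongly correlated observationally, yet an agent that intervenes on B gains nothing, because B does not cause R. All variable names and noise scales here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden common cause C drives both the proxy B and the reward R.
C = rng.normal(size=n)
B = C + 0.1 * rng.normal(size=n)  # proxy: correlated with R only via C
R = C + 0.1 * rng.normal(size=n)  # reward: caused by C, not by B

# Observationally, B looks like an excellent predictor of reward ...
obs_corr = np.corrcoef(B, R)[0, 1]

# ... but under the intervention do(B = 2.0) -- the agent forcing the
# proxy high -- the reward distribution is unchanged, since R depends
# only on C. An agent that learned "increase B" from correlations alone
# would be committing the Campbell-Goodhart error.
B_do = np.full(n, 2.0)
R_after_do = C + 0.1 * rng.normal(size=n)

print(f"observational corr(B, R): {obs_corr:.2f}")   # near 1
print(f"mean reward under do(B=2): {R_after_do.mean():.2f}")  # still near 0
```

An off-policy learner fitting value estimates from logged data sees only the observational correlation; whether the learned policy exploits B or C depends on the causal structure, which the data alone does not reveal.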


