Understanding Learned Reward Functions

12/10/2020
by Eric J. Michaud, et al.

In many real-world tasks, it is not possible to procedurally specify an RL agent's reward function. In such cases, a reward function must instead be learned from interacting with and observing humans. However, current techniques for reward learning may fail to produce reward functions which accurately reflect user preferences. Absent significant advances in reward learning, it is thus important to be able to audit learned reward functions to verify whether they truly capture user preferences. In this paper, we investigate techniques for interpreting learned reward functions. In particular, we apply saliency methods to identify failure modes and predict the robustness of reward functions. We find that learned reward functions often implement surprising algorithms that rely on contingent aspects of the environment. We also discover that existing interpretability techniques often attend to irrelevant changes in reward output, suggesting that reward interpretability may need significantly different methods from policy interpretability.
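The abstract describes applying saliency methods to learned reward functions to identify which input features drive the reward output. As a minimal sketch of one common saliency method (input-gradient saliency, here approximated by central finite differences), the example below scores each observation feature of a reward model by the magnitude of the reward's sensitivity to it. The linear reward model and all names are illustrative assumptions, not the paper's actual models or code.

```python
import numpy as np

# Hypothetical "learned" reward model: a fixed linear map, used purely
# for illustration (a real reward model would be a trained network).
w = np.array([0.5, -2.0, 0.0, 1.5])

def reward(obs):
    """Scalar reward for an observation vector."""
    return float(w @ obs)

def gradient_saliency(reward_fn, obs, eps=1e-4):
    """Central finite-difference estimate of |dR/d obs_i| per feature.

    Features with high saliency are those the reward model is most
    sensitive to; features with near-zero saliency are ignored by it.
    """
    obs = np.asarray(obs, dtype=float)
    sal = np.zeros_like(obs)
    for i in range(obs.size):
        bump = np.zeros_like(obs)
        bump[i] = eps
        sal[i] = abs(reward_fn(obs + bump) - reward_fn(obs - bump)) / (2 * eps)
    return sal

obs = np.array([1.0, 1.0, 1.0, 1.0])
print(gradient_saliency(reward, obs))
```

For this linear model the saliency map simply recovers |w|, so the third feature (weight 0.0) is flagged as irrelevant to the reward. On a trained reward network, the same per-feature sensitivity scores can be visualized over observations to audit which parts of the environment the learned reward actually attends to.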


Related research

03/25/2022 · Preprocessing Reward Functions for Interpretability
In many real-world applications, the reward function is too complex to b...

01/09/2023 · On The Fragility of Learned Reward Functions
Reward functions are notoriously difficult to specify, especially for ta...

06/22/2023 · Can Differentiable Decision Trees Learn Interpretable Reward Functions?
There is an increasing interest in learning reward functions that model ...

03/03/2021 · Preference-based Learning of Reward Function Features
Preference-based learning of reward functions, where the reward function...

01/24/2019 · Learning Independently-Obtainable Reward Functions
We present a novel method for learning a set of disentangled reward func...

07/12/2023 · Diagnosis, Feedback, Adaptation: A Human-in-the-Loop Framework for Test-Time Policy Adaptation
Policies often fail due to distribution shift – changes in the state and...

11/20/2022 · Noisy Symbolic Abstractions for Deep RL: A case study with Reward Machines
Natural and formal languages provide an effective mechanism for humans t...
