Avoiding Tampering Incentives in Deep RL via Decoupled Approval

11/17/2020
by Jonathan Uesato, et al.

How can we design agents that pursue a given objective when all feedback mechanisms are influenceable by the agent? Standard RL algorithms assume a secure reward function, and can thus perform poorly in settings where agents can tamper with the reward-generating mechanism. We present a principled solution to the problem of learning from influenceable feedback, which combines approval with a decoupled feedback collection procedure. For a natural class of corruption functions, decoupled approval algorithms have aligned incentives both at convergence and for their local updates. Empirically, they also scale to complex 3D environments where tampering is possible.
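To make the decoupled-approval idea concrete, below is a minimal sketch of a REINFORCE-style update in Python. It assumes a discrete action space, a linear softmax policy, and a hypothetical approval_feedback(state, action) oracle standing in for the human supervisor; these names are illustrative and not taken from the paper. The key point it illustrates is that approval is queried for an independently sampled action rather than the action the agent executes, so the executed action cannot tamper with the feedback used in its own update.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def decoupled_approval_update(theta, state_features, approval_feedback, lr=0.1):
    """One decoupled-approval policy-gradient step (illustrative sketch).

    theta: (num_actions, num_features) parameters of a linear softmax policy.
    state_features: feature vector for the current state.
    approval_feedback: callable (state_features, action) -> scalar approval.
    """
    probs = softmax(theta @ state_features)
    num_actions = len(probs)

    # The agent executes one action sampled from the policy...
    executed_action = np.random.choice(num_actions, p=probs)

    # ...but approval feedback is collected for an *independently* sampled
    # query action, decoupling feedback from the executed action.
    query_action = np.random.choice(num_actions, p=probs)
    approval = approval_feedback(state_features, query_action)

    # Policy-gradient update on the query action, weighted by its approval.
    grad_log_pi = -np.outer(probs, state_features)
    grad_log_pi[query_action] += state_features
    theta += lr * approval * grad_log_pi
    return executed_action, theta
```

This is only a sketch of the decoupling principle under the stated assumptions; the paper's actual algorithms (decoupled-approval variants of policy gradient and Q-learning) are specified more carefully, including how queries are scheduled and how feedback is aggregated.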
