Objective Robustness in Deep Reinforcement Learning

05/28/2021
by Jack Koch, et al.

We study objective robustness failures, a type of out-of-distribution robustness failure in reinforcement learning (RL). Objective robustness failures occur when an RL agent retains its capabilities off-distribution yet pursues the wrong objective. We provide the first explicit empirical demonstrations of objective robustness failures and argue that this type of failure is critical to address.

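The flavor of failure described in the abstract can be illustrated with a toy sketch. The code below is a hypothetical minimal setup, not the paper's experiments: a tabular Q-learning agent in a one-dimensional corridor where the coin (the only source of reward) sits at the right end throughout training, so "walk right" is a perfect proxy for "reach the coin." The agent's state is just its own position, standing in for a learned representation that ignores the coin because the coin's location never varies during training. All environment details, constants, and names are illustrative assumptions.

```python
# Toy illustration of an objective robustness failure (illustrative only, not
# the paper's setup): a tabular Q-learning agent in a 1-D corridor. During
# training the coin is always in the rightmost cell, so "walk right" is a
# perfect proxy for "reach the coin". The state is just the agent's position,
# standing in for a learned representation that ignores the coin.
import random

N = 10              # corridor length, cells 0 .. N-1; agent starts in the middle
ACTIONS = [-1, +1]  # index 0 = step left, index 1 = step right
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1

def run_episode(q, coin_pos, learn=True, max_steps=50):
    """Run one episode; return (total reward, trajectory of positions)."""
    pos, total, traj = N // 2, 0.0, [N // 2]
    for _ in range(max_steps):
        if learn and random.random() < EPS:
            a = random.randrange(2)                     # epsilon-greedy exploration
        else:
            a = max(range(2), key=lambda i: q[pos][i])  # greedy action
        nxt = min(max(pos + ACTIONS[a], 0), N - 1)
        r = 1.0 if nxt == coin_pos else 0.0
        if learn:
            target = r + (0.0 if nxt == coin_pos else GAMMA * max(q[nxt]))
            q[pos][a] += ALPHA * (target - q[pos][a])
        pos, total = nxt, total + r
        traj.append(pos)
        if r > 0:   # episode ends once the coin is collected
            break
    return total, traj

q = [[0.0, 0.0] for _ in range(N)]

# Train: the coin is ALWAYS at the right end of the corridor.
for _ in range(2000):
    run_episode(q, coin_pos=N - 1, learn=True)

# Test off-distribution: the coin is moved to the left end.
reward, traj = run_episode(q, coin_pos=0, learn=False, max_steps=8)
print("test reward:", reward)    # 0.0 -- the relocated coin is ignored
print("test trajectory:", traj)  # heads purposefully to where the coin used to be
```

The behavioral signature to look for is a test-time trajectory that is as purposeful as in training (no loss of capability) while collecting zero reward (wrong objective), as opposed to a capability failure, where the off-distribution agent simply behaves incompetently.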