Virtuously Safe Reinforcement Learning

05/29/2018
by Henrik Aslund, et al.

We show that when a third party, the adversary, steps into the two-party setting (agent and operator) of safely interruptible reinforcement learning, a trade-off must be made between the probability of following the optimal policy in the limit and the probability of escaping a dangerous situation created by the adversary. Work on safely interruptible agents has so far assumed that the agent perceives its environment perfectly (no adversary), and has therefore implicitly set the second probability to zero by explicitly seeking a value of one for the first. We show that (1) agents can be made both interruptible and adversary-resilient, and (2) interruptibility can be made safe in the sense that the agent itself will not seek to avoid it. We also solve the problem that arises when the agent does not become completely greedy, i.e., the issue of safe exploration in the limit. Resilience to perturbed perception, safe exploration in the limit, and safe interruptibility are the three pillars of what we call virtuously safe reinforcement learning.
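To make the abstract's moving parts concrete, here is a minimal Python sketch of a safely interruptible epsilon-greedy Q-learner. This is not the paper's construction; the class, the exploration schedule, and the interruption handling below are illustrative assumptions. It shows the two knobs the abstract discusses: exploration that decays to zero in the limit (so the agent eventually follows its learned policy) and operator interruptions treated as exogenous events that are not learned from, so the agent acquires no incentive to avoid them.

import random
from collections import defaultdict

# Sketch of a safely interruptible epsilon-greedy Q-learner. The names,
# schedule, and interruption handling are illustrative assumptions, not
# the authors' algorithm.
class InterruptibleQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)   # Q[(state, action)] -> estimated value
        self.actions = actions
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.t = 0                    # global step counter

    def epsilon(self):
        # Exploration decays to zero in the limit, so the agent
        # eventually follows its learned (greedy) policy.
        return 1.0 / (1.0 + self.t)

    def act(self, state):
        self.t += 1
        if random.random() < self.epsilon():
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state, interrupted):
        # Interruptions are treated as exogenous: the transition is simply
        # not learned from, so the value estimates carry no pressure to
        # steer away from states where the operator tends to interrupt.
        if interrupted:
            return
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

The trade-off the abstract identifies shows up in the epsilon schedule: driving exploration to zero maximizes the probability of following the optimal policy in the limit, but under adversary-perturbed perception it can also leave the agent stuck in a dangerous situation it would only escape by exploring.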


