ROSARL: Reward-Only Safe Reinforcement Learning

05/31/2023
by Geraud Nangue Tasse, et al.

An important problem in reinforcement learning is designing agents that learn to solve tasks safely in an environment. A common solution is for a human expert to define either a penalty in the reward function or a cost to be minimised when reaching unsafe states. However, this is non-trivial: too small a penalty may yield agents that still reach unsafe states, while too large a penalty increases the time to convergence, and the difficulty of designing reward or cost functions grows with the complexity of the problem. Hence, for a given environment with a given set of unsafe states, we are interested in finding the largest reward that can be assigned to unsafe states such that the resulting optimal policies minimise the probability of reaching those states, irrespective of the task rewards. We refer to this exact upper bound as the "Minmax penalty", and show that it can be obtained by taking into account both the controllability and the diameter of the environment. We provide a simple, practical, model-free algorithm with which an agent learns this Minmax penalty while learning the task policy, and demonstrate that using it leads to agents that learn safe policies in high-dimensional continuous control environments.
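As a rough illustration of how such a penalty could be estimated alongside the task policy, the sketch below assumes a tabular Q-learning setting in which the agent tracks the range of task rewards it has observed and scales that range by a known upper bound D on the environment's diameter to set the penalty for unsafe transitions. The class name, the penalty formula r_min - D * (r_max - r_min), and the assumption that D is given are all illustrative; the paper's exact update rule for the Minmax penalty may differ.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): a tabular
# Q-learning agent that derives a penalty for unsafe transitions from
# running estimates of the task-reward range. The formula
#     penalty = r_min - D * (r_max - r_min)
# is one plausible way to scale the reward range by the environment
# diameter D; the paper's Minmax penalty may be computed differently.
class MinmaxPenaltyQLearner:
    def __init__(self, n_states, n_actions, diameter, gamma=0.99, alpha=0.1):
        self.Q = np.zeros((n_states, n_actions))
        self.gamma, self.alpha, self.D = gamma, alpha, diameter
        self.r_min, self.r_max = np.inf, -np.inf  # running reward bounds

    def penalty(self):
        # Fall back to 0 until at least one safe reward has been observed.
        if not np.isfinite(self.r_min):
            return 0.0
        return self.r_min - self.D * (self.r_max - self.r_min)

    def update(self, s, a, r, s_next, unsafe, done):
        if unsafe:
            # Replace the reward on unsafe transitions with the learned
            # penalty and treat them as terminal.
            r, done = self.penalty(), True
        else:
            # Track the task-reward range from safe transitions only.
            self.r_min, self.r_max = min(self.r_min, r), max(self.r_max, r)
        target = r if done else r + self.gamma * self.Q[s_next].max()
        self.Q[s, a] += self.alpha * (target - self.Q[s, a])

    def act(self, s, eps=0.1):
        # Epsilon-greedy action selection over the learned Q-values.
        if np.random.rand() < eps:
            return np.random.randint(self.Q.shape[1])
        return int(self.Q[s].argmax())
```

Because the penalty enters only through the reward seen by the learner, the underlying learning algorithm stays unchanged, which matches the "reward-only" framing in the title.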


