Safe Reinforcement Learning From Pixels Using a Stochastic Latent Representation

10/02/2022
by Yannick Hogewind et al.

We address the problem of safe reinforcement learning from pixel observations. Inherent challenges in such settings are (1) a trade-off between reward optimization and adherence to safety constraints, (2) partial observability, and (3) high-dimensional observations. We formalize the problem in a constrained, partially observable Markov decision process framework, where an agent obtains distinct reward and safety signals. To address the curse of dimensionality, we employ a novel safety critic using the stochastic latent actor-critic (SLAC) approach. The latent variable model predicts rewards and safety violations, and we use the safety critic to train safe policies. On well-known benchmark environments, we demonstrate performance competitive with existing approaches with respect to computational requirements, final reward return, and satisfaction of the safety constraints.
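The trade-off between reward optimization and constraint satisfaction that the abstract describes is commonly handled with a Lagrangian formulation: the actor maximizes reward minus a multiplier-weighted cost estimate from a separate safety critic, while the multiplier is adapted by dual ascent. The sketch below illustrates only that dual-update mechanism in miniature; the class and parameter names (`SafetyLagrangian`, `cost_limit`) are illustrative assumptions, not the paper's actual implementation.

```python
class SafetyLagrangian:
    """Minimal sketch of a Lagrangian trade-off between reward and safety.

    The actor would ascend  Q_reward(s, a) - lam * Q_cost(s, a),
    where Q_cost comes from a separate safety critic, while lam is
    adapted so expected episode cost stays below `cost_limit`.
    NOTE: hypothetical illustration, not the paper's algorithm.
    """

    def __init__(self, cost_limit: float, lr: float = 0.1):
        self.cost_limit = cost_limit   # constraint threshold d
        self.lr = lr                   # step size for the dual update
        self.lam = 0.0                 # Lagrange multiplier, kept >= 0

    def penalized_value(self, q_reward: float, q_cost: float) -> float:
        # Objective the policy update would ascend: reward minus weighted cost.
        return q_reward - self.lam * q_cost

    def update(self, episode_cost: float) -> float:
        # Dual ascent: raise lam when the constraint is violated,
        # shrink it (floored at 0) when there is slack.
        self.lam = max(0.0, self.lam + self.lr * (episode_cost - self.cost_limit))
        return self.lam


# Toy usage: repeated constraint violations grow lam, which in turn
# penalizes costly actions in the actor's objective.
dual = SafetyLagrangian(cost_limit=1.0)
for _ in range(5):
    dual.update(episode_cost=3.0)
print(dual.lam)
```

When the observed cost stays under the limit, `lam` decays back toward zero, so the agent recovers pure reward maximization; this is why the dual update, rather than a fixed penalty weight, mediates the reward/safety trade-off.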
