Accelerated Primal-Dual Policy Optimization for Safe Reinforcement Learning

02/19/2018
by   Qingkai Liang, et al.

The Constrained Markov Decision Process (CMDP) is a natural framework for reinforcement learning tasks with safety constraints, where agents learn a policy that maximizes the long-term reward while satisfying constraints on the long-term cost. A canonical approach to solving CMDPs is the primal-dual method, which alternately updates parameters in the primal and dual spaces. Existing methods for CMDPs use only on-policy data for dual updates, which results in sample inefficiency and slow convergence. In this paper, we propose a policy search method for CMDPs called Accelerated Primal-Dual Optimization (APDO), which incorporates an off-policy trained dual variable in the dual update procedure while updating the policy in the primal space with an on-policy likelihood-ratio gradient. Experimental results on a simulated robot locomotion task show that APDO achieves better sample efficiency and faster convergence than state-of-the-art approaches for CMDPs.
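To make the primal-dual scheme concrete, the sketch below runs alternating primal and dual updates on a hypothetical one-step CMDP (all numbers are illustrative, not from the paper): the primal step is a likelihood-ratio policy-gradient ascent on the Lagrangian, and the dual step is projected gradient ascent on the Lagrange multiplier driven by the constraint violation. This is the baseline on-policy primal-dual loop the paper improves on, not APDO itself.

```python
import math

def softmax(theta):
    m = max(theta)
    e = [math.exp(t - m) for t in theta]
    s = sum(e)
    return [x / s for x in e]

# Hypothetical one-step CMDP for illustration: action 1 earns more reward
# but also incurs cost, and the expected cost must stay below `limit`.
rewards, costs, limit = [1.0, 2.0], [0.0, 1.0], 0.5

theta = [0.0, 0.0]   # policy parameters (softmax logits)
lam = 0.0            # dual variable (Lagrange multiplier), kept >= 0
eta = 0.05           # step size shared by primal and dual updates

cost_history = []
for _ in range(2000):
    pi = softmax(theta)
    # Per-action Lagrangian payoff: r(a) - lam * c(a)
    adv = [rewards[a] - lam * costs[a] for a in range(2)]
    baseline = sum(p * v for p, v in zip(pi, adv))
    # Primal step: likelihood-ratio (policy-gradient) ascent on the Lagrangian
    for a in range(2):
        theta[a] += eta * pi[a] * (adv[a] - baseline)
    # Dual step: ascend on the constraint violation, projected onto lam >= 0
    exp_cost = sum(p * c for p, c in zip(pi, costs))
    lam = max(0.0, lam + eta * (exp_cost - limit))
    cost_history.append(exp_cost)

# Plain gradient-descent-ascent oscillates around the saddle point, so the
# time-averaged iterates are the meaningful output here.
avg_cost = sum(cost_history[-1000:]) / 1000
```

Note the dynamics oscillate around the saddle point rather than converging last-iterate, which is one reason averaged iterates (and, in the paper, better-initialized dual variables) matter: here the running average of the expected cost settles near the limit of 0.5.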

