Upper Confidence Primal-Dual Optimization: Stochastically Constrained Markov Decision Processes with Adversarial Losses and Unknown Transitions

by Shuang Qiu et al.

We consider online learning for episodic Markov decision processes (MDPs) with stochastic long-term budget constraints, a problem that plays a central role in ensuring the safety of reinforcement learning. Here the loss function can vary arbitrarily across episodes, and both the loss received and the budget consumption are revealed only at the end of each episode. Previous works solve this problem under the restrictive assumption that the transition model of the MDP is known a priori, and establish regret bounds that depend polynomially on the cardinalities of the state space S and the action space A. In this work, we propose a new upper confidence primal-dual algorithm that only requires trajectories sampled from the transition model. In particular, we prove that the proposed algorithm achieves Õ(L|S|√(|A|T)) upper bounds on both the regret and the constraint violation, where L is the length of each episode. Our analysis incorporates a new high-probability drift analysis of Lagrange multiplier processes into the celebrated regret analysis of upper confidence reinforcement learning, demonstrating the power of "optimism in the face of uncertainty" in constrained online learning.
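To make the primal-dual idea concrete, here is a minimal sketch of an upper-confidence primal-dual loop on a toy episodic MDP. This is illustrative only, not the paper's exact algorithm: the toy transition model, the bonus scale, the per-step greedy primal rule, and the dual step size `eta` are all assumptions chosen for brevity.

```python
import numpy as np

def ucpd_sketch(T=200, L=5, nS=2, nA=2, budget=0.5, eta=0.1, seed=0):
    """Sketch: primal step acts optimistically on a Lagrangian of loss
    plus lambda-weighted cost; dual step does projected gradient ascent
    on the multiplier using the observed episode's budget consumption."""
    rng = np.random.default_rng(seed)
    # Unknown true transition model: P[s, a] is a distribution over next states.
    P = rng.dirichlet(np.ones(nS), size=(nS, nA))
    counts = np.ones((nS, nA))        # visit counts driving the optimism bonus
    lam = 0.0                         # Lagrange multiplier (dual variable)
    total_loss, total_violation = 0.0, 0.0
    for _t in range(T):
        loss = rng.uniform(size=(nS, nA))   # stand-in for an adversarial loss
        cost = rng.uniform(size=(nS, nA))   # stochastic budget consumption
        bonus = np.sqrt(np.log(T) / counts) # optimistic exploration bonus
        s, ep_loss, ep_cost = 0, 0.0, 0.0
        for _h in range(L):
            # Primal step: greedy on the optimistic Lagrangian.
            a = int(np.argmin(loss[s] + lam * cost[s] - bonus[s]))
            ep_loss += loss[s, a]
            ep_cost += cost[s, a]
            counts[s, a] += 1
            s = rng.choice(nS, p=P[s, a])
        # Dual step: projected gradient ascent on the multiplier,
        # driven by the episode's budget overshoot (revealed at episode end).
        lam = max(0.0, lam + eta * (ep_cost - budget * L))
        total_loss += ep_loss
        total_violation += max(0.0, ep_cost - budget * L)
    return total_loss, total_violation, lam

loss, violation, lam = ucpd_sketch()
```

The multiplier `lam` grows whenever an episode overshoots its budget and shrinks (down to zero) otherwise, so the primal step is steered back toward feasibility; the paper's drift analysis controls exactly this multiplier process with high probability.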


