Exploration-Exploitation in Constrained MDPs
In many sequential decision-making problems, the goal is to optimize a utility function while satisfying a set of constraints on different utilities. This learning problem is formalized through Constrained Markov Decision Processes (CMDPs). In this paper, we investigate the exploration-exploitation dilemma in CMDPs. While learning in an unknown CMDP, an agent should trade off exploration, to discover new information about the MDP, and exploitation of its current knowledge, to maximize the reward while satisfying the constraints. While the agent will eventually learn a good or optimal policy, we do not want it to violate the constraints too often during the learning process. In this work, we analyze two approaches for learning in CMDPs. The first approach leverages the linear formulation of the CMDP to perform optimistic planning at each episode. The second approach leverages the dual (or saddle-point) formulation of the CMDP to perform incremental, optimistic updates of the primal and dual variables. We show that both approaches achieve sublinear regret w.r.t. the main utility while incurring sublinear regret on the constraint violations. That said, we highlight a crucial difference between the two approaches: the linear programming approach results in stronger guarantees than the dual-formulation-based approach.
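To make the second (primal-dual) approach concrete, below is a minimal Python sketch of a Lagrangian primal-dual loop for a finite-horizon, tabular CMDP with a single constraint. It assumes the transition model is known and omits the optimistic bonus terms that the learning algorithms use to handle an unknown model; all function names, shapes, and the learning rate are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a Lagrangian primal-dual loop for a finite-horizon,
# tabular CMDP with a single constraint. The transition model is assumed
# KNOWN here; the paper's learning algorithms additionally use optimistic
# bonuses to handle an unknown model. All names and constants are illustrative.

import numpy as np


def plan_finite_horizon(P, reward, H):
    """Backward induction for a finite-horizon MDP.

    P: transitions, shape (S, A, S); reward: shape (S, A); H: horizon.
    Returns a deterministic policy pi[h, s].
    """
    S, A, _ = P.shape
    V = np.zeros(S)
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = reward + P @ V          # Q[s, a] = r(s, a) + sum_{s'} P(s'|s, a) V(s')
        pi[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi


def expected_cumulative(P, per_step, pi, H, s0=0):
    """Exact expected cumulative per-step value of policy pi from state s0."""
    S = P.shape[0]
    dist = np.zeros(S)
    dist[s0] = 1.0
    total = 0.0
    for h in range(H):
        a = pi[h]                                   # action chosen in each state at step h
        total += dist @ per_step[np.arange(S), a]   # expected per-step value at step h
        dist = dist @ P[np.arange(S), a]            # push the state distribution forward
    return total


def primal_dual_cmdp(P, r, c, budget, H, iterations=200, lr=0.1):
    """Maximize expected cumulative r subject to expected cumulative c <= budget."""
    lam = 0.0                                       # dual variable for the constraint
    pi = None
    for _ in range(iterations):
        # Primal step: best policy for the Lagrangian reward r - lam * c.
        pi = plan_finite_horizon(P, r - lam * c, H)
        # Dual step: projected gradient ascent on the constraint violation.
        violation = expected_cumulative(P, c, pi, H) - budget
        lam = max(0.0, lam + lr * violation)
    return pi, lam
```

The first (linear programming) approach would instead optimize directly over state-action occupancy measures, with the constraint expressed as a linear inequality over those measures.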