Counterfactual Optimism: Rate Optimal Regret for Stochastic Contextual MDPs

11/27/2022
by Orin Levy, et al.

We present the UC^3RL algorithm for regret minimization in Stochastic Contextual MDPs (CMDPs). The algorithm operates under the minimal assumptions of a realizable function class and access to offline least squares and log loss regression oracles. Our algorithm is efficient (assuming efficient offline regression oracles) and enjoys an O(H^3 √(T |S| |A| (log(|ℱ|/δ) + log(|𝒫|/δ)))) regret guarantee, where T is the number of episodes, S the state space, A the action space, H the horizon, and 𝒫 and ℱ are finite function classes used to approximate the context-dependent dynamics and rewards, respectively. To the best of our knowledge, ours is the first efficient and rate-optimal regret minimization algorithm for CMDPs that operates under the general offline function approximation setting.


Related research

03/02/2023 · Efficient Rate Optimal Regret for Adversarial Contextual MDPs Using Online Function Approximation
We present the OMG-CMDP! algorithm for regret minimization in adversaria...

07/22/2022 · Optimism in Face of a Context: Regret Guarantees for Stochastic Contextual MDP
We present regret minimization algorithms for stochastic contextual MDPs...

03/28/2020 · Bypassing the Monster: A Faster and Simpler Optimal Algorithm for Contextual Bandits under Realizability
We consider the general (stochastic) contextual bandit problem under the...

03/02/2022 · Learning Efficiently Function Approximation for Contextual MDP
We study learning contextual MDPs using a function approximation for bot...

09/09/2020 · Improved Exploration in Factored Average-Reward MDPs
We consider a regret minimization task under the average-reward criterio...

06/15/2022 · Corruption-Robust Contextual Search through Density Updates
We study the problem of contextual search in the adversarial noise model...

11/01/2021 · Intervention Efficient Algorithm for Two-Stage Causal MDPs
We study Markov Decision Processes (MDP) wherein states correspond to ca...
