Learning Efficiently Function Approximation for Contextual MDP

03/02/2022
by Orin Levy, et al.

We study learning contextual MDPs using function approximation for both the rewards and the dynamics. We consider four models: the dynamics may be either known or unknown, and either dependent on or independent of the context. For all four models we derive polynomial sample and time complexity (assuming an efficient ERM oracle). Our methodology gives a general reduction from learning a contextual MDP to supervised learning.
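As a rough illustration of this reduction (a toy sketch, not the paper's construction), the snippet below fits a context-dependent reward function with a single call to an ERM oracle over a small finite function class. The oracle, the function class, and the data-generating process are all hypothetical placeholders; the dynamics would be fit analogously from observed transitions.

```python
import numpy as np

def erm_oracle(features, targets, candidates):
    """Toy ERM oracle: return the candidate with the lowest empirical
    squared error on the labeled data (stand-in for the assumed oracle)."""
    def risk(f):
        preds = np.array([f(x) for x in features])
        return float(np.mean((preds - targets) ** 2))
    return min(candidates, key=risk)

# A tiny finite function class mapping (context, state, action) feature
# vectors to predicted rewards; real instances would be far richer.
weights = [np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0]),
           np.array([0.5, 0.5, 0.0])]
candidates = [lambda x, w=w: float(np.dot(w, x)) for w in weights]

# Simulated interaction data: features of visited (context, state, action)
# triples and the rewards observed for them (hypothetical generator).
rng = np.random.default_rng(0)
true_w = np.array([0.5, 0.5, 0.0])
X = rng.normal(size=(200, 3))
y = X @ true_w + 0.01 * rng.normal(size=200)

# The supervised-learning step of the reduction: one ERM call fits the
# context-dependent reward model; planning then uses the learned models.
reward_hat = erm_oracle(X, y, candidates)
residuals = np.array([reward_hat(x) for x in X]) - y
print("empirical risk of ERM solution:", float(np.mean(residuals ** 2)))
```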



Related research

07/22/2022
Optimism in Face of a Context: Regret Guarantees for Stochastic Contextual MDP
We present regret minimization algorithms for stochastic contextual MDPs...

03/02/2023
Efficient Rate Optimal Regret for Adversarial Contextual MDPs Using Online Function Approximation
We present the OMG-CMDP! algorithm for regret minimization in adversaria...

11/27/2022
Counterfactual Optimism: Rate Optimal Regret for Stochastic Contextual MDPs
We present the UC^3RL algorithm for regret minimization in Stochastic Co...

06/17/2020
A maximum-entropy approach to off-policy evaluation in average-reward MDPs
This work focuses on off-policy evaluation (OPE) with function approxima...

04/27/2015
Algorithms with Logarithmic or Sublinear Regret for Constrained Contextual Bandits
We study contextual bandits with budget and time constraints, referred t...

10/10/2022
Generalized Optimality Guarantees for Solving Continuous Observation POMDPs through Particle Belief MDP Approximation
Partially observable Markov decision processes (POMDPs) provide a flexib...

05/31/2022
Provable General Function Class Representation Learning in Multitask Bandits and MDPs
While multitask representation learning has become a popular approach in...
