Online learning in MDPs with linear function approximation and bandit feedback

07/03/2020
by Gergely Neu, et al.

We consider an online learning problem in which the learner interacts with a Markov decision process over a sequence of episodes, where the reward function may change between episodes in an adversarial manner and the learner only observes the rewards associated with its own actions. We allow the state space to be arbitrarily large, but we assume that all action-value functions can be represented as linear functions of a known low-dimensional feature map, and that the learner has access to a simulator of the environment that allows generating trajectories from the true MDP dynamics. Our main contribution is a computationally efficient algorithm, which we call MDP-LinExp3, together with a proof that its regret is bounded by 𝒪(H^2 T^{2/3} (dK)^{1/3}), where T is the number of episodes, H is the number of steps in each episode, K is the number of actions, and d is the dimension of the feature map. We also show that the regret can be improved to 𝒪(H^2 √(TdK)) under much stronger assumptions on the MDP dynamics. To our knowledge, MDP-LinExp3 is the first provably efficient algorithm for this problem setting.
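To make the linear action-value assumption concrete, the sketch below shows an exponential-weights (Exp3-style) policy computed from linear action-value estimates of the form Q(s, a) = φ(s, a)ᵀθ̂, mixed with uniform exploration. The function name, the learning rate eta, the exploration rate gamma, and the random features are illustrative placeholders, not values or routines prescribed by the paper; this is a minimal sketch of the general idea rather than an implementation of MDP-LinExp3.

```python
import numpy as np

# Hypothetical illustration (not the paper's algorithm): a softmax policy
# over linear action-value estimates Q(s, a) = phi(s, a)^T theta_hat,
# mixed with uniform exploration as in Exp3-style methods.

def softmax_policy(theta_hat, phi_s, eta, gamma):
    """Return action probabilities for a single state.

    theta_hat : (d,) current parameter estimate
    phi_s     : (K, d) feature vectors phi(s, a) for each of the K actions
    eta       : learning rate (placeholder value)
    gamma     : uniform-exploration rate (placeholder value)
    """
    K = phi_s.shape[0]
    scores = eta * (phi_s @ theta_hat)        # eta * Q_hat(s, a) for each action
    scores -= scores.max()                    # shift for numerical stability
    weights = np.exp(scores)
    probs = weights / weights.sum()
    return (1.0 - gamma) * probs + gamma / K  # mix in uniform exploration

# Toy usage with random features (purely illustrative).
rng = np.random.default_rng(0)
d, K = 5, 3
theta_hat = rng.normal(size=d)
phi_s = rng.normal(size=(K, d))
print(softmax_policy(theta_hat, phi_s, eta=0.1, gamma=0.05))
```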
