Online learning in MDPs with side information

06/26/2014
by Yasin Abbasi-Yadkori, et al.

We study online learning of finite Markov decision process (MDP) problems when a side information vector is available. The problem is motivated by applications such as clinical trials and recommendation systems. Such applications have an episodic structure, where each episode corresponds to a patient or customer. Our objective is to compete with the optimal dynamic policy that can take side information into account. We propose a computationally efficient algorithm and show that its regret is at most O(√T), where T is the number of rounds. To the best of our knowledge, this is the first regret bound for this setting.
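To make the setting concrete, here is a minimal sketch (not the paper's algorithm) of the episodic interaction protocol: a side-information vector is revealed at the start of each episode, the learner acts in a small finite MDP, and its pseudo-regret is measured against the best context-dependent ("dynamic") policy. The two-context, two-action MDP, the reward table, and the epsilon-greedy learner are all hypothetical stand-ins chosen for brevity.

```python
import random

random.seed(0)

CONTEXTS = [0, 1]  # side information revealed at the start of each episode
ACTIONS = [0, 1]   # actions in a toy one-state, one-step MDP

# Hypothetical mean rewards: the optimal dynamic policy picks a
# different action depending on the context, so any fixed policy
# ignoring side information incurs linear regret.
MEAN_REWARD = {(0, 0): 0.9, (0, 1): 0.1,
               (1, 0): 0.2, (1, 1): 0.8}

def run_episode(context, action):
    """Noisy reward for taking `action` under `context`."""
    return MEAN_REWARD[(context, action)] + random.uniform(-0.05, 0.05)

def play(T=2000, eps=0.1):
    """Epsilon-greedy learner with per-context reward estimates.

    Returns the cumulative pseudo-regret against the optimal
    context-dependent policy over T episodes.
    """
    counts = {k: 0 for k in MEAN_REWARD}
    sums = {k: 0.0 for k in MEAN_REWARD}
    regret = 0.0
    for t in range(T):
        x = random.choice(CONTEXTS)        # side information for episode t
        if random.random() < eps:
            a = random.choice(ACTIONS)     # explore
        else:                              # exploit per-context estimates
            a = max(ACTIONS,
                    key=lambda a: sums[(x, a)] / max(counts[(x, a)], 1))
        r = run_episode(x, a)
        counts[(x, a)] += 1
        sums[(x, a)] += r
        best = max(MEAN_REWARD[(x, a2)] for a2 in ACTIONS)
        regret += best - MEAN_REWARD[(x, a)]
    return regret

print(play())
```

Because the learner maintains separate estimates per context, its pseudo-regret grows much more slowly than that of any fixed policy; the paper's algorithm achieves the stronger O(√T) guarantee in the general finite-MDP setting.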


