From Bandits to Experts: On the Value of Side-Observations

06/13/2011
by Shie Mannor, et al.

We consider an adversarial online learning setting where a decision maker can choose an action in every stage of the game. In addition to observing the reward of the chosen action, the decision maker gets side observations on the reward he would have obtained had he chosen some of the other actions. The observation structure is encoded as a graph, where node i is linked to node j if sampling i provides information on the reward of j. This setting naturally interpolates between the well-known "experts" setting, where the decision maker can view all rewards, and the multi-armed bandits setting, where the decision maker can only view the reward of the chosen action. We develop practical algorithms with provable regret guarantees, which depend on non-trivial graph-theoretic properties of the information feedback structure. We also provide partially-matching lower bounds.
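To make the feedback-graph idea concrete, here is a minimal sketch, not the algorithm developed in the paper, of an Exp3-style learner that importance-weights every observed reward by the probability of observing it under the graph. The function name, the learning rate eta, and the reward encoding are assumptions introduced only for this illustration; the point is that a complete graph recovers the experts (full-information) setting, while a graph with only self-loops recovers the standard bandit setting.

```python
import numpy as np

# Hypothetical sketch (not the paper's algorithm): an Exp3-style learner
# using a feedback graph G, where G[i, j] = 1 means that playing arm i
# also reveals the reward of arm j.  Every node is assumed to have a
# self-loop (playing an arm always reveals its own reward).

def exp3_with_side_observations(rewards, G, eta=0.1, seed=0):
    """rewards: T x K array of adversarially chosen rewards in [0, 1].
    G: K x K 0/1 matrix, G[i, j] = 1 if playing i reveals the reward of j.
    Returns the sequence of chosen arms."""
    rng = np.random.default_rng(seed)
    T, K = rewards.shape
    weights = np.ones(K)
    choices = []
    for t in range(T):
        p = weights / weights.sum()
        arm = rng.choice(K, p=p)
        choices.append(arm)
        # q[j] = probability that arm j is observed this round, i.e. that
        # some in-neighbour of j (including j itself) is played.
        q = G.T @ p
        observed = G[arm]              # rewards actually revealed this round
        # Importance-weighted estimate for every observed arm.
        est = np.where(observed == 1, rewards[t] / np.maximum(q, 1e-12), 0.0)
        weights *= np.exp(eta * est)
        weights /= weights.max()       # rescale to avoid numerical overflow
    return choices

# Example: with G = np.eye(K) only the played arm is observed (standard
# Exp3 / bandit feedback); with G = np.ones((K, K)) every reward is
# observed each round and q = 1, giving the usual experts-style update.
```

In the two extremes the estimator behaves as expected: with a complete graph every q[j] equals 1 and the update is the familiar full-information exponential-weights rule, while with only self-loops the played arm's reward is divided by its own selection probability, as in Exp3. The paper's regret bounds then depend on graph-theoretic quantities measuring how far the feedback graph lies between these extremes.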

Related research

09/30/2014
Nonstochastic Multi-Armed Bandits with Graph-Structured Feedback
We present and study a partial-information model of online learning, whe...

10/21/2022
Anonymous Bandits for Multi-User Systems
In this work, we present and study a new framework for online learning i...

09/07/2021
Online Learning for Cooperative Multi-Player Multi-Armed Bandits
We introduce a framework for decentralized online learning for multi-arm...

12/10/2020
Adversarial Linear Contextual Bandits with Graph-Structured Side Observations
This paper studies the adversarial graphical contextual bandits, a varia...

08/04/2015
Staged Multi-armed Bandits
In this paper, we introduce a new class of reinforcement learning method...

07/14/2020
Optimal Learning for Structured Bandits
We study structured multi-armed bandits, which is the problem of online ...

07/17/2013
From Bandits to Experts: A Tale of Domination and Independence
We consider the partial observability model for multi-armed bandits, int...
