Equipping Experts/Bandits with Long-term Memory

05/30/2019
by Kai Zheng, et al.

We propose the first reduction-based approach to obtaining long-term memory guarantees for online learning in the sense of Bousquet and Warmuth, 2002, by reducing the problem to achieving typical switching regret. Specifically, for the classical expert problem with K actions and T rounds, using our framework we develop various algorithms with a regret bound of order O(√(T(S ln T + n ln K))) compared to any sequence of experts with S-1 switches among n ≤ min{S, K} distinct experts. In addition, by plugging specific adaptive algorithms into our framework we also achieve the best of both stochastic and adversarial environments simultaneously. This resolves an open problem of Warmuth and Koolen, 2014. Furthermore, we extend our results to the sparse multi-armed bandit setting and show both negative and positive results for long-term memory guarantees. As a side result, our lower bound also implies that sparse losses do not help improve the worst-case regret for contextual bandits, a sharp contrast with the non-contextual case.
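For context, the "switching regret" that the reduction targets is the guarantee achieved by classical tracking algorithms such as Fixed Share (Herbster and Warmuth, 1998). The sketch below is a minimal Python illustration of such a base algorithm, not the reduction proposed in the paper; the learning rate eta and mixing rate alpha are illustrative placeholders rather than values taken from the paper.

```python
import numpy as np

def fixed_share(loss_matrix, eta, alpha):
    """Fixed Share (Herbster & Warmuth, 1998) on a T x K matrix of losses in [0, 1].

    NOTE: this is not the paper's reduction; it is only a sketch of a
    standard base algorithm with a switching-regret guarantee, i.e. the
    kind of guarantee the reduction converts into a long-term memory bound.
    `eta` (learning rate) and `alpha` (mixing rate) are illustrative.
    """
    T, K = loss_matrix.shape
    w = np.full(K, 1.0 / K)                       # uniform prior over the K experts
    total_loss = 0.0
    for t in range(T):
        p = w / w.sum()                           # play the normalized weights
        total_loss += float(p @ loss_matrix[t])   # expected loss this round
        v = w * np.exp(-eta * loss_matrix[t])     # multiplicative (Hedge) update
        v /= v.sum()
        w = (1.0 - alpha) * v + alpha / K         # "share" step: mix in the uniform prior
    return total_loss

# Example: 1000 rounds, 5 experts, i.i.d. uniform losses (purely illustrative)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    losses = rng.random((1000, 5))
    print(fixed_share(losses, eta=0.1, alpha=0.01))
```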


Related research

04/27/2022 · Bounded Memory Adversarial Bandits with Composite Anonymous Delayed Feedback
We study the adversarial bandit problem with composite anonymous delayed...

08/17/2020 · Online Multitask Learning with Long-Term Memory
We introduce a novel online multitask setting. In this setting each task...

06/24/2021 · Improved Regret Bounds for Tracking Experts with Memory
We address the problem of sequential prediction with expert advice in a ...

02/09/2018 · Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits
Regret bounds in online learning compare the player's performance to L^*...

03/05/2018 · Online learning over a finite action set with limited switching
This paper studies the value of switching actions in the Prediction From...

02/23/2018 · Contextual Bandits with Stochastic Experts
We consider the problem of contextual bandits with stochastic experts, w...

02/24/2020 · Fair Bandit Learning with Delayed Impact of Actions
Algorithmic fairness has been studied mostly in a static setting where t...
