A New Algorithm for Non-stationary Contextual Bandits: Efficient, Optimal, and Parameter-free

02/03/2019
by Yifang Chen et al.

We propose the first contextual bandit algorithm that is parameter-free, efficient, and optimal in terms of dynamic regret. Specifically, our algorithm achieves dynamic regret O(min{√(ST), Δ^{1/3}T^{2/3}}) for a contextual bandit problem with T rounds, S switches, and Δ total variation in the data distributions. Importantly, our algorithm is adaptive: it does not need to know S or Δ ahead of time, and it can be implemented efficiently assuming access to an ERM oracle. Our results strictly improve the O(min{S^{1/4}T^{3/4}, Δ^{1/5}T^{4/5}}) bound of (Luo et al., 2018), and greatly generalize and improve the O(√(ST)) result of (Auer et al., 2018), which holds only for the two-armed bandit problem without contextual information. The key novelty of our algorithm is the introduction of replay phases, in which the algorithm acts according to its previous decisions for a certain amount of time in order to detect non-stationarity while maintaining a good balance between exploration and exploitation.
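To make the replay idea concrete, here is a deliberately simplified sketch. It is not the paper's algorithm (which operates over a policy class through an ERM oracle and uses a carefully randomized replay schedule); instead it uses a toy two-armed Bernoulli bandit where the learner commits to its empirically best arm, records a baseline reward, and periodically "replays" that earlier decision, restarting when the fresh average reward drops well below the baseline. All names, the fixed replay schedule, and the detection threshold are invented for illustration.

```python
import random

def run_replay_detector(means_before, means_after, switch_at, horizon,
                        replay_len=100, gap=0.3, seed=0):
    """Toy two-armed bandit whose mean rewards switch at round `switch_at`.

    The learner explores briefly, commits to the empirical best arm, and
    records a baseline average reward. Every `replay_len` rounds it enters a
    replay phase: it repeats the committed decision and compares the fresh
    average reward to the baseline. A drop larger than `gap` is taken as
    evidence of non-stationarity and triggers a full restart.
    Returns the rounds at which restarts were triggered.
    """
    rng = random.Random(seed)
    counts, sums = [0, 0], [0.0, 0.0]
    baseline = None          # avg reward of the committed arm at commit time
    restarts = []
    t = 0
    while t < horizon:
        means = means_before if t < switch_at else means_after
        # short forced exploration, then greedy commitment
        if min(counts) < 20:
            arm = counts.index(min(counts))
        else:
            arm = 0 if sums[0] / counts[0] >= sums[1] / counts[1] else 1
            if baseline is None:
                baseline = sums[arm] / counts[arm]
        r = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += r
        t += 1
        # replay phase: re-check the committed arm against its baseline
        if baseline is not None and t % replay_len == 0:
            replay_rewards = []
            for _ in range(replay_len):
                if t >= horizon:
                    break
                means = means_before if t < switch_at else means_after
                replay_rewards.append(1.0 if rng.random() < means[arm] else 0.0)
                t += 1
            if replay_rewards and \
                    baseline - sum(replay_rewards) / len(replay_rewards) > gap:
                restarts.append(t)   # detected a likely distribution shift
                counts, sums, baseline = [0, 0], [0.0, 0.0], None
    return restarts
```

For example, with `run_replay_detector((0.9, 0.1), (0.1, 0.9), switch_at=1000, horizon=3000)` the committed arm's reward collapses at round 1000, so a replay phase shortly afterwards sees a large drop relative to the baseline and triggers a restart. The real algorithm replaces this fixed schedule with randomized replay intervals of multiple lengths, which is what yields the stated dynamic-regret guarantees without knowing S or Δ.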
