Algorithms for Non-Stationary Generalized Linear Bandits

03/23/2020
by Yoan Russac, et al.

The statistical framework of Generalized Linear Models (GLM) can be applied to sequential problems involving categorical or ordinal rewards associated, for instance, with clicks, likes or ratings. In the example of binary rewards, logistic regression is well known to be preferable to standard linear modeling. Previous works have shown how to deal with GLMs in contextual online learning with bandit feedback when the environment is assumed to be stationary. In this paper, we relax this stationarity assumption and propose two upper confidence bound based algorithms that make use of either a sliding window or a discounted maximum-likelihood estimator. We provide theoretical guarantees on the behavior of these algorithms for general context sequences and in the presence of abrupt changes. These results take the form of high-probability upper bounds for the dynamic regret that are of order d^{2/3} G^{1/3} T^{2/3}, where d, T and G are respectively the dimension of the unknown parameter, the number of rounds and the number of breakpoints up to time T. The empirical performance of the algorithms is illustrated in simulated environments.
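
To make the discounted maximum-likelihood idea more concrete, the following Python sketch shows how geometric weights can down-weight old observations in a penalized logistic MLE, combined with an optimistic (UCB-style) action selection rule. This is a minimal illustration under simplifying assumptions, not the paper's reference implementation: the class name DiscountedLogisticUCB, the discount factor gamma, the regularizer lam, the exploration width alpha, the Newton-based fitting loop and the schematic exploration bonus are all introduced here for illustration only.

```python
# Illustrative sketch (not the paper's implementation) of a discounted
# maximum-likelihood estimator for a logistic bandit with UCB-style
# action selection. All names and parameters below are assumptions.
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class DiscountedLogisticUCB:
    def __init__(self, d, gamma=0.99, lam=1.0, alpha=1.0):
        self.d, self.gamma, self.lam, self.alpha = d, gamma, lam, alpha
        self.contexts = []   # chosen contexts x_s
        self.rewards = []    # observed binary rewards r_s
        self.theta = np.zeros(d)

    def _fit(self, n_iter=50):
        # Weighted (discounted) penalized logistic MLE via Newton steps:
        # weights gamma^(t-s) down-weight old observations, which is what
        # lets the estimator track abrupt changes of the parameter.
        if not self.contexts:
            return
        X = np.asarray(self.contexts)
        r = np.asarray(self.rewards)
        t = len(r)
        w = self.gamma ** np.arange(t - 1, -1, -1)
        theta = self.theta
        for _ in range(n_iter):
            p = sigmoid(X @ theta)
            grad = X.T @ (w * (p - r)) + self.lam * theta
            H = (X * (w * p * (1 - p))[:, None]).T @ X + self.lam * np.eye(self.d)
            step = np.linalg.solve(H, grad)
            theta = theta - step
            if np.linalg.norm(step) < 1e-8:
                break
        self.theta = theta

    def select(self, arms):
        # Optimistic score: predicted mean plus an exploration bonus built
        # from the discounted design matrix (a simplified stand-in for the
        # confidence width used in the paper's analysis).
        t = len(self.rewards)
        V = self.lam * np.eye(self.d)
        if t:
            X = np.asarray(self.contexts)
            w = self.gamma ** np.arange(t - 1, -1, -1)
            V += (X * w[:, None]).T @ X
        V_inv = np.linalg.inv(V)
        scores = [sigmoid(a @ self.theta) + self.alpha * np.sqrt(a @ V_inv @ a)
                  for a in arms]
        return int(np.argmax(scores))

    def update(self, x, r):
        self.contexts.append(np.asarray(x, dtype=float))
        self.rewards.append(float(r))
        self._fit()
```

A sliding-window variant would instead keep only the most recent observations rather than applying geometric weights; both mechanisms serve the same purpose of forgetting data generated before a breakpoint.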


Related research:

09/19/2019 · Weighted Linear Bandits for Non-Stationary Environments
We consider a stochastic linear bandit model in which the available acti...

02/28/2017 · Provably Optimal Algorithms for Generalized Linear Contextual Bandits
Contextual bandits are widely used in Internet services from news recomm...

11/02/2020 · Self-Concordant Analysis of Generalized Linear Bandits with Forgetting
Contextual sequential decision problems with categorical or numerical ob...

03/21/2021 · UCB-based Algorithms for Multinomial Logistic Regression Bandits
Out of the rich family of generalized linear bandits, perhaps the most w...

06/07/2021 · On Learning to Rank Long Sequences with Contextual Bandits
Motivated by problems of learning to rank long item sequences, we introd...

01/16/2017 · Thompson Sampling For Stochastic Bandits with Graph Feedback
We present a novel extension of Thompson Sampling for stochastic sequent...

02/05/2019 · The Generalized Likelihood Ratio Test meets klUCB: an Improved Algorithm for Piece-Wise Non-Stationary Bandits
We propose a new algorithm for the piece-wise non-stationary bandit pro...
