An Efficient Algorithm For Generalized Linear Bandit: Online Stochastic Gradient Descent and Thompson Sampling

06/07/2020
by   Qin Ding, et al.

We consider the contextual bandit problem, where a player sequentially makes decisions based on past observations to maximize the cumulative reward. Although many algorithms have been proposed for contextual bandits, most rely on computing the maximum likelihood estimator at each iteration, which requires O(t) time at the t-th iteration and is memory inefficient. A natural way to resolve this is to apply online stochastic gradient descent (SGD), so that the per-step time and memory complexity become constant with respect to t; however, a contextual bandit policy based on online SGD updates that balances exploration and exploitation has remained elusive. In this work, we show that online SGD can be applied to the generalized linear bandit problem. The proposed SGD-TS algorithm, which uses a single-step SGD update to exploit past information and Thompson Sampling for exploration, achieves Õ(√(dT)) regret with total time complexity that scales linearly in T and d, where T is the total number of rounds and d is the number of features. Experimental results show that SGD-TS consistently outperforms existing algorithms on both synthetic and real datasets.
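The core idea from the abstract — exploit via a single online SGD step per round, explore via Thompson Sampling — can be sketched for a logistic (generalized linear) bandit as below. This is a minimal illustration, not the paper's exact SGD-TS algorithm (the paper's version includes additional details such as warm-up and tuned exploration rates); the function names, the step size `eta`, and the perturbation scale `v` are assumptions chosen for the sketch. Note the per-round cost is O(Kd) for K arms, independent of t.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_ts_sketch(contexts_fn, reward_fn, T, d, eta=0.1, v=0.1, seed=0):
    """Illustrative SGD + Thompson Sampling loop for a logistic bandit.

    contexts_fn(t) -> (K, d) array of arm features at round t
    reward_fn(t, x) -> observed reward in [0, 1] for chosen features x
    Per-round time and memory are constant in t: only theta (d floats)
    is stored, and each update is a single stochastic gradient step.
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)          # running parameter estimate
    total_reward = 0.0
    for t in range(T):
        X = contexts_fn(t)
        # Thompson Sampling: perturb the estimate, then act greedily on it.
        theta_tilde = theta + v * rng.standard_normal(d)
        a = int(np.argmax(X @ theta_tilde))
        r = reward_fn(t, X[a])
        total_reward += r
        # Single-step online SGD on the logistic log-loss for this round.
        grad = (sigmoid(X[a] @ theta) - r) * X[a]
        theta -= eta * grad
    return theta, total_reward
```

A quick usage example: simulate Bernoulli rewards from a fixed true parameter and run the loop; the returned estimate has shape (d,) and the cumulative reward is bounded by T.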



Related research

- 06/07/2021, Generalized Linear Bandits with Local Differential Privacy: Contextual bandit algorithms are useful in personalized online decision-...
- 09/21/2020, Contextual Bandits for adapting to changing User preferences over time: Contextual bandits provide an effective way to model the dynamic data pr...
- 12/30/2022, Online Statistical Inference for Contextual Bandits via Stochastic Gradient Descent: With the fast development of big data, it has been easier than before to...
- 01/21/2023, Genetically Modified Wolf Optimization with Stochastic Gradient Descent for Optimising Deep Neural Networks: When training Convolutional Neural Networks (CNNs) there is a large emph...
- 06/30/2015, Online Learning to Sample: Stochastic Gradient Descent (SGD) is one of the most widely used techniq...
- 04/12/2021, An Efficient Algorithm for Deep Stochastic Contextual Bandits: In stochastic contextual bandit (SCB) problems, an agent selects an acti...
- 02/12/2015, Weighted SGD for ℓ_p Regression with Randomized Preconditioning: In recent years, stochastic gradient descent (SGD) methods and randomize...
