Near-optimal Oracle-efficient Algorithms for Stationary and Non-Stationary Stochastic Linear Bandits

12/11/2019
by Baekjin Kim, et al.

We investigate the design of two algorithms that enjoy not only the computational efficiency afforded by Hannan's perturbation approach, but also minimax-optimal regret bounds in linear bandit problems where the learner has access to an offline optimization oracle. We present an algorithm called Follow-The-Gaussian-Perturbed-Leader (FTGPL) for the stationary linear bandit problem, in which each action is associated with a d-dimensional feature vector, and prove that FTGPL (1) achieves the minimax-optimal Õ(d√(T)) regret, (2) matches the empirical performance of Linear Thompson Sampling, and (3) can be implemented efficiently even with infinitely many actions, thus achieving the best of three worlds. Moreover, it resolves an open problem raised in <cit.>: which perturbation achieves minimax optimality in Linear Thompson Sampling? We further propose a weighted variant with exponential discounting, Discounted Follow-The-Gaussian-Perturbed-Leader (D-FTGPL), which gracefully adapts to non-stationary environments in which the unknown parameter varies over time subject to a total-variation budget B_T. D-FTGPL asymptotically achieves the optimal dynamic regret Õ(d^2/3 B_T^1/3 T^2/3), and it remains oracle-efficient because the Gaussian perturbation reduces each round to a single call to the offline optimization oracle.
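To make the perturbation scheme concrete, below is a minimal Python sketch of the idea described above, written under our own assumptions: the oracle interface `oracle(theta)` returning argmax_a <a, theta>, the perturbation scale `beta`, and the discounted update used for D-FTGPL are illustrative choices, not the paper's exact construction or tuned constants.

```python
import numpy as np

def ftgpl(oracle, reward_fn, d, T, beta=1.0, reg=1.0, gamma=1.0):
    """Minimal sketch of Follow-The-Gaussian-Perturbed-Leader.

    Illustrative only; names, defaults, and the perturbation scale
    `beta` are assumptions, not the paper's tuned constants.

    oracle(theta):  offline optimization oracle returning
                    argmax_a <a, theta> over the action set.
    reward_fn(a):   noisy reward observed after playing action a.
    gamma:          discount factor; gamma = 1 recovers FTGPL, while
                    gamma < 1 gives exponentially weighted statistics,
                    our reading of the discounted variant (D-FTGPL).
    """
    V = np.zeros((d, d))   # (discounted) Gram matrix of played actions
    b = np.zeros(d)        # (discounted) reward-weighted feature sum
    for _ in range(T):
        V_reg = V + reg * np.eye(d)
        theta_hat = np.linalg.solve(V_reg, b)   # (weighted) ridge estimate
        # Gaussian perturbation shaped by V_reg^{-1/2}: one Gaussian draw,
        # then a single oracle call, which is what makes the method
        # oracle-efficient even over infinite action sets.
        L = np.linalg.cholesky(np.linalg.inv(V_reg))
        theta_tilde = theta_hat + beta * L @ np.random.standard_normal(d)
        a = oracle(theta_tilde)
        r = reward_fn(a)
        V = gamma * V + np.outer(a, a)
        b = gamma * b + r * a
```

For a unit-ball action set, for example, the oracle is closed-form, `oracle = lambda th: th / max(np.linalg.norm(th), 1e-12)`; for a finite action matrix A it is `A[np.argmax(A @ th)]`. The key design point is that all exploration comes from the single Gaussian draw, so each round costs exactly one oracle call.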


Related research:

03/19/2019 · Online Non-Convex Learning: Following the Perturbed Leader is Optimal
We study the problem of online learning with non-convex losses, where th...

03/09/2021 · Non-stationary Linear Bandits Revisited
In this note, we revisit non-stationary linear bandits, a variant of sto...

02/10/2020 · Combinatorial Semi-Bandit in the Non-Stationary Environment
In this paper, we investigate the non-stationary combinatorial semi-band...

06/13/2020 · Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games
We consider the problem of online learning and its application to solvin...

05/26/2022 · Follow-the-Perturbed-Leader for Adversarial Markov Decision Processes with Bandit Feedback
We consider regret minimization for Adversarial Markov Decision Processe...

10/17/2022 · Adaptive Oracle-Efficient Online Learning
The classical algorithms for online learning and decision-making have th...

02/26/2017 · Kiefer Wolfowitz Algorithm is Asymptotically Optimal for a Class of Non-Stationary Bandit Problems
We consider the problem of designing an allocation rule or an "online le...
