Anytime Model Selection in Linear Bandits

07/24/2023
by Parnian Kassraie, et al.

Model selection in the context of bandit optimization is a challenging problem, as it requires balancing exploration and exploitation not only for action selection, but also for model selection. One natural approach is to rely on online learning algorithms that treat the different models as experts. Existing methods, however, scale poorly with the number of models M: their regret grows polynomially in M. Our key insight is that, for model selection in linear bandits, we can emulate full-information feedback to the online learner with a favorable bias-variance trade-off. This allows us to develop ALEXP, whose regret has an exponentially improved dependence on M (logarithmic rather than polynomial). ALEXP enjoys anytime guarantees on its regret: it neither requires knowledge of the horizon n nor relies on an initial purely exploratory stage. Our approach uses a novel time-uniform analysis of the Lasso, establishing a new connection between online learning and high-dimensional statistics.
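To make the key insight concrete, here is a minimal, hypothetical Python sketch of an ALEXP-style loop. This is not the authors' implementation: the names (feature_maps, true_reward, alexp_sketch), the exploration schedule, the learning-rate schedule, and the Lasso penalty lam/sqrt(t) are all illustrative assumptions. It shows the two ingredients the abstract describes: an exponential-weights learner over the M candidate models, and a single Lasso fit on the concatenated feature space that supplies loss estimates for every model from bandit feedback.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def alexp_sketch(feature_maps, actions, true_reward, n_rounds, lr=0.5, lam=0.05):
    """Hypothetical sketch: exponential weights over M candidate feature maps.

    Each round: sample a model, play its greedy action, then fit a Lasso on
    the concatenated feature space so that *every* model receives a loss
    estimate (emulated full-information feedback), not just the one played.
    """
    M = len(feature_maps)
    dims = [len(phi(actions[0])) for phi in feature_maps]
    bounds = np.concatenate([[0], np.cumsum(dims)])
    blocks = [slice(bounds[j], bounds[j + 1]) for j in range(M)]
    log_w = np.zeros(M)                       # log-weights of the experts
    theta = np.zeros(bounds[-1])              # current Lasso estimate
    X_hist, y_hist = [], []

    for t in range(1, n_rounds + 1):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        m = rng.choice(M, p=p)                # sample a model ~ exp. weights
        if rng.random() < 1.0 / np.sqrt(t):   # vanishing exploration (assumed)
            a = actions[rng.integers(len(actions))]
        else:                                 # greedy action of sampled model
            scores = [feature_maps[m](x) @ theta[blocks[m]] for x in actions]
            a = actions[int(np.argmax(scores))]

        y = true_reward(a) + 0.1 * rng.standard_normal()   # noisy bandit reward
        X_hist.append(np.concatenate([phi(a) for phi in feature_maps]))
        y_hist.append(y)

        # One Lasso fit on the pooled history yields coefficients for all blocks.
        theta = Lasso(alpha=lam / np.sqrt(t), fit_intercept=False,
                      max_iter=5000).fit(X_hist, y_hist).coef_

        # Emulated full-information losses: squared error of each model's block.
        X, Y = np.asarray(X_hist), np.asarray(y_hist)
        losses = np.array([np.mean((Y - X[:, b] @ theta[b]) ** 2) for b in blocks])
        log_w -= (lr / np.sqrt(t)) * losses   # time-varying rate: no horizon n needed

    return np.exp(log_w - log_w.max()) / np.sum(np.exp(log_w - log_w.max()))

# Illustrative usage (assumed setup): nested polynomial models over scalar
# actions; the weights should drift toward models that explain the reward.
actions = np.linspace(-1.0, 1.0, 21)
feature_maps = [lambda a, j=j: np.array([a ** k for k in range(1, j + 1)])
                for j in (1, 2, 3)]
print(alexp_sketch(feature_maps, actions, lambda a: a - 0.5 * a ** 2, n_rounds=150))
```

The design choice this sketch illustrates: because the Lasso is fit over the concatenated feature space, a single regression converts bandit feedback into loss estimates for all M experts at once, and the exponential-weights learner then pays only a log M price for aggregating them, in contrast to the poly(M) regret of earlier approaches.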


Related research

06/11/2021
Optimal Model Selection in Contextual Bandits with Many Classes via Offline Oracles
We study the problem of model selection for contextual bandits, in which...

06/05/2023
Data-Driven Regret Balancing for Online Model Selection in Bandits
We consider model selection for sequential decision making in stochastic...

06/19/2020
Open Problem: Model Selection for Contextual Bandits
In statistical learning, algorithms for model selection allow the learne...

06/09/2020
Regret Balancing for Bandit and RL Model Selection
We consider model selection in stochastic bandit and reinforcement learn...

12/04/2017
Episodic memory for continual model learning
Both the human brain and artificial learning agents operating in real-wo...

06/09/2021
Parameter and Feature Selection in Stochastic Linear Bandits
We study two model selection settings in stochastic linear bandits (LB)....

02/16/2023
Infinite Action Contextual Bandits with Reusable Data Exhaust
For infinite action contextual bandits, smoothed regret and reduction to...
