Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees

10/24/2022
by   Andrea Tirinzoni, et al.

We study the problem of representation learning in stochastic contextual linear bandits. While the primary concern in this domain is usually to find realizable representations (i.e., those that allow predicting the reward function exactly at any context-action pair), it has recently been shown that representations with certain spectral properties (called HLS) may be more effective for the exploration-exploitation task, enabling LinUCB to achieve constant (i.e., horizon-independent) regret. In this paper, we propose BanditSRL, a representation learning algorithm that combines a novel constrained optimization problem, which learns a realizable representation with good spectral properties, with a generalized likelihood ratio test that exploits the recovered representation and avoids excessive exploration. We prove that BanditSRL can be paired with any no-regret algorithm and achieves constant regret whenever an HLS representation is available. Furthermore, BanditSRL can easily be combined with deep neural networks, and we show that regularizing towards HLS representations is beneficial on standard benchmarks.
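The HLS property described above can be checked numerically: a representation is HLS when the features of the optimal actions alone span the whole feature space, i.e., the smallest eigenvalue of their second-moment (design) matrix is strictly positive. The sketch below is illustrative only; the function name and toy data are assumptions, not code from the paper.

```python
import numpy as np

def hls_min_eigenvalue(optimal_features):
    """Smallest eigenvalue of the empirical design matrix built from the
    features of optimal actions only. A representation satisfies the HLS
    property when this value is strictly positive (optimal-arm features
    span the full feature space)."""
    phi = np.asarray(optimal_features, dtype=float)  # shape (n, d)
    design = phi.T @ phi / phi.shape[0]              # empirical second-moment matrix
    return np.linalg.eigvalsh(design)[0]             # eigvalsh returns ascending order

# Toy example: optimal-arm features that span R^2 (HLS-like) ...
spanning = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
# ... versus features confined to a 1-D subspace (non-HLS).
degenerate = [[1.0, 0.0], [2.0, 0.0], [0.5, 0.0]]

print(hls_min_eigenvalue(spanning) > 0)                   # strictly positive
print(np.isclose(hls_min_eigenvalue(degenerate), 0.0))    # zero: subspace is degenerate
```

In the paper's setting the expectation is over the context distribution; the sketch uses a finite sample of contexts as a stand-in for that expectation.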


Related research

12/19/2022 · On the Complexity of Representation Learning in Contextual Linear Bandits
In contextual linear bandits, the reward function is assumed to be a lin...

04/08/2021 · Leveraging Good Representations in Linear Contextual Bandits
The linear contextual bandit literature is mostly focused on the design ...

12/03/2020 · Neural Contextual Bandits with Deep Representation and Shallow Exploration
We study a general class of contextual bandits, where each context-actio...

02/08/2021 · Near-optimal Representation Learning for Linear Bandits and Linear RL
This paper studies representation learning for multi-task linear bandits...

02/11/2022 · Efficient Kernel UCB for Contextual Bandits
In this paper, we tackle the computational efficiency of kernelized UCB ...

02/23/2022 · Truncated LinUCB for Stochastic Linear Bandits
This paper considers contextual bandits with a finite number of arms, wh...

10/08/2020 · Reward-Biased Maximum Likelihood Estimation for Linear Stochastic Bandits
Modifying the reward-biased maximum likelihood method originally propose...
