Online Learning in Contextual Bandits using Gated Linear Networks

02/21/2020 · by Eren Sezener et al.

We introduce a new and completely online contextual bandit algorithm called Gated Linear Contextual Bandits (GLCB). The algorithm is based on Gated Linear Networks (GLNs), a recently introduced deep learning architecture with properties well suited to the online setting. By leveraging the data-dependent gating properties of GLNs, we are able to estimate prediction uncertainty with effectively zero algorithmic overhead. We empirically compare GLCB against 9 state-of-the-art algorithms that leverage deep neural networks, on a standard benchmark suite of discrete and continuous contextual bandit problems. GLCB obtains first place in median rank despite being the only fully online method, and we further support these empirical results with a theoretical study of its convergence properties.
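To make the mechanism sketched above more concrete, the toy Python below illustrates one way data-dependent gating can yield cheap uncertainty estimates: a random halfspace gating function (of the kind used by GLNs) maps each context to a discrete gate signature, and per-arm visit counts of those signatures act as pseudo-counts that feed a UCB-style exploration bonus. This is a minimal sketch under assumed details, not the paper's GLCB implementation: the class GatedPseudoCountBandit, the gate_signature helper, the per-arm linear reward model, and all hyperparameters are hypothetical stand-ins for the GLN predictor and the exact uncertainty estimate described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ARMS, DIM, N_GATES = 3, 8, 4                  # toy sizes (assumed)
HYPERPLANES = rng.normal(size=(N_GATES, DIM))   # random halfspace gating directions


def gate_signature(context):
    """Binary signature: which side of each gating hyperplane the context lies on."""
    return tuple((HYPERPLANES @ context > 0).astype(int))


class GatedPseudoCountBandit:
    """Per-arm online linear reward model + UCB bonus from gate-signature visit counts."""

    def __init__(self, n_arms, dim, lr=0.05, c=1.0):
        self.w = np.zeros((n_arms, dim))                 # per-arm reward estimates
        self.counts = [dict() for _ in range(n_arms)]    # visits per (arm, signature)
        self.lr, self.c = lr, c

    def select(self, context):
        sig = gate_signature(context)
        scores = []
        for a in range(self.w.shape[0]):
            n = self.counts[a].get(sig, 0)
            bonus = self.c / np.sqrt(n + 1)              # pseudo-count exploration bonus
            scores.append(self.w[a] @ context + bonus)
        return int(np.argmax(scores))

    def update(self, context, arm, reward):
        sig = gate_signature(context)
        self.counts[arm][sig] = self.counts[arm].get(sig, 0) + 1
        err = reward - self.w[arm] @ context
        self.w[arm] += self.lr * err * context           # one online SGD step


# Toy usage: rewards are a noisy linear function of the context, one weight vector per arm.
true_w = rng.normal(size=(N_ARMS, DIM))
bandit = GatedPseudoCountBandit(N_ARMS, DIM)
total = 0.0
for t in range(2000):
    x = rng.normal(size=DIM)
    a = bandit.select(x)
    r = true_w[a] @ x + 0.1 * rng.normal()
    bandit.update(x, a, r)
    total += r
print(f"average reward over 2000 steps: {total / 2000:.3f}")
```

In this sketch the bonus decays as a gate region accumulates visits, so exploration concentrates on contexts whose gating pattern has rarely been seen; because the signature is a by-product of the gating computation itself, the extra bookkeeping amounts to a dictionary update per step, mirroring the "effectively zero algorithmic overhead" claim.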

Related research

06/10/2020 · Gaussian Gated Linear Networks
We propose the Gaussian Gated Linear Network (G-GLN), an extension to th...

03/20/2019 · Contextual Bandits with Random Projection
Contextual bandits with linear payoffs, which are also known as linear b...

05/22/2022 · Contextual Information-Directed Sampling
Information-directed sampling (IDS) has recently demonstrated its potent...

12/15/2018 · Balanced Linear Contextual Bandits
Contextual bandit algorithms are sensitive to the estimation method of t...

09/30/2019 · Gated Linear Networks
This paper presents a family of backpropagation-free neural architecture...

10/12/2022 · Maximum entropy exploration in contextual bandits with neural networks and energy based models
Contextual bandits can solve a huge range of real-world problems. Howeve...

12/01/2021 · Efficient Online Bayesian Inference for Neural Bandits
In this paper we present a new algorithm for online (sequential) inferen...
