EE-Net: Exploitation-Exploration Neural Networks in Contextual Bandits

10/07/2021
by   Yikun Ban, et al.

Contextual multi-armed bandits have been studied for decades and adapted to applications such as online advertising and personalized recommendation. Three main techniques are used to balance the exploitation-exploration tradeoff in bandits: epsilon-greedy, Thompson Sampling (TS), and Upper Confidence Bound (UCB). In recent literature, linear contextual bandits adopt ridge regression to estimate the reward function and combine it with TS or UCB strategies for exploration. However, this line of work explicitly assumes that the reward is a linear function of the arm vectors, which may not hold in real-world datasets. To overcome this limitation, a series of neural bandit algorithms has been proposed, in which a neural network learns the underlying reward function and TS or UCB is adapted for exploration. In this paper, we propose "EE-Net", a neural bandit approach with a novel exploration strategy. In addition to a neural network (the Exploitation network) that learns the reward function, EE-Net uses a second neural network (the Exploration network) to adaptively learn the potential gain relative to the currently estimated reward. A decision-maker is then constructed to combine the outputs of the Exploitation and Exploration networks. We prove that EE-Net achieves 𝒪(√(T log T)) regret, tighter than the 𝒪(√T · log T) regret of existing state-of-the-art neural bandit algorithms (both UCB-based and TS-based). Through extensive experiments on four real-world datasets, we show that EE-Net outperforms existing linear and neural bandit approaches.
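To make the three-component design concrete, below is a minimal PyTorch sketch of one plausible instantiation. Everything here is an illustrative assumption rather than the paper's exact construction: the network sizes, the use of the raw context as the Exploration network's input, the squared-loss updates, and the simple additive decision-maker are all choices made for brevity.

    import torch
    import torch.nn as nn

    class MLP(nn.Module):
        """Simple two-layer network, used here for both f1 and f2."""
        def __init__(self, in_dim, hidden=100):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
            )
        def forward(self, x):
            return self.net(x)

    d = 10        # context dimension (hypothetical)
    f1 = MLP(d)   # Exploitation network: estimates the reward
    f2 = MLP(d)   # Exploration network: estimates the potential gain
                  # (raw contexts are fed to f2 here only for brevity)

    def select_arm(contexts):
        """Decision-maker f3: a simple additive combination (assumption)."""
        with torch.no_grad():
            scores = f1(contexts) + f2(contexts)  # exploit + explore
        return int(scores.argmax())

    opt1 = torch.optim.SGD(f1.parameters(), lr=1e-3)
    opt2 = torch.optim.SGD(f2.parameters(), lr=1e-3)

    def update(x, reward):
        """One round of training after observing the chosen arm's reward."""
        # f1 regresses on the observed reward
        loss1 = (f1(x) - reward).pow(2).mean()
        opt1.zero_grad(); loss1.backward(); opt1.step()
        # f2 regresses on the residual: the gain f1 failed to capture
        residual = reward - f1(x).detach()
        loss2 = (f2(x) - residual).pow(2).mean()
        opt2.zero_grad(); loss2.backward(); opt2.step()

The idea visible in the sketch is that f2 is trained on the residual reward − f1(x), so a large predicted residual flags an arm whose reward the Exploitation network may be underestimating, which is precisely when exploring that arm is worthwhile.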


Related research

05/05/2023 · Neural Exploitation and Exploration of Contextual Bandits
In this paper, we study utilizing neural networks for the exploitation a...

04/02/2020 · Hierarchical Adaptive Contextual Bandits for Resource Constraint based Recommendation
Contextual multi-armed bandit (MAB) achieves cutting-edge performance on...

06/22/2021 · Pure Exploration in Kernel and Neural Bandits
We study pure exploration in bandits, where the dimension of the feature...

07/18/2021 · GuideBoot: Guided Bootstrap for Deep Contextual Bandits
The exploration/exploitation (E&E) dilemma lies at the core of interac...

08/21/2023 · Graph Neural Bandits
Contextual bandits algorithms aim to choose the optimal arm with the hig...

09/12/2017 · Adaptive Exploration-Exploitation Tradeoff for Opportunistic Bandits
In this paper, we propose and study opportunistic bandits - a new varian...

10/11/2019 · Privacy-Preserving Contextual Bandits
Contextual bandits are online learners that, given an input, select an a...
