A Bit Better? Quantifying Information for Bandit Learning

02/18/2021
by Adithya M. Devraj, et al.

The information ratio offers an approach to assessing the efficacy with which an agent balances between exploration and exploitation. Originally, this was defined to be the ratio between squared expected regret and the mutual information between the environment and action-observation pair, which represents a measure of information gain. Recent work has inspired consideration of alternative information measures, particularly for use in analysis of bandit learning algorithms to arrive at tighter regret bounds. We investigate whether quantification of information via such alternatives can improve the realized performance of information-directed sampling, which aims to minimize the information ratio.

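For context, here is a minimal sketch of the quantities described in the abstract, following the information-directed sampling literature (Russo and Van Roy); the symbols $\Psi_t$, $\Delta_t$, $g_t$, $\mathcal{E}$, and $Y_{t,a}$ are notational assumptions for this sketch and are not taken from the paper. Writing $\Delta_t(a)$ for the expected regret of action $a$ under the posterior at time $t$, and $g_t(a) = I_t\big(\mathcal{E};\, (a, Y_{t,a})\big)$ for the mutual information between the environment $\mathcal{E}$ and the action-observation pair, the information ratio of an action distribution $\pi$ and the induced sampling rule are

\[
\Psi_t(\pi) \;=\; \frac{\big(\mathbb{E}_{a \sim \pi}[\Delta_t(a)]\big)^2}{\mathbb{E}_{a \sim \pi}[g_t(a)]},
\qquad
\pi_t^{\mathrm{IDS}} \in \operatorname*{arg\,min}_{\pi}\, \Psi_t(\pi),
\]

so that information-directed sampling draws each action from the minimizing distribution $\pi_t^{\mathrm{IDS}}$. Roughly speaking, the alternative information measures mentioned in the abstract would substitute for the mutual-information term $g_t(a)$ in this ratio.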

Related research

Bandit Phase Retrieval (06/03/2021)
We study a bandit version of phase retrieval where the learner chooses a...

Nonstationary Bandit Learning via Predictive Sampling (05/04/2022)
We propose predictive sampling as an approach to selecting actions that ...

An Information-Theoretic Analysis of Nonstationary Bandit Learning (02/09/2023)
In nonstationary bandit learning problems, the decision-maker must conti...

Information Directed Sampling and Bandits with Heteroscedastic Noise (01/29/2018)
In the stochastic bandit problem, the goal is to maximize an unknown fun...

Gaussian Imagination in Bandit Learning (01/06/2022)
Assuming distributions are Gaussian often facilitates computations that ...

STEERING: Stein Information Directed Exploration for Model-Based Reinforcement Learning (01/28/2023)
Directed Exploration is a crucial challenge in reinforcement learning (R...
