Dual Instrumental Method for Confounded Kernelized Bandits

09/07/2022
by Xueping Gong, et al.

The contextual bandit problem is a theoretically justified framework with wide applications in various fields. While previous studies of this problem usually require independence between the noise and the contexts, our work considers a more realistic setting in which the noise acts as a latent confounder that affects both contexts and rewards. Such a confounded setting covers a broader range of applications. However, an unaddressed confounder biases the estimate of the reward function and thus leads to large regret. To deal with the challenges introduced by the confounder, we apply dual instrumental variable regression, which correctly identifies the true reward function. We prove that the convergence rate of this method is near-optimal in two types of widely used reproducing kernel Hilbert spaces. Building on these theoretical guarantees, we design computationally efficient and regret-optimal algorithms for confounded bandit problems. Numerical results illustrate the efficacy of the proposed algorithms in the confounded bandit setting.
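To make the identification step concrete, the following is a minimal sketch of two-stage kernel instrumental-variable regression on synthetic confounded data, written in Python/NumPy. It illustrates the general kernel-IV idea only: the synthetic data-generating process, the RBF kernel, its scale, and the regularization constants are illustrative assumptions, and the paper's dual formulation (and the bandit algorithms built on top of it) differs in detail.

```python
# Minimal sketch: two-stage kernel IV regression on synthetic confounded data.
# Illustrative only; the paper's dual instrumental method differs in detail.
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, scale=1.0):
    """RBF Gram matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-scale * d2)

# Synthetic data: the confounder u affects both the context x and the reward y,
# while the instrument z affects y only through x.
n = m = 300
u = rng.normal(size=n + m)                        # latent confounder
z = rng.uniform(-3, 3, size=n + m)                # instrument
x = z + u + 0.1 * rng.normal(size=n + m)          # confounded context
y = np.sin(x) + u + 0.1 * rng.normal(size=n + m)  # true reward function: sin(x)

# Split into a stage-1 sample (x, z) and a stage-2 sample (z, y).
X1, Z1 = x[:n, None], z[:n, None]
Z2, Y2 = z[n:, None], y[n:]

lam1, lam2 = 1e-3, 1e-3        # illustrative ridge parameters
K_zz  = rbf(Z1, Z1)            # n x n
K_xx  = rbf(X1, X1)            # n x n
K_zz2 = rbf(Z1, Z2)            # n x m

# Stage 1: kernel ridge estimate of the conditional mean embedding of phi(X) given Z,
# evaluated at the stage-2 instruments.
W = np.linalg.solve(K_zz + n * lam1 * np.eye(n), K_zz2)   # n x m

# Stage 2: ridge-regress the stage-2 rewards on the embedded context features.
M = W.T @ K_xx @ W
g2 = np.linalg.solve(M + m * lam2 * np.eye(m), Y2)        # m
alpha = W @ g2                                            # n

def h_hat(x_query):
    """Estimated (deconfounded) reward function at the query points."""
    return rbf(np.atleast_2d(x_query).T, X1) @ alpha

x_grid = np.linspace(-3, 3, 7)
print(np.round(h_hat(x_grid), 2))
```

On this toy data, an ordinary kernel ridge regression of y on x would absorb the confounder's contribution E[u | x] into the fit, whereas the instrumented estimate should roughly track the true reward curve sin(x); inside a bandit algorithm, such a deconfounded estimate is what the upper-confidence-style exploration would be built around.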


