Regret Analysis of Causal Bandit Problems

10/11/2019
by Yangyi Lu, et al.

We study how to learn optimal interventions sequentially given causal information represented as a causal graph along with associated conditional distributions. Causal modeling is useful in real-world problems like online advertising, where complex causal mechanisms underlie the relationship between interventions and outcomes. We propose two algorithms, causal upper confidence bound (C-UCB) and causal Thompson Sampling (C-TS), that enjoy improved cumulative regret bounds compared with algorithms that do not use causal information. We thus resolve an open problem posed by <cit.>. Further, we extend C-UCB and C-TS to the linear bandit setting and propose causal linear UCB (CL-UCB) and causal linear TS (CL-TS) algorithms. These algorithms enjoy a cumulative regret bound that scales only with the feature dimension. Our experiments show the benefit of using causal information. For example, we observe that even after a few hundred iterations, the regret of the causal algorithms is smaller than that of standard algorithms by a factor of three. We also show that under certain causal structures, our algorithms scale better than standard bandit algorithms as the number of interventions increases.
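To illustrate the core idea behind this family of algorithms, the following is a minimal sketch (not the paper's exact algorithm) of a UCB-style causal bandit on a hypothetical toy graph A → Z → Y: each intervention (arm) shifts the distribution of an intermediate variable Z, while the conditional reward E[Y | Z] is shared across arms, so every pull informs the reward estimate of all arms. The graph, probabilities, and variable names here are invented for illustration.

```python
import math
import random
from collections import defaultdict

random.seed(0)

# Hypothetical ground truth (unknown to the learner):
P_Z = [0.2, 0.5, 0.8]        # P(Z = 1 | do(A = a)) for 3 interventions
MU_Y = {0: 0.3, 1: 0.9}      # E[Y | Z = z], shared across all interventions

def pull(a):
    """Perform intervention a; observe intermediate Z and Bernoulli reward Y."""
    z = 1 if random.random() < P_Z[a] else 0
    y = 1 if random.random() < MU_Y[z] else 0
    return z, y

n_arms = len(P_Z)
arm_count = [0] * n_arms           # pulls of each arm
z_count = [0] * n_arms             # times Z = 1 observed under each arm
y_sum = defaultdict(float)         # reward totals, pooled by value of Z
y_n = defaultdict(int)             # reward counts, pooled by value of Z

T = 2000
for t in range(1, T + 1):
    ucbs = []
    for a in range(n_arms):
        if arm_count[a] == 0:
            ucbs.append(float("inf"))
            continue
        # Estimated mean reward of arm a, combining the per-arm estimate of
        # P(Z | do(a)) with the POOLED estimate of E[Y | Z].
        p1 = z_count[a] / arm_count[a]
        mu = sum((p1 if z else 1 - p1)
                 * (y_sum[z] / y_n[z] if y_n[z] else 0.5)
                 for z in (0, 1))
        ucbs.append(mu + math.sqrt(2 * math.log(t) / arm_count[a]))
    a = max(range(n_arms), key=lambda i: ucbs[i])
    z, y = pull(a)
    arm_count[a] += 1
    z_count[a] += z
    y_sum[z] += y
    y_n[z] += 1
```

Because the estimate of E[Y | Z] is pooled across interventions, information propagates through the causal graph: pulling any arm sharpens the reward model used by every other arm, which is the source of the improved regret compared with a standard UCB that treats arms independently.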


