
Regret Analysis of Causal Bandit Problems
We study how to learn optimal interventions sequentially given causal information represented as a causal graph along with its associated conditional distributions. Causal modeling is useful in real-world problems such as online advertising, where complex causal mechanisms underlie the relationship between interventions and outcomes. We propose two algorithms, causal upper confidence bound (CUCB) and causal Thompson sampling (CTS), that enjoy improved cumulative regret bounds compared with algorithms that do not use causal information. We thus resolve an open problem posed by <cit.>. Further, we extend CUCB and CTS to the linear bandit setting and propose causal linear UCB (CLUCB) and causal linear TS (CLTS) algorithms. These algorithms enjoy a cumulative regret bound that scales only with the feature dimension. Our experiments show the benefit of using causal information: for example, we observe that even after a few hundred iterations, the regret of the causal algorithms is a factor of three lower than that of the standard algorithms. We also show that under certain causal structures, our algorithms scale better than standard bandit algorithms as the number of interventions increases.
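For intuition, the non-causal baseline that CUCB is compared against is standard UCB1 run over the set of interventions, treating each intervention as an independent arm. The sketch below is that baseline only, under a hypothetical Bernoulli reward model (the arm probabilities and the `ucb1` helper are illustrative assumptions, not the paper's setup); CUCB would additionally share information across arms through the causal graph's conditional distributions.

```python
import math
import random

def ucb1(reward_fns, horizon, seed=0):
    """Standard UCB1 over a finite set of interventions (arms).

    Each arm is treated independently; no causal structure is used.
    Returns the total collected reward and the per-arm pull counts.
    """
    rng = random.Random(seed)
    n = len(reward_fns)
    counts = [0] * n      # pulls per arm
    means = [0.0] * n     # empirical mean reward per arm
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1   # initialization: pull each arm once
        else:
            # UCB index: empirical mean + exploration bonus
            arm = max(
                range(n),
                key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]),
            )
        r = reward_fns[arm](rng)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # running mean update
        total_reward += r
    return total_reward, counts

# Hypothetical interventions: arm i yields reward 1 with probability p[i].
p = [0.2, 0.5, 0.8]
arms = [lambda rng, pi=pi: 1.0 if rng.random() < pi else 0.0 for pi in p]
total, counts = ucb1(arms, horizon=20000)
```

Over a long enough horizon, the best intervention (here, the last arm) accumulates the vast majority of pulls; the causal algorithms in the paper reach that regime faster because a single interaction can update the estimates of several interventions at once.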