Causal Bandits on General Graphs

07/06/2021
by   Aurghya Maiti, et al.

We study the problem of determining the best intervention in a Causal Bayesian Network (CBN) specified only by its causal graph. We model this as a stochastic multi-armed bandit (MAB) problem with side-information, where the interventions correspond to the arms of the bandit instance. First, we propose a simple regret minimization algorithm that takes as input a semi-Markovian causal graph with atomic interventions and possibly unobservable variables, and achieves Õ(√(M/T)) expected simple regret, where M depends on the input CBN and can be much smaller than the number of arms. We also show that this bound is nearly optimal for CBNs whose causal graphs have an n-ary tree structure. Our simple regret results, both the upper and the lower bound, subsume previous results in the literature, which assumed additional structural restrictions on the input causal graph. In particular, they indicate that the simple regret guarantee of our algorithm can be improved only by imposing more nuanced structural restrictions on the causal graph. Next, we propose a cumulative regret minimization algorithm that takes as input a general causal graph in which all variables are observable, together with atomic interventions, and performs better than the optimal MAB algorithm that ignores causal side-information. We also experimentally compare both of our algorithms with the best known algorithms in the literature. To the best of our knowledge, this work gives the first simple and cumulative regret minimization algorithms for CBNs with general causal graphs, under atomic interventions and in the presence of unobserved confounders.
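
As a rough illustration of the bandit formulation described above (not the paper's algorithm), the sketch below treats each atomic intervention do(X_i = x) as an arm, spends a budget of T rounds exploring uniformly, and reports the simple regret of the recommended arm. The reward model `sample_reward` and the arm means are hypothetical stand-ins for sampling the reward node of a CBN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a CBN: each atomic intervention do(X_i = x) is an
# arm whose reward is a Bernoulli draw with an unknown mean. In the paper's
# setting these means arise from the underlying causal model; here they are
# fixed constants for illustration only.
true_means = np.array([0.3, 0.5, 0.45, 0.7, 0.6])  # one entry per intervention/arm
n_arms = len(true_means)

def sample_reward(arm: int) -> float:
    """Simulate one observation of the reward node under intervention `arm`."""
    return float(rng.random() < true_means[arm])

def uniform_exploration(T: int) -> int:
    """Baseline pure-exploration strategy: spread the budget T evenly over the
    arms and recommend the arm with the highest empirical mean reward."""
    pulls = np.zeros(n_arms)
    sums = np.zeros(n_arms)
    for t in range(T):
        arm = t % n_arms
        sums[arm] += sample_reward(arm)
        pulls[arm] += 1
    return int(np.argmax(sums / pulls))

T = 1000
recommended = uniform_exploration(T)
simple_regret = true_means.max() - true_means[recommended]
print(f"recommended arm: {recommended}, simple regret: {simple_regret:.3f}")
```

The paper's algorithm improves on this uniform baseline by exploiting the causal graph: interventions whose effects can be estimated from observational samples need fewer dedicated pulls, which is why its simple regret scales with the graph-dependent quantity M rather than with the total number of arms.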

