Additive Causal Bandits with Unknown Graph

06/13/2023
by Alan Malek, et al.

We explore action-selection algorithms for the causal bandit setting, in which the learner sequentially intervenes on sets of random variables related by a causal graph and observes samples from the resulting interventional distributions. The learner's goal is to quickly find the intervention, among all interventions on observable variables, that maximizes the expectation of an outcome variable. We depart from previous literature by assuming no knowledge of the causal graph beyond the absence of latent confounders between the outcome and its ancestors. We first show that the unknown-graph problem can be exponentially hard in the number of parents of the outcome. To remedy this, we adopt an additional additive assumption on the outcome, which allows us to cast the problem as an additive combinatorial linear bandit problem with full-bandit feedback. We propose a novel action-elimination algorithm for this setting, show how to apply it to the causal bandit problem, provide sample complexity bounds, and empirically validate our findings on a suite of randomly generated causal models, effectively showing that one does not need to explicitly learn the parents of the outcome to identify the best intervention.
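
To make the additive reduction concrete, the sketch below is an illustrative toy, not the paper's algorithm: it assumes the expected outcome decomposes as a sum of per-variable effects, E[Y | do(X = x)] = sum_i f_i(x_i), and runs a crude successive-elimination loop over each variable's candidate values. The function names, constants, confidence radius, and elimination rule are all placeholder assumptions made for the example.

```python
# A minimal toy sketch, NOT the paper's method: it illustrates how an additive
# outcome, E[Y | do(X = x)] = sum_i f_i(x_i), lets a simple elimination scheme
# find the best intervention without learning the graph. All names, constants,
# and the elimination rule here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_vars, n_vals = 4, 3                  # hypothetical: 4 intervenable variables, 3 values each
f = rng.normal(size=(n_vars, n_vals))  # hidden additive effects f_i(x_i)


def pull(assignment, noise=1.0):
    """Sample the outcome Y under the intervention do(X = assignment)."""
    mean = sum(f[i, v] for i, v in enumerate(assignment))
    return mean + noise * rng.normal()


def eliminate(n_rounds=50, samples_per_arm=5):
    """Crude successive elimination over each variable's candidate values.

    Under additivity, values of the same variable can be compared while the
    remaining coordinates are held at a common baseline, since the baseline's
    contribution is shared and cancels in the comparison.
    """
    alive = [set(range(n_vals)) for _ in range(n_vars)]
    means = np.zeros((n_vars, n_vals))
    counts = np.zeros((n_vars, n_vals))
    for _ in range(n_rounds):
        for i in range(n_vars):
            baseline = [min(alive[j]) for j in range(n_vars)]  # shared baseline this round
            for v in list(alive[i]):
                assignment = list(baseline)
                assignment[i] = v
                for _ in range(samples_per_arm):
                    y = pull(assignment)
                    counts[i, v] += 1
                    means[i, v] += (y - means[i, v]) / counts[i, v]
        # Drop values whose empirical mean is clearly below the best survivor.
        for i in range(n_vars):
            radius = 2.0 / np.sqrt(counts[i, list(alive[i])].min())
            best = max(means[i, v] for v in alive[i])
            alive[i] = {v for v in alive[i] if means[i, v] >= best - 2 * radius}
    return [max(alive[i], key=lambda v: means[i, v]) for i in range(n_vars)]


print("estimated best intervention:", eliminate())
print("true best intervention:     ", [int(np.argmax(f[i])) for i in range(n_vars)])
```

Because each surviving value of a variable is sampled against the same baseline within a round, the additive contribution of the other coordinates cancels when values of that variable are compared, which is what makes the per-variable elimination sound in this toy setting.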


Related research

Causal Bandits with Propagating Inference (06/06/2018)
Bandit is a framework for designing sequential experiments. In each expe...

Causal Bandits with Unknown Graph Structure (06/05/2021)
In causal bandit problems, the action set consists of interventions on v...

Online Non-Additive Path Learning under Full and Partial Information (04/18/2018)
We consider the online path learning problem in a graph with non-additiv...

Efficiently Learning and Sampling Interventional Distributions from Observations (02/11/2020)
We study the problem of efficiently estimating the effect of an interven...

On the Actionability of Outcome Prediction (09/08/2023)
Predicting future outcomes is a prevalent application of machine learnin...

Pure Exploration of Causal Bandits (06/16/2022)
Causal bandit problem integrates causal inference with multi-armed bandi...

Combinatorial Causal Bandits (06/04/2022)
In combinatorial causal bandits (CCB), the learning agent chooses at mos...
