Interventions and Counterfactuals in Tractable Probabilistic Models: Limitations of Contemporary Transformations

01/29/2020
by Ioannis Papantonis, et al.

In recent years, there has been increasing interest in studying causality-related properties in machine learning models generally, and in generative models in particular. While this interest is well motivated, it inherits the fundamental computational hardness of probabilistic inference, making exact reasoning intractable. Tractable probabilistic models, usually learned from data, have also recently emerged; they guarantee that conditional marginals can be computed in time linear in the size of the model. Although initially limited to low tree-width models, recent tractable models such as sum-product networks (SPNs) and probabilistic sentential decision diagrams (PSDDs) exploit efficient function representations and can also capture high tree-width models. In this paper, we ask the following technical question: can we use the distributions represented or learned by these models to perform causal queries, such as reasoning about interventions and counterfactuals? By appealing to existing ideas for transforming such models into Bayesian networks, we answer mostly in the negative. We show that when an SPN is transformed into a causal graph, interventional reasoning reduces to computing marginal distributions; in other words, only trivial causal reasoning is possible. For PSDDs the situation is only slightly better. We first provide an algorithm for constructing a causal graph from a PSDD, which introduces augmented variables. Intervening on the original variables, once again, reduces to marginal distributions, but when intervening on the augmented variables, a deterministic, but nonetheless causal, semantics can be provided for PSDDs.
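To make the tractability claim concrete, here is a minimal sketch (not code from the paper) of a toy SPN over two binary variables. Marginals are obtained in a single bottom-up pass by setting the indicators of marginalized variables to 1; this linear-time marginal computation is exactly the operation that, per the paper's result, interventional queries on an SPN-derived causal graph reduce to. The node encoding and variable names are illustrative assumptions.

```python
# Toy sum-product network over binary variables X1, X2.
# Nodes: ("leaf", var, value), ("prod", [children]), ("sum", [(weight, child), ...]).

def eval_spn(node, indicators):
    """One bottom-up pass; cost is linear in the number of edges."""
    kind = node[0]
    if kind == "leaf":
        _, var, val = node
        return indicators[(var, val)]
    if kind == "prod":
        result = 1.0
        for child in node[1]:
            result *= eval_spn(child, indicators)
        return result
    if kind == "sum":
        return sum(w * eval_spn(child, indicators) for w, child in node[1])
    raise ValueError(f"unknown node kind: {kind}")

# Mixture of two fully factorized components:
# P(X1, X2) = 0.6 * P1(X1) P1(X2) + 0.4 * P2(X1) P2(X2)
spn = ("sum", [
    (0.6, ("prod", [("sum", [(0.8, ("leaf", "X1", 1)), (0.2, ("leaf", "X1", 0))]),
                    ("sum", [(0.3, ("leaf", "X2", 1)), (0.7, ("leaf", "X2", 0))])])),
    (0.4, ("prod", [("sum", [(0.1, ("leaf", "X1", 1)), (0.9, ("leaf", "X1", 0))]),
                    ("sum", [(0.5, ("leaf", "X2", 1)), (0.5, ("leaf", "X2", 0))])])),
])

# Marginal P(X1 = 1): clamp X1's indicators to the query, set X2's to 1.
ind = {("X1", 1): 1.0, ("X1", 0): 0.0, ("X2", 1): 1.0, ("X2", 0): 1.0}
p_x1 = eval_spn(spn, ind)  # 0.6*0.8 + 0.4*0.1 = 0.52
```

The point of the sketch is that marginalization is "free" in this representation, so a causal semantics in which intervention collapses to marginalization adds nothing beyond what the SPN already computes.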

