Bayesian Causal Induction

11/03/2011 ∙ by Pedro A. Ortega ∙ Universidad de Chile

Discovering causal relationships is a hard task, often hindered by the need for intervention, and often requiring large amounts of data to resolve statistical uncertainty. However, humans quickly arrive at useful causal relationships. One possible reason is that humans extrapolate from past experience to new, unseen situations: that is, they encode beliefs over causal invariances, allowing for sound generalization from the observations they obtain from directly acting in the world. Here we outline a Bayesian model of causal induction where beliefs over competing causal hypotheses are modeled using probability trees. Based on this model, we illustrate why, in the general case, we need interventions plus constraints on our causal hypotheses in order to extract causal information from our experience.


1 Introduction

A fundamental problem of statistical causality is the problem of causal induction (for a thorough treatment of non-causal induction, refer to [8]); namely, the generalization from particular instances to abstract causal laws [5, 4]. For instance, how can you conclude that it is dangerous to ride a bike on ice from a bad slip on a wet floor?

In this work, we are concerned with the following problem: how do we determine from experience which of two variables causes the other? That is, which of the two competing causal hypotheses over the pair is correct,

[Diagram: the two candidate causal graphs, one for each direction of the causal arrow between the two variables.]

even in the case when both models represent identical joint distributions? Furthermore, if we collect evidence supporting the claim that one variable causes the other, how do we extrapolate this to a (yet unseen) pair of variables? The main challenge in this problem is that the hypothesis is itself a random variable that controls the very causal structure. That is, a more accurate graphical representation would be the model:

[Diagram: a meta-level hypothesis variable selecting between the two candidate causal graphs.]

which cannot be analyzed using the framework of graphical models alone, because the hypothesis variable operates on a meta-level of the graphical model over the remaining variables.

In this work these difficulties are overcome by using a probability tree to model the causal structure over the random events [9]. Probability trees can encode alternative causal realizations, and in particular alternative causal hypotheses (conditional independencies are also captured within a probability tree [9, Chapter 8.2]). All random variables are of the same type; no distinctions between meta-levels are needed. Furthermore, we define interventions [7] on probability trees so as to predict the statistical behavior of the system after manipulation. We then show that such a formalization leads to a probabilistic method for causal induction that is intuitively appealing.

2 Causal Induction in Probability Trees

Imagine we are given a device with two light bulbs, one green and one red, whose states obey a hidden mechanism that correlates them positively. Moreover, the box has a switch that allows us to control the state of the green bulb: we can either leave it undisturbed, or we can intercept the mechanism by turning the light on or off as we please (Figure 1, left device). Each light is either in the “on” or the “off” state. We ponder the explanatory power of two competing hypotheses: either “green causes red” or “red causes green”.


Figure 1: (Left) A device with a green and a red light bulb. A switch allows controlling the state of the green light: either “on”, “off” or “undisturbed”. (Right) A second device with a green spinner and a red spinner, both of which can lock into either a horizontal or a vertical position. The two devices are connected through a cable, thus establishing a relation between their randomizing mechanisms.

2.1 Representation

Assume that the probabilities governing the realization of the hypothesis and the two lights are as detailed in Figure 2a. In this tree, each (internal) node is interpreted as a causal mechanism; hence a path from the root node to one of the leaves corresponds to a particular sequential realization of causal mechanisms (note that the set of paths is the sample space of the experiment’s probability space). The logic underlying the structure of this tree is self-explanatory:

  1. Causal precedence: A node causally precedes its descendants. For instance, the root node corresponding to the sure event causally precedes all other nodes.

  2. Resolution of variables: Each node resolves the value of a random variable. For instance, given the node reached after resolving the hypothesis “green causes red” and the state of the green light, the red light turns out to be either “on” or “off”, each with the probability attached to the corresponding branch.

  3. Heterogeneous order: The resolution order of random variables can vary across different branches. For instance, the green light is resolved before the red light under the hypothesis “green causes red”, but the red light is resolved before the green light under “red causes green”. This allows modeling different causal hypotheses.

While the probability tree represents our subjective model explaining the order in which the random values are resolved, it does not necessarily correspond to the temporal order in which the events are revealed to us. For instance, under the hypothesis “green causes red”, the state of the red light might be revealed to us before the state of the green light, even though the green light causally precedes the red one; and the hypothesis itself, which causally precedes both lights, is never observed.


Figure 2: a) The probability tree representing the statistics of the device with two lights. The probability of a realization (written under the leaves) is calculated by multiplying the probabilities along the path from the root until a leaf is reached. Note that the two hypotheses are statistically indistinguishable. b) The probability tree resulting from (a) after setting the green light to “on”.
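To make the representation concrete, here is a minimal Python sketch of such a probability tree for the two-light device. Since the numerical probabilities of Figure 2a are not reproduced in this text, the values below are illustrative placeholders only, and the names (H for the hypothesis, G and R for the green and red lights) are introduced solely for this sketch; they are not notation from the paper.

```python
# A probability tree for the two-light device (Section 2.1), written as nested
# tuples.  Each internal node is (variable, [(outcome, probability, child), ...]);
# a leaf is None.  The root resolves the hypothesis H, and the order in which
# the green light G and the red light R are resolved differs between the two
# branches, encoding "green causes red" versus "red causes green".
# NOTE: the numeric values are illustrative placeholders, not those of Figure 2a.
tree = ("H", [
    # Hypothesis "green causes red": G is resolved before R.
    ("G->R", 0.5, ("G", [
        ("on",  0.5, ("R", [("on", 0.8, None), ("off", 0.2, None)])),
        ("off", 0.5, ("R", [("on", 0.2, None), ("off", 0.8, None)])),
    ])),
    # Hypothesis "red causes green": R is resolved before G.
    ("R->G", 0.5, ("R", [
        ("on",  0.5, ("G", [("on", 0.8, None), ("off", 0.2, None)])),
        ("off", 0.5, ("G", [("on", 0.2, None), ("off", 0.8, None)])),
    ])),
])

def realizations(node, path=(), prob=1.0):
    """Enumerate all root-to-leaf paths together with their probabilities,
    multiplying the edge probabilities from the root down to the leaf
    (the rule described in the caption of Figure 2a)."""
    if node is None:
        yield dict(path), prob
        return
    variable, edges = node
    for outcome, p, child in edges:
        yield from realizations(child, path + ((variable, outcome),), prob * p)

# With these placeholder numbers both hypotheses induce the same joint
# distribution over (G, R), so they are observationally indistinguishable,
# mirroring the remark in the caption of Figure 2.
for assignment, p in realizations(tree):
    print(assignment, round(p, 3))
```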

2.2 Interventions

Suppose we observe that both lights are on. Have we learned anything about their causal dependency? A brief calculation shows that this is not the case, because the posterior probabilities are equal to the prior probabilities: the posterior of “green causes red” given that both lights are on equals its prior, and likewise for “red causes green”.

This makes sense intuitively, because by just observing that the two lights are on, it is statistically impossible to tell which one caused the other. Note how the factorization of the likelihood depends on which of the two hypotheses holds, even though the resulting joint probability is the same. How do we extract causal information then? To answer this question, we make use of a crucial insight of statistical causality:

To obtain new causal information from statistical data, old causal information needs to be supplied (paraphrased as “no causes in, no causes out” [2] or “to find out what happens when you kick the system, you have to kick the system” [1]).

Thus, we now repeat our experiment, but this time we turn on the green light ourselves. We reflect this choice by changing all the mechanisms that resolve the state of the green light, placing all the probability mass on the outcome “on” (see Figure 2b). Assume that we subsequently observe that the red light is also on. Then the posterior probability of “green causes red” under the intervention exceeds that of “red causes green”,

where do(·) is Pearl’s notation to indicate a causal intervention. Since the posterior of “green causes red” now exceeds its prior, we have gathered evidence favoring the hypothesis “green causes red”. This was only possible because our intervention introduced a statistical asymmetry between the two hypotheses that did not exist before.
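The two inferences of this section can be replayed numerically. The sketch below reuses the placeholder numbers from the previous snippet (a fair prior over the hypotheses, the causally first light being “on” with probability 0.5, and the effect copying the state of its cause with probability 0.8); these are assumptions for illustration, not the paper’s values. Observation alone leaves the posterior equal to the prior, whereas the intervention do(green = on) makes the likelihood asymmetric.

```python
# Posterior over the two causal hypotheses, first after a pure observation and
# then after an intervention (Section 2.2).  All numbers are illustrative.
prior = {"G->R": 0.5, "R->G": 0.5}
p_cause_on = 0.5   # probability that the causally first light is "on"
p_copy = 0.8       # probability that the effect copies the state of its cause

# --- Pure observation: both lights are found to be "on". --------------------
# The likelihood factorizes differently under the two hypotheses, but it
# evaluates to the same value, so the posterior equals the prior and no
# causal information is gained.
lik_obs = {
    "G->R": p_cause_on * p_copy,   # P(G=on) * P(R=on | G=on)
    "R->G": p_cause_on * p_copy,   # P(R=on) * P(G=on | R=on)
}
evidence = sum(prior[h] * lik_obs[h] for h in prior)
post_obs = {h: prior[h] * lik_obs[h] / evidence for h in prior}
print("after observation:", post_obs)    # both hypotheses stay at 0.5

# --- Intervention: we force the green light on, do(G = on). -----------------
# Every mechanism resolving G now puts all its mass on "on" (cf. Figure 2b),
# so only the mechanisms not affected by the intervention contribute.
lik_do = {
    "G->R": 1.0 * p_copy,       # G is forced; P(R=on | G=on) = p_copy
    "R->G": p_cause_on * 1.0,   # R is resolved first, untouched by the intervention
}
evidence = sum(prior[h] * lik_do[h] for h in prior)
post_do = {h: prior[h] * lik_do[h] / evidence for h in prior}
print("after intervention:", post_do)    # "G->R" now carries the larger posterior
```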

2.3 Extrapolation

Let us now connect a second device to the first one (Figure 1, right device). This device carries two spinners, a green one and a red one. A hidden randomizing mechanism chooses their orientations (either horizontal or vertical) independently of the state of the colored lights. However, the connection and the mysterious color coding suggest that there must be a relation between the two randomizing mechanisms. Hence, we impose that the combined system either follows the law “green causes red” or the law “red causes green”, intentionally excluding the mixed cases in which the lights follow one causal direction and the spinners follow the other.

The probability tree over the lights and the spinners extends the probability tree from Figure 2a by appending sub-trees over the spinner orientations, with the restriction that the nodes resolving the green spinner precede the nodes resolving the red spinner under the hypothesis “green causes red”, and that the nodes resolving the red spinner precede the nodes resolving the green spinner under the hypothesis “red causes green”.

Note, however, that for this new tree, the posterior probability of the hypothesis “green causes red”, given that we turned on the green light and saw the red bulb lighting up, is identical to the one obtained in the previous tree. The restriction we have imposed on the possible causal hypotheses has enabled us to extrapolate causal information from our experience with the lights to the yet unobserved spinners. This extrapolation would not have been possible if we had kept all four causal hypotheses. Hence, in the general case, causal extrapolation rests on constraints on our causal hypotheses.
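The extrapolation step itself amounts to nothing more than reading the posterior of the shared hypothesis variable off the previous computation. The following lines, again with the illustrative (rounded) numbers from the sketches above rather than values from the paper, spell this out.

```python
# Extrapolation (Section 2.3): the constraint makes the lights and the spinners
# share one hypothesis variable, so the posterior learned from the lights is
# automatically the posterior over the causal direction between the spinners.
# The numbers are the (rounded) illustrative posteriors from the previous sketch.
posterior_lights = {"green causes red": 0.615, "red causes green": 0.385}

# Probability that the green spinner's mechanism is resolved before (i.e.
# causally precedes) the red spinner's, although the spinners were never observed:
p_green_spinner_precedes = posterior_lights["green causes red"]
print(p_green_spinner_precedes)   # inherited directly from the light experiment
```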

3 Concluding Remarks

The problem of causal induction has been addressed relatively recently by the statistics and machine learning community, mainly in the context of graphical models [7, 11, 10, 3, 4, 6]. This has led to the development of many algorithms that propose a suitable causal graphical model explaining the data. Many of these algorithms rely on independence assumptions, and hence they naturally proceed by exploiting the independence relations found in the data to construct a causal model.

This work outlines a general method for causal induction that is Bayesian in nature and does not rely on independence assumptions. It is based on the idea of combining probability trees [9] with interventions [7] for predicting the behavior of a manipulated system with multiple causal hypotheses. We have seen that both the interventions and the (constraints on the) causal hypotheses introduce statistical asymmetries that permit the extraction and extrapolation of causal information. Of course, this means that the amount and the forms of causal relations that we can discover are determined (a) by our constraints on the set of causal hypotheses and (b) by the interventions that we are allowed to apply to the system (and essentially, to our hypotheses). In a sense, one could say that we are “imprinting our own causal laws onto our experience”. However, this raises more fundamental questions that we have not addressed here: where do these constraints on our causal hypotheses come from and what logic do they obey?

Acknowledgments

The author would like to thank Daniel A. Braun, Philip Dawid, David Balduzzi, Samory Kpotufe, Theofanis Karaletsos, Eleni Sgouritsa and Marcus Hutter for discussions of previous versions of this manuscript. This study was supported by the Emmy Noether Grant BR 4164/1-1, “Computational and Biological Principles of Sensorimotor Learning”.

References

  • [1] G.E.P. Box. Use and abuse of regression. Technometrics, 8(4):625–629, 1966.
  • [2] N. Cartwright. Nature’s Capacities and Their Measurements. Oxford University Press, 1994.
  • [3] A. P. Dawid. Fundamentals of statistical causality. Research Report 279, Department of Statistical Science, University College London, 2007.
  • [4] T.L. Griffiths and J.B. Tenenbaum. From mere coincidences to meaningful discoveries. Cognition, 103(2):180–226, 2007.
  • [5] D. Hume. A Treatise of Human Nature. 1739-1740.
  • [6] J.M. Mooij, O. Stegle, D. Janzing, K. Zhang, and B. Schölkopf. Probabilistic latent variable models for distinguishing between cause and effect. In Advances in Neural Information Processing Systems, 2010.
  • [7] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, Cambridge, UK, 2009.
  • [8] S. Rathmanner and M. Hutter. A philosophical treatise of universal induction. Entropy, 13(6):1076–1136, 2011.
  • [9] G. Shafer. The Art of Causal Conjecture. MIT Press, 1996.
  • [10] R. Silva. Automatic Discovery of Latent Variable Models. PhD thesis, Machine Learning Department, Carnegie Mellon University, 2005.
  • [11] P. Spirtes and R. Scheines. Causation, Prediction, and Search, Second Edition. MIT Press, 2001.
