1 Introduction
Three fundamental obstacles stand in the way of strong AI: robustness (or adaptability), explainability, and the lack of understanding of cause-effect connections. Pearl claims that all these obstacles can be overcome using causal modeling tools, in particular causal diagrams and their associated logic [12]. How can machines represent causal knowledge in a way that would enable them to access the necessary information swiftly, answer questions correctly, and do it with ease, as a human can? This question, referred to as the "mini-Turing test" for an AI to pass, has been Pearl's life work [1]. Pearl's perfect intervention is a key idea of the causal semantics invented during the causal revolution of the last three decades.
However, the perfect intervention is not perfect for at least four reasons:

Perfect interventions refer to "removing of causal mechanisms" or "a minimal change on mechanisms", which goes against the intuition that causal physical laws are invariant. Einstein's theory of relativity tells us that time, length, and mass can change across coordinate systems, but the causal relationships between events remain, which is the source of the robust predictions that causal models make about the physical world. In our view, we cannot impose "a minimal change on mechanisms"; instead, we can intervene on the information input of a local causal mechanism.

An inconsistency with the potential outcome framework exists, which can be very confusing, although the equivalence of the two frameworks of causal diagrams and potential outcomes has been proved. We will show what the inconsistency is and how it is solved by our info intervention in the following parts.

Complications arise for perfect interventions when cycles are present in the SCM; for instance, solvability is not preserved under perfect intervention.

Practical questions, e.g. what if I make some event happen in a particular way, or what if my mom makes me go to bed before 10 p.m., cannot be articulated by the perfect intervention [7].
In this paper, we first list some preliminaries and give a new rigorous formalization of structural causal models (SCMs) and the definition of an SCM intervened by a perfect intervention. Our main goal is to offer a modified perfect intervention, the info intervention, that avoids these complications as much as possible; we only consider the case of acyclic SCMs in this conceptual work. Second, we propose the concept of info intervention and show how it can solve or relieve some issues of the perfect intervention. Lastly, we compare the two interventions and their properties, and a discussion of correlation and common causes is given.
2 Preliminaries
Definition 2.1 (Structural Causal Model).
A structural causal model (SCM) by definition consists of:

a set of nodes V ∪ W, where elements of V correspond to visible variables and elements of W to latent variables,

a visible/latent space X_v for every node v ∈ V ∪ W. (Visible variables are also known as observational variables; the reason for using the terms visible/latent is to emphasize that they are treated equally in the graph. A latent variable is not exogenous: it encodes information in the environment other than the structural equations; see Figure 1.)

a directed graph structure G = (V ∪ W, E), with a system of structural equations f = (f_v)_{v ∈ V}:
X_v = f_v(X_{pa(v)}) for v ∈ V,
where pa(v) ⊆ V ∪ W denotes the parents of v in G and f_v : ∏_{u ∈ pa(v)} X_u → X_v is measurable.
The SCM can be summarized by the tuple ⟨G, X, f⟩. G is referred to as the augmented functional graph (or causal graph), while the functional graph, which includes only the visible variables, is denoted G_V [3]. For simplicity, we assume throughout this paper that G is a directed acyclic graph (DAG), since an acyclic SCM is always solvable and has many elegant properties.
Every instantiation of the latent variables uniquely determines the values of all variables in the model [11]. The set of root nodes in G (a root node is a node without parents; a root node can have many children, and each visible node can have many root-node parents) equals the set W of latent variables, whose components are mutually independent by definition.
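The unique-determination property can be illustrated in code. A minimal Python sketch (variable names and equations invented for illustration): an acyclic SCM is solved by repeatedly evaluating every equation whose inputs are already available, so each choice of latent values determines all visible values.

```python
# Minimal sketch: solving an acyclic SCM by forward evaluation.
def solve(parents, f, latents):
    """parents: visible node -> list of its parent nodes;
    f: visible node -> structural equation taking a dict of parent values;
    latents: latent node -> value."""
    values = dict(latents)
    pending = [v for v in parents if v not in values]
    while pending:
        for v in list(pending):
            if all(p in values for p in parents[v]):      # all inputs available
                values[v] = f[v]({p: values[p] for p in parents[v]})
                pending.remove(v)
        # acyclicity guarantees at least one node resolves on every pass
    return values

# hypothetical chain: latent W feeds visible A, which feeds visible B
parents = {"A": ["W"], "B": ["A"]}
f = {"A": lambda p: p["W"] + 1, "B": lambda p: 2 * p["A"]}
print(solve(parents, f, {"W": 3}))   # {'W': 3, 'A': 4, 'B': 8}
```

Acyclicity guarantees that every pass through the pending list resolves at least one variable, so the loop terminates; with cycles, the loop could stall, which foreshadows the solvability subtleties discussed later.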
Remark 2.2 (No causal relationships among visible variables).
In the special case when there are no causal relationships among the visible variables, we can still model them through an SCM with structural equations; such an SCM is a generative model. Great progress on deep generative models has been made with a unified framework that treats generation and inference in a symmetric way [8].
Remark 2.3 (Visible variables are generated by latent variables).
In the definition of an SCM we make the latent root assumption, i.e. every root node is latent, which is suggested by the data-generating-process interpretation starting from the latent variables. But some structural equations (with or without cycles) violate this assumption, in which case some visible variables are determined directly by environment information; then we can introduce pseudo latent variables that generate them through a generative model, and make the model satisfy the latent root assumption.
Pearl summarized the definition and interpretation of the perfect intervention in [11, 1, 10]: to model an action do(X = x), one performs the minimal change necessary for establishing the antecedent X = x, while leaving the rest of the model intact. This calls for removing the mechanism (equation) that nominally assigns values to the variable X, and replacing it with a new equation, X = x, that enforces the intent of the specified action. Formally [3]:
Definition 2.4 (Perfect intervention).
Given an SCM M, the perfect intervention do(I, ξ_I), with intervention target I ⊆ V and value ξ_I, maps M to the intervened model M_{do(I, ξ_I)}, in which the structural equations are replaced by
X_v = ξ_v for v ∈ I, and X_v = f_v(X_{pa(v)}) for v ∉ I.
(A perfect intervention is usually imposed on visible variables.)
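As a sketch of this definition, a perfect intervention simply swaps the intervened mechanisms for constant functions; the toy equations below are invented for illustration.

```python
# Sketch of a perfect intervention: the mechanism of each intervened
# variable is removed and replaced by a constant function.
def perfect_intervention(equations, targets):
    """equations: node -> structural equation; targets: node -> forced value."""
    intervened = dict(equations)
    for node, value in targets.items():
        intervened[node] = lambda parents, v=value: v   # X_v := xi_v
    return intervened

# hypothetical mechanisms: X reads Z, Y reads X and Z
eqs = {"X": lambda p: p["Z"], "Y": lambda p: p["X"] + p["Z"]}
eqs_do = perfect_intervention(eqs, {"X": 0})
assert eqs_do["X"]({"Z": 5}) == 0          # intervened mechanism is now constant
assert eqs_do["Y"]({"X": 0, "Z": 5}) == 5  # other mechanisms are left intact
```

Note that the intervened model no longer contains the original mechanism of X at all, which is exactly the "removal of a mechanism" that the next section takes issue with.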
3 Info intervention
We assume that every visible variable receives information from the edges pointing into it, starts processing once all input-edge information has been collected, and then sends information out along all of its emitting edges. This information view of causal systems is the signal-flow graph (SFG) invented by Claude Shannon, often called a Mason graph after Samuel Jefferson Mason, who coined the term. We now present our definition of the info intervention.
Definition 3.1 (Info intervention).
Given an SCM M, the info intervention σ(I, ξ_I) maps M to the intervened model M_{σ(I, ξ_I)}, in which every structural equation is kept, but each argument coming from an intervened variable is replaced by the intervened value:
X_v = f_v(X_{pa(v)∖I}, ξ_{pa(v)∩I}) for all v,
i.e. every input X_u with u ∈ I ∩ pa(v) is replaced by the constant ξ_u.
For the simplest case, in which there is no causal relationship among the visible variables, see Remark 2.2. A perfectly intervened SCM is the same as the original SCM except for the intervened variables. For an info intervention, the intervened SCM is exactly the same as the original SCM, except that the information sent out from the intervened variables changes to the intervened values.
Definition 3.2 (Info intervened causal graph).
For an SCM M, the causal graph of the post-info-intervention SCM M_{σ(I, ξ_I)} is the graph obtained by removing from G all edges pointing out of the intervened nodes I.
Example 3.3.
For an SCM with a treatment, confounders, and an outcome, see Figure 2. The structural equations are:
The perfectly intervened SCM is:
The info-intervened SCM is:
The original SCM is shown in Figure 1(a), the perfectly intervened SCM in Figure 1(b), and the info-intervened SCM in Figure 1(c). The functional graphs of the two intervened SCMs can be seen in Figure 3.
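To make the contrast concrete, here is a minimal Python sketch of the two interventions on a hypothetical confounded triple; the structural equations below are invented for illustration and are not the ones from the example above.

```python
# Hypothetical SCM: confounder Z -> treatment X -> outcome Y, plus Z -> Y,
# with latent inputs u_z, u_x, u_y.
def base_scm():
    return {
        "Z": lambda v: v["u_z"],
        "X": lambda v: 1 if v["Z"] + v["u_x"] > 1 else 0,
        "Y": lambda v: v["X"] + v["Z"] + v["u_y"],
    }

def solve(scm, latents):
    v = dict(latents)
    for name in ("Z", "X", "Y"):               # topological order
        v[name] = scm[name](v)
    return v

def do(scm, var, value):
    # perfect intervention: replace the mechanism of `var` by a constant
    out = dict(scm)
    out[var] = lambda v, value=value: value
    return out

def info(scm, var, value):
    # info intervention: keep every mechanism; only the information the
    # other variables receive from `var` is replaced by the constant
    out = {}
    for name, f in scm.items():
        out[name] = f if name == var else (lambda v, f=f: f({**v, var: value}))
    return out

u = {"u_z": 1, "u_x": 1, "u_y": 0}
after_do = solve(do(base_scm(), "X", 0), u)
after_info = solve(info(base_scm(), "X", 0), u)
assert after_do["Y"] == after_info["Y"] == 1       # descendants agree
assert after_do["X"] == 0 and after_info["X"] == 1  # models differ only on X
```

Under do, X itself takes the forced value; under σ, X keeps its own mechanism and only the message its children receive is replaced, so the two intervened models differ only on the intervened variable.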
4 Why info intervention instead of perfect intervention?
What makes the perfect intervention not perfect, when so much has been achieved with it, including the seven tools of causal inference developed around Pearl's perfect intervention [12]? We give four reasons: the first concerns the belief in invariant causal mechanisms, the second an inconsistency with the potential outcome framework, the third the complications of cyclic SCMs, and the last the articulation of practical questions.
1. Invariance of the causal assignment mechanisms in the post-intervened SCM.
First, a perfect intervention defines "a minimal change on mechanisms" for the intervened visible variables. We argue that the local causal mechanism system represented by these variables never changes. Einstein's theory of relativity tells us that time, length, and mass can change across coordinate systems, but the causal relationships between events remain, which is the source of the robust predictions that causal models make about the physical world. In contrast, we consider an info intervention to intervene on the information carried by the edges pointing out of the intervened variables while keeping the local mechanisms.
In Example 5.2, the assignment mechanism of the outcome in the post-perfect-intervened SCM differs from that in the original SCM, but it remains the same in the post-info-intervened SCM. Generally, in a model intervened by a perfect intervention, the mechanisms that assign values to the intervened visible variables are changed to constants. In the definition of the info intervention, all causal mechanisms remain the same, while the information sent to a visible variable changes from the emitted values to the intervened values.
2. Consistency with the potential outcome framework.
The Rubin causal model (RCM) is an approach to the statistical analysis of cause and effect based on the framework of potential outcomes, named after Donald Rubin; the potential outcome framework has been considered an equivalent framework to SCMs, see [9]. The key assumption of the potential outcomes framework is ignorability (or exchangeability), a conditional independence between the potential outcome and the treatment given covariates, used for estimating the causal effect of the treatment on the outcome. But the treatment is a variable in the original SCM while the potential outcome is a variable in the post-perfect-intervened SCM, which means the two variables live in different models, and yet we are required to establish a conditional independence between them. In Example 5.2 we are required to establish conditional independence, given the confounders, between a variable in Figure 1(a) and a variable in Figure 1(b). This inconsistency of searching for conditional independence between variables from different models exists for the perfect intervention but not for the info intervention. For the info intervention, all descendants of the intervened variables are influenced by the information sent out along the emitting edges, resulting in the potential outcome, and the non-descendants do not change. In Example 5.2, both variables lie in the intervened model, and we can use the graphical criterion of d-separation to induce ignorability.
Theorem 4.1 (Adjustment formula).
For an info intervention σ(X = x) with X ⊆ V on an acyclic SCM, if the adjustment set Z contains no descendants of X and d-separates X and Y on the augmented functional graph of the post-intervention SCM, then
P(y | σ(x)) = Σ_z P(y | x, z) P(z).
Proof.
Since Z contains no descendants of X and the graph is a directed acyclic graph, the required conditional independences hold by the d-separation property, and then, expanding over Z,
P(y | σ(x)) = Σ_z P(y | σ(x), z) P(z) = Σ_z P(y | x, z) P(z).
∎
In fact, for an acyclic SCM, the post-intervened model of the info intervention differs from that of the perfect intervention only on the set of intervened variables. This means that almost every elegant property of the perfect intervention for DAGs has a counterpart for the info intervention. The theorem above is the counterpart of the backdoor criterion and adjustment formula of the perfect intervention. In Figure 2, if the adjustment set (containing no descendants of the treatment) blocks all backdoor paths, then the treatment and the outcome are d-separated by it. In conclusion, the post-intervened model of the info intervention contains the cause-effect pairs, which facilitates the direct use of graphical criteria for identifiability.
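The adjustment identity can be checked by exact enumeration on a small binary model; all the probabilities below are invented for illustration.

```python
# Binary confounded triple: Z -> X, Z -> Y, X -> Y (numbers hypothetical).
pZ = {0: 0.6, 1: 0.4}                               # P(Z = z)
pX = {0: 0.2, 1: 0.7}                               # P(X = 1 | Z = z)
pY = {(0, 0): 0.1, (0, 1): 0.5,
      (1, 0): 0.4, (1, 1): 0.8}                     # P(Y = 1 | X = x, Z = z)

# Under the info intervention sigma(X = 1), Y's mechanism is unchanged but
# its input from X is the constant 1, while X and Z are generated as before:
p_sigma = sum(pZ[z] * pY[(1, z)] for z in (0, 1))

# Adjustment formula: sum over z of P(y | x, z) P(z)
p_adjust = sum(pY[(1, z)] * pZ[z] for z in (0, 1))

# Naive conditioning P(Y = 1 | X = 1) is biased by the common cause Z
p_x1 = sum(pZ[z] * pX[z] for z in (0, 1))
p_naive = sum(pZ[z] * pX[z] * pY[(1, z)] for z in (0, 1)) / p_x1

assert abs(p_sigma - p_adjust) < 1e-12   # the formula holds exactly
assert abs(p_sigma - p_naive) > 0.05     # conditioning is not intervening
```

Here p_sigma = p_adjust = 0.56 while p_naive = 0.68, illustrating both the adjustment identity and the confounding bias of naive conditioning.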
3. SCMs when cycles are present.
In many application domains cycles are abundantly present, and one then encounters various technical complications for perfect interventions, including probabilistic settings, stability under interventions, etc. [3, 5, 4]. The solvability problem for SCMs is tricky: an SCM can have many different solutions, and solvability is not preserved under perfect intervention [3]. In our info interpretation of an SCM, if the probability of the intervened event is positive w.r.t. the probability measure on the latent space induced by a solution, then the intervened model is solvable. A further discussion of cyclic SCMs is left for future work.
4. Formalization of practical problems.
"What if" kinds of questions (interventional and counterfactual questions) cannot be articulated, let alone answered, by systems operating in a purely statistical mode [12]. But some practical questions still cannot be articulated by the perfect intervention [7], e.g. what if I make myself get up before 7:00 a.m., in which case both what event happens and the way it happens matter. The information will pass out through the emitting edges of the info-intervened variables. This means the information view of SCMs is more expressive for practical questions.
5 Causal calculus for info intervention
Pearl's three rules for the perfect intervention enable us to identify causal queries. For our info intervention on acyclic SCMs, the causal calculus takes a more intuitive form.
Theorem 5.1 (Causal calculus for info intervention).
For an acyclic SCM, let X, Y, Z, and W be arbitrary disjoint sets of visible variables. Then, for the info intervention σ(x):
Rule 1 (Insertion/deletion of observations):
in the case that the conditioning set blocks all paths from the inserted observations to the outcome variables after all arrows emitting from the intervened variables have been deleted.
Rule 2 (Action/observation exchange):
in the case that the conditioning set satisfies the backdoor criterion.
Rule 3 (Insertion/deletion of actions):
in the case that no causal paths connect the intervened variables and the outcome variables.
Proof.
The post-intervention graph is obtained from G by removing all emitting edges of the intervened variables.
For Rule 1, the conditioning set blocks all paths from the inserted observations to the outcome variables, and all arrows emitting from the intervened variables have been deleted, so the corresponding conditional independence holds; by the definition of conditional independence:
For Rule 2, the conditioning set satisfies the backdoor criterion, so the required conditional independence holds; then,
For Rule 3, the outcome is not a descendant of the intervened variables, which means it cannot receive any information along their outgoing edges, and the claim is then obvious. ∎
d-separation is a convenient graphical rule for inducing conditional independence; we now point out that it only yields a sufficient adjustment set for controlling, not a necessary one. Consider a simple example.
Example 5.2.
For an SCM with a treatment, confounders, and an outcome, see Figure 4. The structural equations are:
Obviously, the confounders open a backdoor path, which suggests that we should control for them. Actually, we can have independence between the treatment and the outcome without conditioning on the confounders, for some special functions and probability measure on the latent space. In particular, for any two independent positive-probability events A and B in the information field generated by the confounders, let the treatment be the indicator of A and the outcome the indicator of B; then the treatment and the outcome are independent.
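The construction can be checked by enumeration; the following Python sketch uses a hypothetical confounder consisting of two independent fair bits (the concrete functions are invented for illustration, since the example's equations are not restated here).

```python
from itertools import product

# Hypothetical confounder Z = (bit1, bit2), each value with probability 1/4.
# Take A = {first bit is 1} and B = {second bit is 1}: A and B are
# independent events in the information field generated by Z.
outcomes = list(product((0, 1), repeat=2))   # the four values of Z

def X(z): return z[0]   # treatment = indicator of A, a function of Z
def Y(z): return z[1]   # outcome   = indicator of B, a function of Z

def prob(pred):
    return sum(1 for z in outcomes if pred(z)) / len(outcomes)

p_x = prob(lambda z: X(z) == 1)
p_y = prob(lambda z: Y(z) == 1)
p_xy = prob(lambda z: X(z) == 1 and Y(z) == 1)

# X and Y are both functions of the common cause Z, yet independent
assert abs(p_xy - p_x * p_y) < 1e-12
```

Both variables are deterministic functions of the common cause Z, yet they are marginally independent, so controlling for Z is sufficient but not necessary here.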
6 Conclusion
The info intervention is proposed from an information view of causal systems. The potential outcome framework has been proved to be equivalent to the framework of causal diagrams. For an info intervention on an acyclic SCM, compared to the perfect intervention, we place the potential outcome inside the post-intervened causal graph; compared to the potential outcome framework, we have a causal graph for the post-intervened SCM, which allows us to induce the ignorability property by simple graphical criteria. The causal calculus for the info intervention still holds and is easier to understand.
Axiom 6.1 (Philosophy of info interpretation).
Our formalization of SCMs and our definition of the info intervention follow this philosophy:

A causal system sends out information from its root nodes,

A local causal mechanism starts processing the information it receives as soon as all of its inputs have been received.

Causal mechanisms hold no matter what happens (intervening or conditioning), while the input information of the mechanisms can change with additional environment information.
In summary, Pearl's perfect intervention is not perfect for several reasons; we offer the option of the info intervention to formulate causal semantics, which has several advantages.
References
 [1] Judea Pearl and Dana Mackenzie. The Book of Why: The New Science of Cause and Effect. 2018.
 [2] Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant Risk Minimization. July 2019.
 [3] Stephan Bongers, Jonas Peters, Bernhard Schölkopf, and Joris M. Mooij. Theoretical Aspects of Cyclic Structural Causal Models. November 2016.
 [4] Patrick Forré and Joris M. Mooij. Constraint-based Causal Discovery for Non-Linear Structural Causal Models with Cycles and Latent Confounders. 2018.
 [5] Patrick Forré and Joris M Mooij. Causal Calculus in the Presence of Cycles, Latent Confounders and Selection Bias. 2019.
 [6] Christina Heinze-Deml, Marloes H. Maathuis, and Nicolai Meinshausen. Causal Structure Learning. March 2018.
 [7] Gong Heyang. Generalized intervention. unpublished work, (October), 2019.
 [8] Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, and Eric P Xing. On unifying deep generative models. arXiv preprint arXiv:1706.00550, 2017.
 [9] Guido W. Imbens and Donald B. Rubin. Causal inference: For statistics, social, and biomedical sciences an introduction. 2015.
 [10] Judea Pearl. Causality: Models, Reasoning and Inference. 2009.
 [11] Judea Pearl. Causal and Counterfactual Inference. The Handbook of Rationality, (October):1–41, 2018.

 [12] Judea Pearl. The seven tools of causal inference, with reflections on machine learning. Communications of the ACM, 62(3):54–60, 2019.
 [13] Sanna Tyrväinen. Introduction to Causal Calculus. Technical report, 2017.