A growing body of literature is concerned with notions of accountability in security protocols, as in many scenarios, e.g., electronic voting, certified e-mail, online transactions, or when personal data is processed within a company, not all agents can be trusted to behave according to some established protocol. Thus, in order to specify accountability, i.e., the ability of a protocol to detect misbehaviour, but also to reason about test coverage [Chockler:2008:CSS:1352582.1352588] or explain attacks [Beer2009], the information security domain needs a reliable notion of what it means for a protocol event to cause a security violation in a given scenario. The security domain has proposed causal notions specifically for network traces, which, in contrast to traditional notions of causality, capture actions sufficient to cause an event and put a focus on control flow. In this work, we investigate these ideas in a more general setting, generalising, improving and validating them to provide a sound basis for causal reasoning in protocols and beyond.
The problem we investigate is called actual causation, as opposed to type causation, which aims at deriving general statements, e.g., “smoking causes cancer”, not linked to a specific scenario. Starting from Lewis’ ‘closest-world concept’, philosophers have largely accepted counter-factual reasoning, i.e., investigating causal claims by regarding hypothetical scenarios of the form ‘had A not occurred, B would not have occurred’, as a means to determine actual causation. Pearl’s causality framework provides a basis for such reasoning.
So far, control flow has largely been ignored in causal reasoning. This is not very surprising, considering that the causation literature typically treats real-life examples inspired by criminal law, where control flow is simply not a well-defined notion. Yet precisely those scenarios where the order of events is relevant, e.g., where an event might prevent another event from happening, turn out to be notoriously difficult to explain using counterfactual reasoning. Once we consider each potential course of events as a control-flow path consisting of events that may enable or prevent each other, they become easy to handle.
To accommodate preemption, Pearl and Halpern’s very influential notion of causation has been modified several times, complemented with secondary notions, and ad-hoc modifications have been proposed [15, p. 26]. Neither of these solutions provides a satisfying answer as to how these examples should be handled in general.
In this work, we provide an account of the relation between control flow and causation. This gives us the means to adequately capture preemption in cases where we can speak of control flow. We propose that control flow variables should be modelled explicitly, as they can capture the course of events that lead to a certain outcome. Once they are made explicit, we can capture these difficult examples and provide a notion of actual causation that is simple and intuitive, captures joint as well as independent causes, gets by without secondary notions of normality and defaults and readily applies to Pearl’s causality framework. Our contributions are the following:
We explain how control flow should be incorporated in causal models and propose a formalism that makes control flow explicit.
We relate control flow to structural contingencies introduced by Halpern and Pearl, providing evidence that what this notion achieves is very similar to a simple fixing of control flow variables, and that it provides unintuitive results when applied outside control flow.
We write $\vec{e}$ for a sequence $e_1, \ldots, e_n$ if $n$ is clear from the context and use $\cdot$ to denote concatenation. We filter a sequence $\vec{e}$ by a set $E$, denoted $\vec{e}|_E$, by removing each element that is not in $E$.
Causality framework (Review)
We review the causality framework introduced by Pearl, also known as the structural equations model. The causality framework models how random variables influence each other. The set of random variables, which we assume discrete, is partitioned into a set $\mathcal{U}$ of exogenous variables, variables that are outside the model, e.g., in the case of a security protocol, the scheduling and the attack the adversary decides to mount, and a set $\mathcal{V}$ of endogenous variables, which are ultimately determined by the values of the exogenous variables. A signature is a triple $(\mathcal{U}, \mathcal{V}, \mathcal{R})$ consisting of $\mathcal{U}$, $\mathcal{V}$ and a function $\mathcal{R}$ associating a range, i.e., a set $\mathcal{R}(Y)$, to each variable $Y \in \mathcal{U} \cup \mathcal{V}$. A causal model on this signature defines the relation between endogenous variables and exogenous variables or other endogenous variables in terms of a set of equations.
Definition 1 (Causal model).
A causal model $M$ over a signature $\mathcal{S} = (\mathcal{U}, \mathcal{V}, \mathcal{R})$ is a pair $(\mathcal{S}, \mathcal{F})$ of said signature and a set $\mathcal{F} = \{F_X\}_{X \in \mathcal{V}}$ of functions such that, for each $X \in \mathcal{V}$, $F_X$ maps the values of all variables in $\mathcal{U} \cup \mathcal{V} \setminus \{X\}$ to a value in $\mathcal{R}(X)$.
Each causal model induces a causal network, a graph with a node for each variable in $\mathcal{U} \cup \mathcal{V}$, and an edge from $X$ to $Y$ iff $F_Y$ depends on $X$. ($F_Y$ depends on $X$ iff there is a setting for the variables in $(\mathcal{U} \cup \mathcal{V}) \setminus \{X, Y\}$ such that modifying $X$ changes the value of $F_Y$.) If the causal graph associated to a causal model is acyclic, then each setting of the variables in $\mathcal{U}$ provides a unique solution to the equations in $\mathcal{F}$. Throughout this paper, we only consider causal models that have this property. We call a vector $\vec{u}$ setting the variables in $\mathcal{U}$ a context, and a pair $(M, \vec{u})$ of a causal model and a context a situation. All modern definitions of causality follow a counterfactual approach, which requires answering ‘what if’ questions. It is not always possible to do this by observing actual outcomes. Consider the following example.
Example 1 (Wet ground).
The ground in front of Charlie’s house is slippery when wet. Not only does it become wet when it rains; if the neighbour’s sprinkler is turned on, the ground gets wet, too. The neighbour turns on the sprinkler unless it rains. Let the exogenous variable $R$ be 1 if it rains, and 0 otherwise, and consider the following equations for a causal model on $\mathcal{U} = \{R\}$ and endogenous variables $S$, $W$ and $F$ with range $\{0, 1\}$.
$S = 1 - R$ (The sprinkler is on if it does not rain.)
$W = \max(S, R)$ (The sprinkler or the rain wets the ground.)
$F = W$ (Charlie falls when the ground is slippery.)
Clearly, the ground being wet is a cause of Charlie’s falling, but we cannot argue counterfactually, because the counterfactual case never actually occurs: the ground is always wet. We need to intervene on the causal model.
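The need for intervention can be illustrated by evaluating the structural equations directly. The following sketch encodes Example 1; the variable names (`S`, `W`, `F`) and the dictionary-based intervention mechanism are our own, chosen only for illustration.

```python
# A sketch of Example 1: endogenous variables S (sprinkler), W (wet
# ground) and F (fall) are determined by the exogenous variable rain.
def solve(rain, interventions=None):
    """Evaluate the structural equations, optionally overriding
    ("intervening on") some endogenous variables."""
    iv = interventions or {}
    sprinkler = iv.get("S", 1 - rain)              # on iff it does not rain
    wet       = iv.get("W", max(sprinkler, rain))  # sprinkler or rain wets it
    fall      = iv.get("F", wet)                   # Charlie falls iff wet
    return {"S": sprinkler, "W": wet, "F": fall}

# The ground is wet in every context, so the counterfactual "dry ground"
# never occurs naturally; the intervention W <- 0 makes it observable.
assert solve(rain=0)["W"] == 1 and solve(rain=1)["W"] == 1
assert solve(rain=0, interventions={"W": 0})["F"] == 0
```

The asserts make the point of the example concrete: without an intervention, no context yields a dry ground.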
Definition 2 (Modified causal model).
Given a causal model $M = (\mathcal{S}, \mathcal{F})$, a vector $\vec{X}$ of endogenous variables and a vector $\vec{x}$ of values for them, we define the modified causal model $M_{\vec{X} \leftarrow \vec{x}}$ over the signature $\mathcal{S}$ by replacing each function $F_X$ for $X$ in $\vec{X}$ with the corresponding constant function $x$, obtaining $\mathcal{F}_{\vec{X} \leftarrow \vec{x}}$. Then, $M_{\vec{X} \leftarrow \vec{x}} = (\mathcal{S}, \mathcal{F}_{\vec{X} \leftarrow \vec{x}})$.
We can now define how to evaluate queries on causal models w.r.t. interventions on a vector of variables, which allows us to answer ‘what if’ questions.
Definition 3 (Causal formula).
A causal formula has the form $[Y_1 \leftarrow y_1, \ldots, Y_k \leftarrow y_k]\varphi$ (abbreviated $[\vec{Y} \leftarrow \vec{y}]\varphi$), where
$\varphi$ is a boolean combination of primitive events, i.e., formulas of the form $X = x$ for $X \in \mathcal{V}$, $x \in \mathcal{R}(X)$.
We write $(M, \vec{u}) \models [\vec{Y} \leftarrow \vec{y}]\varphi$ ($[\vec{Y} \leftarrow \vec{y}]\varphi$ is true in a causal model $M$ given context $\vec{u}$) if the (unique) solution to the equations in $M_{\vec{Y} \leftarrow \vec{y}}$ in the context $\vec{u}$ satisfies $\varphi$.
We define a causality notion based on sufficiency. In order to allow for comparison with existing notions of causation, we chose to formulate this notion in Pearl’s causation framework, as opposed to formalisms that already incorporate temporality, e.g., Kripke structures, which would obscure this comparison. This simplistic notion of causality, which we will later extend to models with explicit control flow, captures the causal variables that, by themselves, guarantee the outcome.
Definition 4 (Sufficient cause).
$\vec{X} = \vec{x}$ is a sufficient cause of $\varphi$ in $(M, \vec{u})$ if the following three conditions hold.
SF1: $(M, \vec{u}) \models \vec{X} = \vec{x} \wedge \varphi$.
SF2: For all contexts $\vec{u}'$, $(M, \vec{u}') \models [\vec{X} \leftarrow \vec{x}]\varphi$.
SF3: $\vec{X} = \vec{x}$ is minimal: no strict subset of $\vec{X}$ satisfies SF1 and SF2.
We say $\vec{X}$ is a sufficient cause for $\varphi$ if this is the case for some $\vec{x}$. Any non-empty subset of $\vec{X} = \vec{x}$ is part of the sufficient cause $\vec{X} = \vec{x}$.
Sufficient causes are well-suited for establishing joint causation, i.e., several factors that independently would not cause an injury, but do so in combination. Consider the following scenario:
Example 2 (Forest fire, conjunctive).
Person $A$ drops a canister full of gasoline in the forest, which soaks a tree. An hour later, $B$ smokes a cigarette next to that tree. $A$ and $B$ have joint responsibility for the resulting forest fire ($FF = 1$). The above definition yields $\{A = 1, B = 1\}$ as the sufficient cause.
In contrast, the traditional but-for test (or condicio sine qua non) formulates a necessary condition. $A = 1$ and $B = 1$ on their own are necessary causes, despite the fact that the fire was jointly caused. Sufficient causation can distinguish joint causation, like in this case, from independent causation, like in the case where the forest was dry and both $A$ and $B$ dropped cigarettes independently.
Remark: Similar to how the set of necessary causes for any event $\varphi$ contains the trivial singleton cause $\varphi$ itself, $\varphi$ is also a part of each sufficient cause for $\varphi$. Hence, it can be filtered out for most purposes.
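For small, finite models, Definition 4 can be checked by brute force. The following sketch is our own illustration, not the paper’s algorithm: `solve(u, interventions)` evaluates the equations, `phi` is the outcome predicate, and candidates are enumerated by ascending size so that minimality (SF3) is enforced by skipping supersets of known causes. The effect variable itself is omitted from the candidates, as the remark above suggests.

```python
from itertools import combinations, product

def minimal_sufficient_causes(variables, contexts, solve, phi, actual_u):
    """Brute-force the minimal sufficient causes of Definition 4."""
    actual = solve(actual_u, {})
    assert phi(actual), "SF1: the outcome must actually hold"
    def sf2(xs):  # fixing xs to actual values forces phi in every context
        fix = {v: actual[v] for v in xs}
        return all(phi(solve(u, fix)) for u in contexts)
    causes = []
    for r in range(1, len(variables) + 1):   # ascending size => minimality
        for xs in combinations(variables, r):
            if sf2(xs) and not any(set(c) <= set(xs) for c in causes):
                causes.append(xs)
    return causes

# Conjunctive forest fire (Example 2): FF = A and B.
def ff(u, iv):
    a, b = iv.get("A", u[0]), iv.get("B", u[1])
    return {"A": a, "B": b, "FF": a * b}

contexts = list(product([0, 1], repeat=2))
assert minimal_sufficient_causes(["A", "B"], contexts, ff,
                                 lambda v: v["FF"] == 1, (1, 1)) == [("A", "B")]
```

Neither `A` nor `B` alone passes SF2 (a context where the other is 0 defeats it), so the joint cause is the only minimal sufficient cause, matching the example.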
Modelling control flow
In this section, we discuss how control flow should be incorporated into causal models. Consider the famous late preemption example, where Suzy and Billy both throw stones at a bottle. Suzy’s stone hits the bottle first and shatters it, while Billy would have hit, had the bottle still been there [12, Example 3.2]. We will discuss several models of this example.
Example 3 (Control flow in equations).
Exogenous variables $ST$ and $BT$ are 1 if Suzy, respectively Billy, throws. The endogenous variable $BS$ is 1 (bottle shatters) iff $ST = 1$ or $BT = 1$.
Pearl and Halpern propose a slightly modified model of this situation which captures the relationship between the bottle being shattered and the bottle being hittable by Billy explicitly. However, the relationship between the two is fixed in the model. Datta, Garg, Kaynar and Sharma observe this and therefore introduce exogenous variables to determine which stone reaches the bottle first depending on the context.
Example 4 (Control flow in context).
Exogenous variables $ST$ and $BT$ are 1 if Suzy, respectively Billy, throws, and the exogenous variable $SF$ is 1 if Suzy’s throw reaches the bottle first. Then $BS$ is $ST$ if $SF = 1$, and $BT$ otherwise.
Pearl and Halpern’s solution can be transferred to the case where the order is not fixed a priori by explicitly representing the temporal order in distinguished variables.
Example 5 (Control flow in variables).
Exogenous variables $u_1$ to $u_n$ with range $\{S, B, N\}$ model whether Suzy ($S$), Billy ($B$) or no-one ($N$) throws a stone at point $i$. Endogenous variable $a_i$ is 1 if the bottle has not been hit before point $i$, i.e., if $a_{i-1} = 1$ and $u_{i-1} = N$ (with $a_1 = 1$). $BS = 1$ if $u_i \neq N$ and $a_i = 1$ for any $i$.
$a_i$ models whether the bottle is available for hitting at point $i$, similar to the concept of control flow variables in programming. In programming, control flow is the order in which statements are evaluated. Interpreting the above model as a program, $a_i$ controls whether $BS$ is assigned 1, similar to nested if-statements surrounding an assignment.
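The program reading of Example 5 can be made literal. The following sketch is a possible concretisation (the function name and encoding are our own): the loop variable `intact` plays the role of the availability variables, and a later throw is preempted once the bottle is gone.

```python
def shatter(throws):
    """throws: who acts at each point, 'S'uzy, 'B'illy or 'N'obody.
    Returns who shattered the bottle, or None."""
    intact = 1            # control flow variable: bottle still available
    shattered_by = None
    for who in throws:
        if intact and who != 'N':
            shattered_by = who
            intact = 0    # later throws are preempted
    return shattered_by

# Suzy throws first, Billy second: Suzy's stone shatters the bottle.
assert shatter(['S', 'B']) == 'S'
# Had Suzy not thrown, Billy's stone would have shattered it.
assert shatter(['N', 'B']) == 'B'
```

The nested-if structure of the model corresponds exactly to the `if intact` guard around the assignment.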
Control flow should be modelled within variables.
Among these three solutions (Examples 3, 4 and 5), only the second and third are able to capture preemption, as preemption is about the temporal relation between events, and control flow captures just that. Without examining the equations themselves, it is not possible to distinguish this order, as Example 3 demonstrates. The second solution is not always sufficient, since control flow is often determined by data flow, e.g., when branching on a variable. Hence, preemption can be accounted for by modelling control flow and data flow separately, at least in scenarios where these concepts are meaningful, e.g., in programs. Even for distributed systems, once the scheduler is made explicit, the system can be adequately modelled as a program and hence control flow can be distinguished. In the following, we will demonstrate this point on a number of other examples (which admittedly have little to do with programs, but we want to stay close to the literature; they can be recast easily: imagine, e.g., Suzy’s and Billy’s stones being illegal instructions in a message queue). Note that there might be ways of modelling preemption which do not fall into any of these three classes; however, all examples we are aware of follow one of these three paradigms. We formalise our assumptions on control flow as follows.
Definition 5 (Causal model with control flow).
We say an acyclic causal model $M$ over a signature $(\mathcal{U}, \mathcal{V}, \mathcal{R})$ has control flow variables $\mathcal{C}$ if
$\mathcal{V}$ can be partitioned into $\mathcal{C}$ and a set of data variables $\mathcal{D}$,
$\mathcal{R}(C) = \{0, 1\}$ for any $C \in \mathcal{C}$, and,
for the subgraph $G_\mathcal{C}$ of $M$’s causal network containing all nodes in $\mathcal{C}$ and all edges between them, and for all contexts and data variable assignments, if all parents of a node are set to 0, this node is also set to 0, i.e., if $C' = 0$ for each parent $C'$ of $C$ in $G_\mathcal{C}$, then $C = 0$.
The active variables in $\mathcal{C}$, i.e., those equal to 1, represent the current control flow. Data flow variables are assigned depending on which nodes in $\mathcal{C}$ are active.
For instance, if we consider structured control flow, $G_\mathcal{C}$ is connected and thus (because $M$ is acyclic) a tree. Furthermore, the set of active variables always comprises a path. The relationship to non-concurrent imperative programming languages becomes clearer if we express each function $F_X$ for a data variable $X$ as an assignment guarded by the current control flow position,
where $n$ is the current position, i.e., the deepest element of the path of active variables, $n'$ the parent of $n$, and the guarded expressions depend only on variables in $\mathcal{D}$. Essentially, at each control-flow position, $F_X$ may assign a value to $X$, or otherwise the parent assignment remains in effect.
Intervention on control flow variables.
For most (but not all) purposes, we are not interested in control flow variables as parts of sufficient causes. We could filter them out, i.e., if $\vec{X}$ is a sufficient cause, report $\vec{X} \setminus \mathcal{C}$. Instead, we chose to achieve this by restricting intervention to the data variables $\mathcal{D}$. Both approaches model different expectations on the system. We want to assume the control flow to be (locally) determined and not consider failure in, e.g., conditional branching. The same assumption is made by Datta, Garg, Kaynar, Sharma and Sinha (DGKSS) and Beckers.
Upon close inspection, this is similar to what DGKSS’s definition of causal traces achieves. The process calculus they propose implicitly restricts intervention to the data flow. On the other hand, it does not support branching, so it is difficult to assess the implications within this calculus. This models different expectations on the system: do we want to assume the control flow to be (locally) determined, or are we considering failure in, e.g., conditional branching? If we never assume failure, we can gather fewer causes, but all of them correspond to a sufficient cause without this assumption.
Altering condition SF2 in Definition 4, we can model restricted intervention as
SF2': For all contexts $\vec{u}'$, $(M, \vec{u}') \models [\vec{X} \leftarrow \vec{x}]\varphi$, with $\vec{X} \subseteq \mathcal{D}$.
All (minimal) sufficient causes with restriction subsume a (minimal) sufficient cause without restriction.
SF2’ is equivalent to SF2 holding under an additional intervention fixing some minimal set of control flow nodes. Given that the cause is minimal w.r.t. SF2’, this sufficient cause is also minimal w.r.t. SF2: any strict subset satisfying SF2 would, by the same intervention, also satisfy SF2’, contradicting the minimality w.r.t. SF2’. ∎
The converse does not hold; consider, e.g., Halpern’s extension of the conjunctive forest fire example [12, Example 3.6], where three different mechanisms can produce the fire, depending on the context. In this case, the possibility that all of these mechanisms fail leads to a third cause containing both events (see Table 1, Forest fire, disjunctive, ext.).
In summary, forbidding intervention on control flow variables may lose some granularity in the distinction of causes, but is often appropriate when the control flow is assumed to be infallible. Notice, however, that none of these notions captures the fact that whatever mechanism produces the fire in the remaining cases was not related to its actual coming about. This will be the subject of the next two sections.
What is control flow outside computer programs?
For a computer program written in an imperative language, control flow is widely understood to be the order in which statements are executed, e.g., a sequence of line numbers. This definition can be easily extended to distributed systems; however, many examples in the causality literature discuss human agents in physical interaction. Our main objective is a reliable notion of causality for distributed systems, but we also want it to be grounded in the existing body of work on causality. To this end, we make the modelling principles we adhered to explicit. We neither claim that these modelling principles are universal, nor that control flow is a notion that can be defined in all scenarios where causality applies. They applied, however, to the 34 examples we found in the literature, as we will see.
We assume the modeller has an intuition of how a variable assignment translates to an (intuitive) ‘course of events’, and when a ‘course of events’ should be considered equivalent to another. As discussed in the previous paragraph, we restrict intervention to data flow variables. Hence, we consider only variable assignments that result from an intervention on the data flow variables, but not control flow variables, i.e., those that are the unique solution for some context and some intervention on data variables. For brevity, we call these assignments valid.
1. Each control flow variable should correspond to a relevant event, and vice versa.
2. For every valid assignment, a control flow variable should be 1 if the corresponding event occurred.
3. For every valid assignment, a control flow variable should be 0 if the corresponding event did not, or did not yet, occur.
4. A control flow variable should be a parent of another control flow variable (in $G_\mathcal{C}$) iff the occurrence of the event corresponding to the child depends on the occurrence of the event corresponding to the parent.
Example 6 (Early preemption).
Victoria’s coffee is poisoned by her bodyguard, but before the poison takes effect, she is shot by an assassin. She dies.
Following modelling principles 1 and 2, we introduce at least the control flow variables $P$, $S$ and $E$, which, if set to 1, represent the events ‘Victoria is poisoned’, ‘Victoria is shot’ and ‘the poison takes effect’, respectively. The poison can only take effect if Victoria was not shot, but modelling principle 3 dictates that $S = 0$ only means that she was not yet shot, i.e., it might just be that not enough time has passed, but she will eventually get shot before the poison takes effect. We thus need to introduce a fourth control-flow variable, ‘Victoria was not shot during the time the poison needs to take effect’, represented by $NS$. While $S$ and $NS$ are mutually exclusive and all valid assignments result in one being the negation of the other, we will later consider the coming about of a course of events, which includes counterfactual scenarios where a course of events can be incomplete. Thus, it is possible that, in a counterfactual scenario, $S = NS = 0$, which intuitively means that not enough time has passed for the poison to take effect.
Following modelling principle 4, $E$ is a child of both $P$ and $NS$. The poison only takes effect ($E = 1$) if it was administered ($P = 1$) and some time has passed without Victoria getting shot ($NS = 1$). Following the same principle, $NS$ is not a child of $S$, as $S = 0$ could mean that Victoria is not getting shot at all, but also that she is not yet getting shot. Hence, $NS = 1 - S$ would be incorrect, as the poison may or may not take effect due to the shot occurring later. Note that the control-flow graph is not linear in this case, representing the independence of the poisoning and the shooting.
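A possible concretisation of this model, with our own names and encoding, shows how the ‘not shot in time’ control flow variable gates the poison’s effect:

```python
# Sketch of Example 6: the poison takes effect only if it was
# administered and the shot did not preempt it.
def dies(poisoned, shot):
    not_shot_in_time = not shot                    # control flow: no preemption
    poison_effective = poisoned and not_shot_in_time
    return 1 if (shot or poison_effective) else 0

assert dies(poisoned=1, shot=1) == 1   # dies of the shot (preemption)
assert dies(poisoned=1, shot=0) == 1   # poison takes effect
assert dies(poisoned=0, shot=0) == 0   # survives
```

In the actual course of events both branches lead to death, which is exactly why the counterfactual analysis needs the control flow to tell the two mechanisms apart.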
Preserving control flow
We present further examples that are problematic for existing definitions of cause in the literature, followed by a new definition of cause (using control flow) that handles all these examples and many others. Consider the following example known as ‘bogus prevention’ [17, 19].
Example 7 (Bogus prevention, branching).
An assassin has a change of heart and refrains from putting poison into the victim’s coffee. Later, the bodyguard puts antidote into the coffee. Is putting the antidote into the coffee the reason the victim survives? The following equations describe a causal model with control flow variables.
Here, two control flow variables model the control flow after a conditional checking whether the poison was administered. In the positive branch, the first is set, and only there can the second be reached, namely if the antidote was given and thus the poison neutralized. The survival variable is set to 1 once the poison was neutralized or if it was not administered.
This example is known to be problematic for the counterfactual approach to causation. For instance, in Halpern’s modelling [12, Example 3.4], the bodyguard putting in antidote is part of a cause. Similarly, it is part of a sufficient cause when intervention is not restricted, or, if intervention is restricted, the absence of the poison is not part of the sufficient cause anymore. Together with Hitchcock, Halpern argues that this cause could be removed by considering normality conditions. Blanchard and Schaffer criticise ‘under-constrained unclarities’ of this approach, calling theorists to ‘pay more attention to what counts as an apt causal model [..] before adding more widgets into causal models’. (They also provide a more intuitive account of bogus prevention, but unfortunately, it only applies to Hitchcock’s definition, which has other shortcomings [16, Example 4.4].) Putting it simply: it is often unclear what is normal. Halpern and Hitchcock [15, p. 26] provide an ad-hoc solution by adding a variable representing the chemical reaction neutralizing the poison, essentially introducing a control flow variable which is true if the control flow reached a point where the poison was previously administered and the bodyguard pours antidote into the coffee.
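The branching structure of Example 7 can be sketched as follows; variable names and the encoding are our own:

```python
# Sketch of Example 7: the neutralisation point is reachable only
# inside the branch where the poison was actually administered.
def survives(poison, antidote):
    in_poison_branch = poison                      # control flow: positive branch
    neutralized = in_poison_branch and antidote    # reachable only in that branch
    return 1 if (neutralized or not poison) else 0

# Actually the poison is never administered: the victim survives whether
# or not the antidote is poured, so on the actual control flow path the
# antidote is irrelevant.
assert survives(poison=0, antidote=1) == 1
assert survives(poison=0, antidote=0) == 1
assert survives(poison=1, antidote=0) == 0
assert survives(poison=1, antidote=1) == 1
```

The counterfactual in which the antidote matters requires switching to the other branch, which the actual control flow never entered.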
Intuitively, the antidote should not be considered a cause of the victim’s surviving because it was irrelevant within the actual course of events. We can capture this through the actual values of the control flow variables. DGKSS and Beckers require interventions to be consistent with the actual temporal order in which events occurred. Translated to our setting, this is akin to fixing every control flow variable to its actual value. But often, the coming about of the actual course of action is as important as the course of action itself.
Example 8 (Agreement).
$A$ and $B$ vote. If their votes coincide, an agreement is reached (signified by a control flow variable), and the outcome is announced (it equals the common vote if agreement was reached, and is undefined otherwise). $A$ and $B$ agree on 1 in actuality.
If the control flow is fixed, then $A$’s vote by itself is a sufficient cause of the outcome, despite the fact that $A$ and $B$ need to agree to produce any outcome, as $B$’s vote needs to be set to the same value in order to preserve the agreement. This illustrates that the actual control flow should not be presumed a priori to the cause. On the other hand, if the outcome is established early on (and monotone in time, e.g., violations of safety properties like weak secrecy and authentication), there is no need to find causes for the control flow after the outcome occurred.
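A sketch of Example 8 (names are our own) makes the problem visible: if agreement is presumed, one vote looks sufficient; without that presumption, the intervention on the other vote disables the announcement entirely.

```python
# Sketch of Example 8: an outcome is announced only if the votes agree.
def outcome(va, vb):
    agree = (va == vb)        # control flow variable: agreement reached
    return va if agree else None

# Both actually vote 1.  Intervening vb <- 0 shows that the agreement
# itself must come about: va alone does not guarantee any outcome.
assert outcome(1, 1) == 1
assert outcome(1, 0) is None
```

This is the behaviour a definition of cause should report: the two votes are jointly, not independently, responsible for the outcome.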
Thus we propose the following definition of sufficient cause, which forbids deviation from the actual control flow and is specific to causal models with control flow variables.
Definition 6 (Control flow preserving sufficient cause).
For a causal model with control flow variables $\mathcal{C}$, let $\vec{cf}$ be the actual control flow in context $\vec{u}$, i.e., the set of control nodes set to 1. Then, a control flow preserving sufficient cause (CFPSC) is defined like a sufficient cause (see Definition 4), but with SF2 modified as follows:
CFS2: For all contexts $\vec{u}'$, $(M, \vec{u}') \models [\vec{X} \leftarrow \vec{x}, (\mathcal{C} \setminus \vec{cf}) \leftarrow \vec{0}]\varphi$.
This definition handles Example 8 correctly: the joint vote of $A$ and $B$ is the only CFPSC, capturing the fact that the agreement needs to be reached in order for the outcome to be announced. $A$’s vote by itself is not a CFPSC, witnessed by setting $B$’s vote to the opposite value, which results in no outcome being announced.
For bogus prevention (Example 7), which was the motivation for Halpern and Hitchcock’s introduction of normality conditions and could previously, in its original formulation, only be treated by means of normality conditions, our approach provides a direct treatment. As all control flow nodes outside the actual control flow are fixed to zero, the antidote is not causally relevant. Intuitively, the point at which the antidote matters because the poison was administered is not available for any counterfactual. The actual control flow, in which the poison is not administered, guarantees the victim’s survival, but needs to be established by not administering the poison. Hence the absence of the poison is (the only) CFPSC.
Late preemption (Example 5 with control flow variables) is also handled correctly; here Suzy’s throw is the only CFPSC, i.e., Suzy’s throw caused the bottle to shatter, but not Billy’s. Because Suzy’s stone hits first, the bottle is no longer available afterwards and Billy’s throw has no bearing on the outcome.
Early preemption, where the victim is poisoned, but shot before the poison takes effect (Example 6), can be captured similarly, e.g., considering the model described by Hitchcock [19, p. 526], or our own adaptation (cf. Table 1). Intuitively, the actual control flow captures the course of events where the victim is shot but the poison has not yet taken effect. Setting its complement to 0 correctly disregards the control flow representing the poison taking effect due to the victim not being shot.
We first discuss related work on sufficient causation, which is the basis for CFPSC, then related work concerning control flow and finally link our insights on control flow to the notion of structural contingencies in the literature.
Going back to Lewis, most definitions of actual causation investigate claims of the form ‘had A not occurred, B would not have occurred’. The notions put forward by Pearl and Halpern [12, 16] follow this idea, which arguably captures a form of necessary causation. We elaborate this point at the end of this section.
By contrast, Datta, Garg, Kaynar, Sharma, and Sinha aim at capturing minimal sequences of protocol actions sufficient to provoke a violation, in order to provide a tool for forensics as well as a building block for accountability. The appeal of sufficient causation is that there is a clear interpretation of what it means for an event to be part of a sufficient cause: it is (jointly with the rest of the cause) causing the event. Notions of necessary causation typically lack this kind of interpretation and require secondary notions like blame to determine joint responsibility. Furthermore, in particular in the context of distributed systems and program analysis, it comes in handy for debugging and forensics that each sufficient cause basically captures a chain of events which, on its own, leads to an outcome.
However, necessary causes are more succinct and seem to better capture what is meant by causes in natural language. We suspect the latter is because natural language often uses “cause” to mean “part of a cause”, but weighing these two notions against each other is beyond the scope of this work and, moreover, depends on the application one has in mind. We are interested in the coming about of events, so we focus on sufficient causation.
DGKSS’s notion of causation is based on a source code transformation of non-branching programs within a simple process calculus. We therefore cast their idea of intervening on events that do not appear in the candidate cause within Pearl’s framework (see Definition 4). This methodology allows us to understand and generalize effects implicit in the definition of the calculus and translate them back. Sufficient causes are different from DGKSS’s cause traces in that the latter yield entire traces and that intervention is performed on code instead of variables. DGKSS’s calculus fixes the control flow to the actual control flow, since their cause traces contain the line number of the statement effectuating each event. Every intervention on these cause traces (which roughly corresponds to the intervention in SF2) needs to contain these line numbers in the same order and is disregarded otherwise. Example 8 was the motivation to depart from this and capture the coming about of the actual control flow.
Besides DGKSS’s work, there is only little work on sufficient causation formalizing which events are jointly sufficient to cause an outcome. Going back to early attempts at formulating actual causation in a purely logical framework, Halpern formulates the NESS test within Pearl’s causality framework. As with the NESS test, this notion only allows for singular causal judgements ([11, Theorem 5.3]) and is thus not suited for capturing joint causation. Halpern also proposed a notion of sufficient causes which requires a sufficient cause to be a) part of all actual causes (hence inducing three variants of the definition, one for each notion of actual causation put forward) and b) to ensure the outcome for all contexts. If contingencies are restricted to control flow variables and causes to data flow variables, the first condition is very similar to CFPSC. By way of the second condition, one avoids that the outcome appears in all sufficient causes (as it is a trivial actual cause of itself); however, this condition prohibits capturing joint causation in cases where rare external circumstances (e.g., strong wind making lighting the cigarette in Example 2 impossible) could have prevented the outcome altogether, although they actually did not. For distributed systems, this is almost always the case due to possible loss of messages in transition, hence we consider this criterion too strong for these purposes.
Causation and control flow
Instead of control flow, Beckers extends structural equations with time [3, Part II]. His proposal for actual causation involves a notion very similar to the NESS criterion. The notion derived from it does not capture joint causes, but captures many preemption examples.
Sharma’s thesis extends DGKSS’s model with some control flow: a choice operator can capture choices the parties make, e.g., letting a party set a variable to either 0 or 1. As agents can still not branch, the previous discussion of DGKSS’s paper applies.
Besides Beckers and DGKSS, there are other formalisms for causal models that include control flow, but do not aim at capturing actual causality [8, 26]. We purposefully formulated control flow within Pearl’s causality framework, to compare with existing definitions and provide insight into how control flow can guide the modelling task and improve definitions.
Relation to structural contingencies in Halpern 2015
We review Halpern’s modification of Halpern and Pearl’s definition of actual causes: first, because it is well known; second, because it employs a secondary notion called structural contingencies, which appears to be related to control flow. We a) give evidence that contingencies relate to control flow wherever they are successful, b) show that they are problematic if data flow is involved, and c) provide an interpretation of structural contingencies in terms of control flow, supporting the argument that preemption is first a modelling problem, which should be addressed by distinguishing control flow from data flow, and only then a matter of the definition of a cause.
Definition 7 (Review: actual cause).
$\vec{X} = \vec{x}$ is an actual cause of $\varphi$ in $(M, \vec{u})$ if the following three conditions hold.
AC1, AC3: Just like SF1 and SF3.
AC2: There are $\vec{W} \subseteq \mathcal{V}$, $\vec{w}$ and $\vec{x}'$ such that $(M, \vec{u}) \models \vec{W} = \vec{w}$, and $(M, \vec{u}) \models [\vec{X} \leftarrow \vec{x}', \vec{W} \leftarrow \vec{w}]\neg\varphi$.
AC2 is a generalisation of the intuition behind Lewis’ counterfactual. The special case $\vec{W} = \emptyset$ corresponds to Lewis’ reasoning that $\vec{X} = \vec{x}$ is necessary for $\varphi$ to hold, because there exists a counterfactual setting $\vec{x}'$ that negates $\varphi$. AC2 weakens this condition in order to capture causes that are ‘masked’ by other events. The set of variables $\vec{W}$ is called a structural contingency, as it captures the aspects of the actual situation under which $\vec{X} = \vec{x}$ is a cause. Variables in $\vec{W}$ can only be fixed to their actual values.
We observe that in all examples in Halpern’s paper where non-empty contingencies appear (Suzy-Billy, Ex. 3.6 and Ex. 3.7 after adding variables), they exclusively contain variables added to the model to “describ[e] the mechanism that brings about the result”. These are precisely the variables we would consider control flow variables. The only exceptions are Examples 3.9b and 3.10, where Halpern points out that Definition 7 gives unintuitive results. Definition 6 captures Halpern’s intuition correctly (see Table 1, Train and Careful Poisoning).
While structural contingencies work well if they contain control flow variables, they can give unintuitive results if they contain data flow variables. Consider the following causal model inspired by the ‘Battle of Sexes’ two-player game.
Example 9 (Bach or Stravinsky).
A couple agreed on meeting this evening, but they cannot recall whether they wanted to attend a Bach or a Stravinsky concert. They are taking the train, but only if they both go to the same concert (captured by a control flow variable). Contrary to the awkward situation in the famous two-player game, they have left a note in their calendar, which helps them remember. Is the fact that the note reminds them to attend the Bach concert a cause for taking the train together?
Definition 7, as well as the definition preceding it, leads to an unintuitive result because variables that concern data flow are admitted as contingencies. No matter which concert the note names, the couple always takes the train. However, one partner’s choice can be used as a contingency, fixing it to its actual value; changing the note then makes the other partner deviate, the couple ends up at different concerts, and the train is not taken. Hence the content of the note is considered a cause for taking the train, although the train is taken no matter what the note says. It appears that the partners’ choices should not be permissible contingencies. Definition 6 handles this example correctly: the partners’ agreement to follow the note is the only CFPSC, as the control flow is preserved no matter what value the note has, as long as both agree on doing what it says (see Table 1, BoS). Our conclusion is that contingencies should be restricted to control flow variables to avoid such spurious causes.
Thus, if the use case allows for making control flow variables explicit, we can restrict structural contingencies to control flow variables and actual causes to data flow variables, and consider AC2’ as follows:
For $\vec{X}$ a set of data flow variables and $\vec{W}$ a subset of the actual control flow, there is $\vec{x}'$ such that $(M, \vec{u}) \models [\vec{X} \leftarrow \vec{x}', \vec{W} \leftarrow \vec{w}] \neg\varphi$.
Under these circumstances, we can relate actual causes and CFPSC by comparing AC2’ to CFS2. A priori, both are very different: AC2 (and AC2’) formulate a necessity criterion on $\vec{X}$ (if $\vec{X} = \vec{x}$ is an actual cause under contingency $\vec{W}$, then it is a necessary cause under that contingency), while CFS2 formulates a sufficiency criterion. Interestingly, there is a duality between necessary and sufficient causes, which we can use to compare the two w.r.t. their treatment of control flow: if the set of variables is finite, the set of all sufficient causes can be obtained from the set of all necessary causes by, first, considering them as a boolean formula in disjunctive normal form (DNF), second, computing the conjunctive normal form of this formula and, finally, switching $\wedge$ and $\vee$. For example, the conjunction of lightning and match is the only sufficient cause in the conjunctive forest fire example, and thus lightning and match are each necessary causes. We adapted this result to CFPSCs (see the Appendix for theorem and proof) and obtain a dual definition of control flow preserving necessary causes, where CFS2 translates to
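For monotone formulas, the DNF-to-CNF step with $\wedge$ and $\vee$ swapped amounts to computing minimal hitting sets of the given causes. A small sketch under that assumption (literal names are illustrative):

```python
from itertools import combinations

def dual_causes(causes):
    """Dualize a set of (sufficient or necessary) causes, each given as a
    set of literals: return the minimal sets hitting every cause. For
    monotone formulas this is DNF -> CNF with conjunction/disjunction swapped."""
    universe = sorted(set().union(*causes))
    minimal = []
    for r in range(1, len(universe) + 1):
        for cand in combinations(universe, r):
            c = set(cand)
            # keep candidates that intersect every cause and are not
            # supersets of an already-found (smaller) hitting set
            if all(c & term for term in causes) and not any(set(m) <= c for m in minimal):
                minimal.append(cand)
    return minimal

# Conjunctive forest fire: the only sufficient cause is lightning AND match,
# so lightning and match are each necessary on their own.
print(dual_causes([{"L=1", "M=1"}]))    # [('L=1',), ('M=1',)]

# Disjunctive forest fire: lightning and match are each sufficient,
# so only their combination is necessary.
print(dual_causes([{"L=1"}, {"M=1"}]))  # [('L=1', 'M=1')]
```

The two calls reproduce the litmus test mentioned below: the construction distinguishes the conjunctive from the disjunctive variant of the forest fire.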
For $\vec{X}$ a set of data flow variables and $\vec{W}$ the actual control flow, there is $\vec{x}'$ such that $(M, \vec{u}) \models [\vec{X} \leftarrow \vec{x}', \vec{W} \leftarrow \vec{w}] \neg\varphi$.
We can now compare AC2’ to CFN2 to get a clear picture of how actual causation handles control flow, once we incorporate the distinction between control flow variables and data flow variables as discussed above. AC2’ is much more liberal in how the counterfactual control flow can relate to the actual control flow. By choosing an arbitrary subset of all control flow variables, not only those set to 1 or 0, each counterfactual setting of a control flow variable may enforce the actual control flow (if it is fixed to 1), prevent counterfactual flows that contradict the actual course of events (if it is fixed to 0), or simply be computed from the equations (if it is not part of the subset). CFN2 is more rigid in this respect, strictly prohibiting the counterfactual control flow from deviating from the actual control flow, but leaving it otherwise free (motivated by Example 8). This explains, e.g., the difference in Weslake’s Careful Poisoning example (see Table 1), where the assassin only adds poison to the coffee if he is sure the antidote was added previously. The antidote is (wrongly, to most) considered an actual cause for the victim surviving, as the counterfactual where it is not administered can still consider the control flow where the assassin added the poison.
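The effect can be reproduced on a hypothetical encoding of Careful Poisoning (variable names are ours, not the paper’s): A is the antidote being administered, P the careful assassin adding poison, and S the victim surviving.

```python
# Hypothetical encoding of Weslake's Careful Poisoning example; names ours.

def evaluate(equations, order, interventions):
    """Solve an acyclic structural model, overriding intervened variables."""
    state = {}
    for v in order:
        state[v] = interventions[v] if v in interventions else equations[v](state)
    return state

equations = {
    "A": lambda s: 1,                               # antidote is administered
    "P": lambda s: s["A"],                          # assassin poisons only after the antidote
    "S": lambda s: int(s["P"] == 0 or s["A"] == 1), # victim survives unless poisoned without antidote
}
order = ["A", "P", "S"]
actual = evaluate(equations, order, {})             # A = P = S = 1

# But-for: without the antidote the careful assassin never poisons,
# so the victim survives either way.
print(evaluate(equations, order, {"A": 0})["S"])    # 1

# AC2' may keep the poisoning fixed to its actual course (P <- 1): now
# A <- 0 yields S = 0, certifying the antidote as a cause of survival.
# CFN2 forbids this deviating combination of events.
print(evaluate(equations, order, {"A": 0, "P": actual["P"]})["S"])  # 0
```

The second intervention is exactly the counterfactual that AC2’ admits and CFN2 rules out.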
We summarize: a restriction of structural contingencies to control flow seems to avoid unintuitive results in some cases, without losing accuracy on any of the examples we considered. Given this restriction, contingencies obtain an interpretation in terms of the relation between the actual control flow and the control flow considered in counterfactuals. In comparison to CFS2, this relation is much looser. The Careful Poisoning example suggests that it is too loose.
Our goal was to provide an account of the relation between control flow and causation; Definition 6 served this goal by demonstrating that an explicit treatment of control flow can help (and is sometimes even necessary) to treat cases where counterfactuals can change the course of events, e.g., in cases of preemption. We further validate our definition against a benchmark suite of 34 examples from the literature. To avoid cherry-picking, we include all examples from  and . The source code is available at https://github.com/rkunnema/causation-benchmark. We encourage other researchers to use this suite to test their own definitions of causality and to extend it with new examples.
We compare Definition 6 with Definition 4 for illustration, and with Halpern’s notion of actual causation (see Definition 7) as a point of reference. We omit notions like defaults and normality, as we want to stress that these are not necessary to deal with examples of preemption, and favour Halpern’s notion over prior variants, as it distinguishes between joint causation and over-determination. (The litmus test is the two variants of the Forest Fire example.)
We include the complete set of causes as a comma-separated list of sequences, as, e.g., for Careful Poisoning (ctl) it is important to see that the antidote being administered is not a CFPSC. Many examples in the literature do not capture control flow in their modelling, in which case we present results for both the original modelling and a modelling according to Definition 5. In all examples we get results that are satisfying according to the discussion in the literature we cite. (For Switch and Fancy Lamp, our Definition 6 captures that the respective events are not necessary for the outcome by giving a sufficient cause which does not include them.) For lack of space we refer to the cited literature for deeper explanation. The only examples we added (not counting adaptations to Definition 5) are Examples 8 and 9, which we explained already, and Vote 5:2, which contrasts the differing views sufficient causes and actual/necessary causes have to offer.
Conclusion and future work
In cases where control flow can be made explicit, it is worth doing so, as this helps to deal with problematic cases like preemption. We discussed in what way it should be taken into account, and proposed a blue-print for doing so, as well as a definition of causality that preserves the actual course of events. This definition is simple and intuitive, does not require secondary notions, captures joint causation and handles all 34 examples in our benchmark correctly (with respect to the respective authors’ views). Such a definition is useful for computer programs and distributed systems, as the temporal order of events between communicating agents can be captured by a non-deterministic scheduler simulating them. A translation from Petri nets, Kripke structures or process calculi to extended causal models that adheres to the modelling principles discussed in this work can be used to argue the soundness of causality notions formulated within these formalisms. In fact, we advocate this approach, as we believe that a thorough discussion of causality requires a common language.
Vice versa, the causality literature stands to benefit from such translations. Due to the generality of Pearl’s framework, modelling is more art than science, which raises concerns about the falsifiability of theories of causation. For bogus prevention and other difficult examples, e.g., late preemption, the ‘correct’ modelling has been debated again and again for each use case specifically. By transferring and analysing existing domain-specific definitions, e.g., DGKSS’s definition, in Pearl’s framework, we can find a common ground for the discussion of the ‘right’ modelling, and we encourage a similar treatment for other domains where causality is of interest. This way, experiences and observations in well-understood domains can be fed back to the general case, and modelling principles exposed that are otherwise easily overlooked when modelling abstract scenarios ad hoc.
-  Bowen Alpern & Fred B Schneider (1985): Defining liveness. Information processing letters 21(4), pp. 181–185, doi:10.1016/0020-0190(85)90056-0.
-  Sander Beckers (2016): Actual Causation: Definitions and Principles. Ph.D. thesis, Faculty of Engineering Science, Katholieke Universiteit Leuven. Available at https://lirias.kuleuven.be/handle/123456789/545701.
-  Giampaolo Bella & Lawrence C. Paulson (2006): Accountability Protocols: Formalized and Verified. ACM Trans. Inf. Syst. Secur. 9(2), pp. 138–161, doi:10.1145/1151414.1151416.
-  Thomas Blanchard & Jonathan Schaffer (2014): Cause without Default. In: Making a Difference, Oxford University Press, pp. 175–214, doi:10.1093/oso/9780198746911.003.0010.
-  Anupam Datta, Deepak Garg, Dilsun Kaynar & Divya Sharma (2016): Tracing Actual Causes (CMU-CyLab-16-004). Technical Report, Carnegie Mellon University.
-  Anupam Datta, Deepak Garg, Dilsun Kaynar, Divya Sharma & Arunesh Sinha (2015): Program actions as actual causes: A building block for accountability. In: 2015 IEEE 28th Computer Security Foundations Symposium, IEEE, pp. 261–275, doi:10.1109/CSF.2015.25.
-  Enrico Giunchiglia, Joohyung Lee, Vladimir Lifschitz, Norman McCain & Hudson Turner (2004): Nonmonotonic causal theories. Artificial Intelligence 153(1-2), pp. 49–104, doi:10.1016/j.artint.2002.12.001.
-  Ned Hall (2000): Causation and the Price of Transitivity. Journal of Philosophy 97(4), p. 198, doi:10.2307/2678390.
-  Ned Hall (2004): Two Concepts of Causation. In John Collins, Ned Hall & Laurie Paul, editors: Causation and Counterfactuals, MIT Press, pp. 225–276, doi:10.7551/mitpress/1752.003.0010.
-  Joseph Y. Halpern (2008): Defaults and Normality in Causal Structures. In Gerhard Brewka & Jérôme Lang, editors: Principles of Knowledge Representation and Reasoning: Proceedings of the Eleventh International Conference, KR 2008, Sydney, Australia, September 16-19, 2008, AAAI Press, pp. 198–208. Available at http://www.aaai.org/Library/KR/2008/kr08-020.php.
-  Joseph Y. Halpern (2015): A Modification of the Halpern-Pearl Definition of Causality. In Qiang Yang & Michael Wooldridge, editors: Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, AAAI Press, pp. 3022–3033. Available at http://ijcai.org/Abstract/15/427.
-  Joseph Y. Halpern (2016): Actual Causality. The MIT Press, doi:10.7551/mitpress/10809.001.0001.
-  Joseph Y. Halpern & Christopher Hitchcock (2011): Actual causation and the art of modeling. CoRR abs/1106.2652. Available at http://arxiv.org/abs/1106.2652.
-  Joseph Y. Halpern & Christopher Hitchcock (2013): Graded Causation and Defaults. CoRR abs/1309.1226. Available at http://arxiv.org/abs/1309.1226.
-  Joseph Y. Halpern & Judea Pearl (2013): Causes and Explanations: A Structural-Model Approach — Part 1: Causes. CoRR abs/1301.2275. Available at http://arxiv.org/abs/1301.2275.
-  Eric Hiddleston (2005): Causal powers. British Journal for the Philosophy of Science 56(1), pp. 27–59, doi:10.1093/phisci/axi102.
-  Christopher Hitchcock (2001): The Intransitivity of Causation Revealed in Equations and Graphs. Journal of Philosophy 98(6), pp. 273–299, doi:10.2307/2678432.
-  Christopher Hitchcock (2007): Prevention, Preemption, and the Principle of Sufficient Reason. Philosophical Review 116(4), pp. 495–532, doi:10.1215/00318108-2007-012.
-  Mark Hopkins & Judea Pearl (2003): Clarifying the usage of structural models for commonsense causal reasoning. In: Proceedings of the AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning, AAAI Press Menlo Park, CA, pp. 83–89.
-  Robert Künnemann (2017): Sufficient and necessary causation are dual. CoRR abs/1710.09102. Available at http://arxiv.org/abs/1710.09102.
-  David Lewis (1973): Causation. Journal of Philosophy 70(17), pp. 556–567, doi:10.2307/2025310.
-  J. L. Mackie (1965): Causes and Conditions. American Philosophical Quarterly 2(4), pp. 245–264.
-  Judea Pearl (2000): Causality: Models, Reasoning, and Inference. Cambridge University Press, New York, NY, USA.
-  Divya Sharma (2015): Interaction-aware Actual Causation: A Building Block for Accountability in Security Protocols. Ph.D. thesis, CyLab, Carnegie Mellon University.
-  Hudson Turner (1999): A logic of universal causation. Artificial Intelligence 113(1-2), pp. 87–123, doi:10.1016/S0004-3702(99)00058-2.
-  Brad Weslake (2015): A partial theory of actual causation. British Journal for the Philosophy of Science.
-  Richard W. Wright (1988): Causation, responsibility, risk, probability, naked statistics, and proof: Pruning the bramble bush by clarifying the concepts. Iowa L. Rev. 73, p. 1001.
Appendix A: Duality between CFS2 and CFN2
Fix some finite set and some ordering and let denote the following representation of relative to