# Counterfactual Conditionals in Quantified Modal Logic

We present a novel formalization of counterfactual conditionals in a quantified modal logic. Counterfactual conditionals play a vital role in ethical and moral reasoning. Prior work has shown that moral reasoning systems (and more generally, theory-of-mind reasoning systems) should be at least as expressive as first-order (quantified) modal logic (QML) to be well-behaved. While existing work on moral reasoning has focused on counterfactual-free QML, we present a fully specified and implemented formal system that includes counterfactual conditionals. We validate our model with two projects. In the first, we demonstrate that our system can model a complex moral principle, the doctrine of double effect. In the second, we use the system to build a dataset of true and false counterfactuals as licensed by our theory, which we believe can be useful for other researchers. The second project also shows that our model is computationally feasible.


## Introduction

Natural-language counterfactual conditionals (or simply counterfactuals) are statements that have two parts (semantically, and sometimes syntactically): an antecedent and a consequent. Counterfactual conditionals differ from standard material conditionals in that the mere falsity of the antecedent does not make the conditional true. For example, the sentence “If John had gone to the doctor, John would not be sick now” is considered a counterfactual, as it is usually uttered when John has not, in fact, gone to the doctor. Note that the surface syntactic form of such conditionals might not be an explicit conditional such as “If X then Y”; for example: “John going to the doctor would have prevented John being sick now.” Material conditionals in classical logic fail when used to model such sentences. Counterfactuals occur in certain ethical principles and are often associated with moral reasoning. We present a formal computational model for such conditionals.

The plan for the paper is as follows. We give a brief overview of how counterfactuals are used, focusing on moral reasoning. Then we briefly discuss prior art in modeling counterfactuals, including computational studies of counterfactuals. We then present our formal system, used as a foundation for building our model of counterfactual conditionals. After this, we present the model itself and prove some general properties of the system. We end by discussing two projects to demonstrate how the formal model can be used.

## Use of Counterfactual Conditionals

Counterfactual reasoning plays a vital role in human moral reasoning. For instance, the doctrine of double effect (DDE) requires counterfactual statements in its full formalization. DDE is an attractive target for building ethical machines, as numerous empirical studies have shown that human behavior in moral dilemmas (situations in which all available actions have large positive and negative effects) accords with what the doctrine, or modified versions of it, predicts. Another reason for considering DDE is that many legal systems use the doctrine to define criminality. We briefly state the doctrine below. Assume that we have an ethical hierarchy of actions, as in the deontological case (e.g. forbidden, morally neutral, obligatory); see [McNamara2010]. (More fine-grained hierarchies also exist in the literature [Bringsjord2015].) We also assume that we have a utility or goodness function for states of the world or effects, as in the consequentialist case. Given an agent, an action in a situation at a time is said to be DDE-compliant iff (the clauses are verbatim from [Govindarajulu and Bringsjord2017a]):

**DDE (Informal)**

1. the action is not forbidden;

2. the net utility or goodness of the action is greater than some positive amount;

3. the agent performing the action intends only the good effects;

4. the agent does not intend any of the bad effects;

5. the bad effects are not used as a means to obtain the good effects; and

6. if there are bad effects, the agent would rather the situation be different and the agent not have to perform the action. That is, the action is unavoidable.

Note that while [Govindarajulu and Bringsjord2017a] present a formalization, with a corresponding implementation and “stopwatch” test, of the first four clauses above, there is no formalization of the remaining clauses. The work presented here will enable such a formalization. The last clause has been rewritten below to make its counterfactual nature explicit.

**The Last Clause, Broken Up**

1. The agent desires that the current situation be different.

2. The agent believes that if the agent itself were in a different situation, the agent would not perform the action.

Separately, [Migliore et al.2014] report an empirical study in which subjects were elicited to produce counterfactual answers to questions in a mix of situations with and without moral content; the answers take the form of counterfactual conditionals. Their study shows, with statistical significance, that humans spent more time responding to situations with moral content. This suggests the presence of non-trivial counterfactual reasoning in morally charged situations.

Counterfactual reasoning also plays an important role in the intelligence community, in counterfactual forecasting [Lehner2017]. In counterfactual forecasting, analysts try to forecast what would have happened if the situation in the past had been different from what we know, and, as [iarpa_2017_focus] states, there is a need for formal/automated tools for counterfactual reasoning.

## Prior Art

Most formal modeling of counterfactuals has been carried out in work on subjunctive conditionals. While there are varying definitions of what a subjunctive conditional is, the hallmark of such conditionals is that the antecedent, while not necessarily contrary to established facts (as is the case in counterfactuals), does speak of what could hold even if it presently doesn’t; the consequent then expresses that which would (at least supposedly) hold were this antecedent to obtain. (E.g., “If you were to practice every day, your serve would be reliable” is a subjunctive conditional; it might not be the case that you’re not already practicing hard. However, “If you had practiced hard, your serve would have been reliable” is a counterfactual because, as it’s said in the literature, the antecedent is “contrary to fact.”) Hence, to ease exposition herein, we simply note that (i) subjunctives are assuredly non-truth-functional conditionals, and (ii) we can take subjunctive conditionals to be a superclass of counterfactual conditionals. A lively overview of formal systems for modeling subjunctive conditionals can be found in [Nute1984]. Roughly, prior work can be divided into cotenability theories versus possible-worlds theories. In cotenability theories, a subjunctive “if ϕ then ψ” holds iff ψ follows from ϕ together with a set of laws (logical/physical/legal) cotenable with ϕ. One major issue with many theories of cotenability is that they at least have the appearance of circularly defining cotenability in terms of cotenability. In possible-worlds theories, the semantics of subjunctive conditionals are defined in terms of possible worlds. (E.g., [Lewis1973] famously aligns each possible world with an ordering of relative similarity among worlds, and is thereby able to capture in clever fashion the idea that a counterfactual holds just in case the most similar possible world satisfying its antecedent also satisfies its consequent.)
While, as is plain, we are not fans of possible-worlds semantics, those attracted to such an approach to counterfactuals would do well, in our opinion, to survey [Fritz and Goodman2017]. While conceptually attractive to a degree, such approaches are problematic. For example, many possible-worlds accounts are vulnerable to proofs that certain conceptions of possible worlds are provably inconsistent (e.g. see [Bringsjord1985]). For detailed argumentation against possible-worlds semantics for counterfactual conditionals, see [Ellis, Jackson, and Pargetter1977].

Relevance logics strive to fix issues such as explosion and non-relevance of antecedents and consequents in material conditionals; see [Mares2014] for a wide-ranging overview. The main concern in relevance logics, as the name implies, is to ensure that there is some amount of relevance between an antecedent and a consequent of a conditional, and between the assumptions in a proof and its conclusion. Our model does not reflect this concern, as common notions of relevance such as variable/expression sharing become muddled when the system includes equality, and become even more muddled when intensionality is added. Most systems of relevance logic are primarily possible-worlds-based and share some of the concerns we have discussed above. For example, [Mares and Fuhrmann1995, Mares2004] discuss relevance logics that can handle counterfactual conditionals, but they are all based on possible-worlds semantics, and the formulae are only extensional in nature. (By extensional logics, we refer broadly to non-modal logics such as propositional logic, first-order logic, second-order logic, etc.; by intensional logics, we refer to modal logics. Note that this is different from intentions, which can be represented by intensional operators, just as knowledge, belief, desire, etc. can be; see [Zalta1988] for an overview.) Work in [Pereira and Saptawijaya2016] falls under extensional systems and, as we explain below, is not sufficient for our modeling requirements.

Differently, recent work in natural language processing by [Son et al.2017] focuses on detecting counterfactuals in social-media updates. Due to the low base rate of counterfactual statements, they use a combined rule-based and statistical method for detection. Their work is on detecting (and not evaluating, analyzing further, or reasoning over) counterfactual conditionals and other counterfactual statements.

## Needed Expressivity

Our modeling goals require a formal system of adequate expressivity to be used in moral and other theory-of-mind reasoning tasks. The system should be free of any consistency or soundness issues. In particular, it needs to avoid inconsistencies such as the one demonstrated below, modified from [Govindarajulu and Bringsjord2017a]. In the inference chain below, we have an agent a who knows that the murderer is the person who owns the gun. Agent a does not know that agent m is the murderer, but it is true that m is the owner of the gun. If the knowledge operator K is a simple first-order logic predicate, we get the proof shown below, which produces a contradiction from sound premises.

**Modeling Knowledge (or any Intension) in First-Order Logic**

    1  K(a, Murderer(owner(gun)))    ; given
    2  ¬K(a, Murderer(m))            ; given
    3  owner(gun) = m                ; given
    4  K(a, Murderer(m))             ; first-order inference from 3 and 1
    5  ⊥                             ; first-order inference from 4 and 2

Even more robust representation schemes can still result in such inconsistencies, or at least unsoundness, if the scheme is extensional in nature [Bringsjord and Govindarajulu2012]. Issues such as this arise from uniform handling of terms that refer to the same object in all contexts. This is prevented if the formal system is a quantified modal logic (or another sufficiently expressive intensional system). We present one such quantified modal logic below.
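To make the failure concrete, the following toy sketch (our own illustrative encoding in Python, not part of any cited system) treats the knowledge operator as an ordinary predicate and closes the premises under substitution of equals; the contradiction then falls out mechanically:

```python
# Illustrative (hypothetical) encoding: "knows" as an ordinary first-order
# predicate over triples, with equality handled by substitution of
# co-referring terms (Leibniz's law).
facts = {
    ("K", "a", ("Murderer", ("owner", "gun"))),  # a knows: the gun's owner is the murderer
    ("~K", "a", ("Murderer", "m")),              # a does not know: m is the murderer
}
equalities = {("owner", "gun"): "m"}             # but in fact owner(gun) = m

def substitute(term, eqs):
    """Replace co-referring terms everywhere -- valid for extensional
    predicates, but unsound inside an epistemic 'predicate' like K."""
    if term in eqs:
        return eqs[term]
    if isinstance(term, tuple):
        return tuple(substitute(t, eqs) for t in term)
    return term

# Closing the fact set under substitution treats K extensionally ...
closed = facts | {substitute(f, equalities) for f in facts}

def contradictory(fs):
    """True iff some sentence is both K-asserted and K-denied."""
    return any(("~K",) + f[1:] in fs for f in fs if f[0] == "K")

print(contradictory(facts))   # False: the premises themselves are sound
print(contradictory(closed))  # True: K(a, Murderer(m)) and its denial
```

The substitution step is exactly the uniform handling of co-referring terms that a quantified modal logic blocks inside modal contexts.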

## Background: Formal System

In this section, we present the formal system in which we model counterfactual conditionals: the deontic cognitive event calculus (DCEC). Arkoudas and Bringsjord [ArkoudasAndBringsjord2008Pricai] introduced, for their modeling of the false-belief task, the general family of cognitive event calculi to which DCEC belongs. DCEC has been used to formalize and automate highly intensional moral reasoning processes and principles, such as akrasia (giving in to temptation that violates moral principles) [Bringsjord et al.2014] and the doctrine of double effect described above. (The description of DCEC here is mostly the subset of the discussion in [Govindarajulu and Bringsjord2017a] relevant for us.)

Briefly, DCEC is a sorted (i.e. typed) quantified modal logic (also known as a sorted first-order modal logic). The calculus has a well-defined syntax and proof calculus, outlined below. The proof calculus is based on natural deduction [Gentzen1935], commonly used by practicing mathematicians and logicians as well as to teach logic; it includes all the standard introduction and elimination rules for first-order logic, as well as inference schemata for the modal operators and related structures.

### Syntax

DCEC is a sorted calculus. A sorted system can be regarded as analogous to a typed single-inheritance programming language. We show below some of the important sorts used in DCEC. Among these, the Agent, Action, and ActionType sorts are not native to the event calculus.

The syntax can be thought of as having two components: a first-order core and a modal system that builds upon this first-order core. The figures below show the syntax and inference schemata of DCEC. The syntax is that of a quantified modal logic. The first-order core of DCEC is the event calculus [Mueller2006]. Commonly used function and relation symbols of the event calculus are included. Other calculi for modeling commonsense and physical reasoning (e.g. the situation calculus) can easily be swapped in for the event calculus.

The modal operators present in the calculus include the standard operators for knowledge K, belief B, desire D, intention I, etc. The general format of an intensional operator is K(a, t, ϕ), which says that agent a knows at time t the proposition ϕ. Here ϕ can in turn be any arbitrary formula. Also note the following modal operators: P for perceiving a state, C for common knowledge, S for agent-to-agent communication and public announcements, B for belief, D for desire, I for intention, and, finally and crucially, a dyadic deontic operator O that states when an action is obligatory or forbidden for agents. It should be noted that DCEC is one specimen in a family of easily extensible cognitive calculi.

The calculus also includes a dyadic (arity = 2) deontic operator O. It is well known that the unary ought of standard deontic logic leads to contradictions. Our dyadic version of the operator blocks the standard list of such contradictions, and beyond. (An overview of this list is given lucidly in [McNamara2010].)

**Syntax**

    S ::= Agent | ActionType | Action ⊑ Event | Moment | Fluent

    f ::= action : Agent × ActionType → Action
        | initially : Fluent → Formula
        | Holds : Fluent × Moment → Formula
        | happens : Event × Moment → Formula
        | clipped : Moment × Fluent × Moment → Formula
        | initiates : Event × Fluent × Moment → Formula
        | terminates : Event × Fluent × Moment → Formula
        | prior : Moment × Moment → Formula

    t ::= x : S | c : S | f(t1, …, tn)

    ϕ ::= q : Formula | ¬ϕ | ϕ ∧ ψ | ϕ ∨ ψ | ∀x : ϕ(x)
        | P(a, t, ϕ) | K(a, t, ϕ) | C(t, ϕ) | S(a, b, t, ϕ) | S(a, t, ϕ)
        | B(a, t, ϕ) | D(a, t, ϕ) | I(a, t, ϕ)
        | O(a, t, ϕ, (¬)happens(action(a∗, α), t′))
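As a quick illustration of how the sorted signature constrains term formation, here is a hypothetical Python rendering of the grammar’s function symbols (the symbol names and sorts are taken from the grammar above; the checker itself is our own sketch, not the actual implementation):

```python
# Hypothetical rendering of the sorted signature: each event-calculus symbol
# maps to (argument sorts, result sort).
SUBSORTS = {"Action": "Event"}  # Action ⊑ Event

SIGNATURE = {
    "action":     (("Agent", "ActionType"), "Action"),
    "initially":  (("Fluent",), "Formula"),
    "Holds":      (("Fluent", "Moment"), "Formula"),
    "happens":    (("Event", "Moment"), "Formula"),
    "clipped":    (("Moment", "Fluent", "Moment"), "Formula"),
    "initiates":  (("Event", "Fluent", "Moment"), "Formula"),
    "terminates": (("Event", "Fluent", "Moment"), "Formula"),
    "prior":      (("Moment", "Moment"), "Formula"),
}

def sort_of(term, env):
    """Sort of a term: a sorted variable/constant (looked up in env),
    or an application f(t1, ..., tn) checked against SIGNATURE."""
    if isinstance(term, str):
        return env[term]
    f, *args = term
    arg_sorts, result = SIGNATURE[f]
    assert len(args) == len(arg_sorts), f"wrong arity for {f}"
    for arg, expected in zip(args, arg_sorts):
        actual = sort_of(arg, env)
        # A term of a subsort may fill a supersort slot (Action ⊑ Event).
        assert actual == expected or SUBSORTS.get(actual) == expected, \
            f"{f}: expected {expected}, got {actual}"
    return result

env = {"a": "Agent", "alpha": "ActionType", "t": "Moment"}
print(sort_of(("happens", ("action", "a", "alpha"), "t"), env))  # Formula
```

For instance, happens(action(a, α), t) checks as a Formula because the subsort declaration Action ⊑ Event lets an Action term fill an Event slot.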

### Inference Schemata

The figure below shows a fragment of the inference schemata for DCEC. First-order natural deduction introduction and elimination rules are not shown. The schemata R_K and R_B let us model idealized agents that have their knowledge and beliefs closed under the DCEC proof theory. While humans are not deductively closed, these two rules let us model more closely how more deliberate agents, such as organizations, nations, and other strategic actors, reason. (Some dialects of cognitive calculi restrict the number of iterations on intensional operators.) One schema states that knowledge of a proposition implies that the proposition holds; another ties intentions directly to perceptions (this model does not take into account agents that could fail to carry out their intentions); and a further schema dictates how obligations get translated into known intentions.

**Inference Schemata (Fragment)**

    [R_K]  From K(a, t1, Γ), Γ ⊢ ϕ, and t1 ≤ t2, infer K(a, t2, ϕ)
    [R_B]  From B(a, t1, Γ), Γ ⊢ ϕ, and t1 ≤ t2, infer B(a, t2, ϕ)

### Semantics

The semantics for the first-order fragment is the standard first-order semantics. The truth-functional connectives and quantifiers for pure first-order formulae all have the standard first-order semantics. The semantics of the modal operators differs from what is available in the so-called Belief-Desire-Intention (BDI) logics [Rao and Georgeff1991] in many important ways. For example,  explicitly rejects possible-worlds semantics and model-based reasoning, instead opting for a proof-theoretic semantics and the associated type of reasoning commonly referred to as natural deduction [Gentzen1935, Francez and Dyckhoff2010]. Briefly, in this approach, meanings of modal operators are defined via arbitrary computations over proofs, as we will see for the counterfactual conditional below.

### Reasoner (Theorem Prover)

Reasoning is performed through the first-order modal logic theorem prover ShadowProver [Govindarajulu and Bringsjord2017a]. While we use the prover in our simulations, describing the prover in more detail is out of scope for the present paper. (The underlying first-order prover is SNARK [Stickel et al.1994].)

## The Formal Problem

At a time point t, we are given a set of sentences Γ describing what the system (or agent at hand) is working with. This set can be taken to describe the current state of the world at t; the state of the world before t, up to some horizon in the past; and also desires and beliefs about the future. We are given a counterfactual conditional ϕ ↪ ψ with possibly (but not always) Γ ⊢ ¬ϕ. We require that our formal model provide us with the following:

**Required Conditions**

1. Given a set of formulae Γ and a sentence of the form ϕ ↪ ψ, we should be able to tell whether or not Γ ⊢ ϕ ↪ ψ.

2. Given a set of formulae Γ and a sentence in which the counterfactual occurs under a modal operator (e.g. B or K), we should be able to tell whether or not that sentence is provable from Γ.

### On Using the Material Conditional

If we were to use a simple material conditional ϕ → ψ, then if Γ ⊢ ¬ϕ, we trivially have Γ ⊢ ϕ → ψ, provided the proof calculus subsumes standard propositional logic, as is the case with logics used in moral reasoning. Another issue is that whether ϕ ↪ ψ holds is not simply a function of whether ϕ holds and whether ψ holds; that is, ↪ is not a truth-functional connective, unlike the material conditional →.

## Modeling Counterfactual Conditionals

The general intuition is that, given Γ ⊢ ¬ϕ, one has to drop some of the formulae in Γ, arriving at Γ′, and then if Γ′ + ϕ ⊢ ψ, we can conclude that Γ ⊢ ϕ ↪ ψ. More precisely, ϕ ↪ ψ can be proven from Γ iff there is a subset Γ′ of Γ consistent with ϕ such that Γ′ + ϕ ⊢ ψ. Since we are using a natural deduction proof calculus, we need to specify introduction and elimination rules for ↪. Let Cons[Γ] denote that a set of formulae Γ is consistent.

### Extensional Context

**↪ Introduction**

    (Γ ⊢ ϕ ↪ ψ)  ⇔  ¬Cons[{ϕ}] ∨ ∃Γ′ :
        [I1] Γ′ ⊆ Γ
        [I2] Cons[Γ′ + ϕ]
        [I3] (Γ′ + ϕ) ⊢ ψ

The elimination rule for ↪ is much simpler and resembles the rule for the material conditional →.

**↪ Elimination**

    Γ + {ϕ, ϕ ↪ ψ} ⊢ ψ

### Intensional Context

The inference schemata given below apply to the presence of the counterfactual in any arbitrary modal context Υ. The context Υ[ϕ] of a formula ϕ is defined as below. Let ⟨⟩ denote the empty list and let ⊕ denote list concatenation.

**Definition of Υ**

    Υ[ϕ] = ⟨⟩                  if the top connective in ϕ is not modal
    Υ[ϕ] = ⟨K, a, t⟩ ⊕ Υ[ψ]    if ϕ ≡ K(a, t, ψ)
    Υ[ϕ] = ⟨B, a, t⟩ ⊕ Υ[ψ]    if ϕ ≡ B(a, t, ψ)
    Υ[ϕ] = ⟨D, a, t⟩ ⊕ Υ[ψ]    if ϕ ≡ D(a, t, ψ)
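The definition is a straightforward structural recursion; a minimal sketch (with formulae as nested tuples, our own illustrative encoding) is:

```python
# Minimal sketch of context extraction: formulae are nested tuples such as
# ('K', 'a', 't', body); operator names follow the definition above.
MODALS = ("K", "B", "D")

def context(phi):
    """Return Υ[ϕ]: the list of ⟨operator, agent, time⟩ prefixes, built by
    list concatenation; ⟨⟩ when the top connective is not modal."""
    if isinstance(phi, tuple) and phi[0] in MODALS:
        op, agent, time, body = phi
        return [(op, agent, time)] + context(body)
    return []

# Υ[K(a, t, B(b, t, Holds(f, t)))] = ⟨K, a, t⟩ ⊕ ⟨B, b, t⟩
print(context(("K", "a", "t", ("B", "b", "t", ("Holds", "f", "t")))))
```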

With the context defined as above, the inference schemata for ↪ occurring within a modal context are given below.

We also use the following shorthands in the two rules given below:

    (i)  Υ[ϕ] denotes that formula ϕ has context Υ.
    (ii) Υ[Γ] denotes the subset of Γ with context Υ.

**↪ Introduction (in Context)**

    (Γ ⊢ Υ[ϕ ↪ ψ])  ⇔  ¬Cons[{ϕ}] ∨ ∃Γ′ :
        [I1] Γ′ ⊆ Υ[Γ]
        [I2] Cons[Γ′ + ϕ]
        [I3] (Γ′ + ϕ) ⊢ ψ

The elimination rule for ↪ under an arbitrary context is given below.

**↪ Elimination (in Context)**

    Γ + {Υ[ϕ], Υ[ϕ ↪ ψ]} ⊢ Υ[ψ]

For a simple example of the above rules, see the experiments section below.

### A Note on Implementing the Introduction Rules

There are two possible dynamic-programming algorithms. In the worst case, both amount to searching over all possible subsets of a given Γ. In the first algorithm, we start from smaller subsets Γ′ and compute whether Cons[Γ′ + ϕ] and whether (Γ′ + ϕ) ⊢ ψ; for larger sets that include a subset already known to be inconsistent with ϕ, consistency need not be recomputed. In the second algorithm, we start with larger subsets; for smaller sets contained in a subset that fails to prove ψ, provability need not be recomputed. If the logic is first-order or above, the second algorithm is preferable. If the logic encompasses first-order logic, then computing Cons for any set is hard in the general case. In our implementation, we approximate Cons by querying a prover up to a time limit; as the time limit increases, the approximation becomes better.
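To illustrate the smaller-subsets-first search with pruning, here is a minimal propositional sketch: a truth-table entailment check stands in for the actual prover (ShadowProver), and all encodings and names here are our own illustration:

```python
from itertools import combinations, product

# Formulae are strings (atoms) or nested tuples such as ('->', 'p', 'q').
def atoms(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(a) for a in f[1:]))

def ev(f, v):
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == 'not': return not ev(f[1], v)
    if op == 'and': return ev(f[1], v) and ev(f[2], v)
    if op == 'or':  return ev(f[1], v) or ev(f[2], v)
    if op == '->':  return (not ev(f[1], v)) or ev(f[2], v)
    raise ValueError(op)

def consistent(fs):
    """Cons[fs]: some truth assignment satisfies every formula."""
    vs = sorted(set().union({'p'}, *(atoms(f) for f in fs)))
    return any(all(ev(f, dict(zip(vs, bits))) for f in fs)
               for bits in product([True, False], repeat=len(vs)))

def proves(fs, goal):
    return not consistent(list(fs) + [('not', goal)])

def counterfactual(gamma, phi, psi):
    """Introduction rule: gamma |- phi ~> psi iff phi is inconsistent, or
    some subset G' of gamma consistent with phi has (G' + phi) |- psi."""
    if not consistent([phi]):              # vacuous case of the rule
        return True
    bad = []                               # subsets inconsistent with phi
    for k in range(len(gamma) + 1):        # smaller subsets first
        for sub in combinations(gamma, k):
            if any(set(b) <= set(sub) for b in bad):
                continue                   # supersets of a bad set stay bad
            if not consistent(list(sub) + [phi]):
                bad.append(sub)            # [I2] fails; prune supersets
                continue
            if proves(list(sub) + [phi], psi):
                return True                # [I1]-[I3] all hold
    return False
```

On a propositionalized version of the Socrates problem from the experiments section (h for Human(socrates), m for Mortal(socrates)), counterfactual([('->', 'h', 'm'), 'h'], ('not', 'm'), ('not', 'h')) returns True, while the absurd counterfactual with consequent P ∧ ¬P returns False.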

### Properties of the System

Now we canvass some meta-theorems about the system. All four inference schemata given above, when added to a monotonic proof theory, preserve monotonicity.

**Theorem 1: Monotonicity Preservation**

    If ⊢ is monotonic, then ⊢ augmented with Rcf1, Rcf2, Rcf3, and Rcf4 is still monotonic.

Proof Sketch: Notice that the right-hand side of the condition for the introduction rules stays satisfied if we replace Γ with any superset of Γ. The elimination rules hold regardless of Γ.

We assume that ⊢ is monotonic: for any Γ′ ⊇ Γ, if Γ ⊢ ϕ then Γ′ ⊢ ϕ. We now show that some desirable properties that hold for other systems in the literature also hold for the system presented here.

**Property 1; in [Nute1984]**

    {} ⊢ ϕ ↪ ϕ

Proof: If ¬Cons[{ϕ}], we are done. Otherwise, since Cons[{ϕ}], taking Γ′ = {} we can see that [I1], [I2], and [I3] hold.

**Property 2; in [Pollock1976]**

    ({} ⊢ ϕ → ψ) ⇒ ({} ⊢ ϕ ↪ ψ)

Proof: Either ¬Cons[{ϕ}] or Cons[{ϕ}]. If the former holds, we are done. If the latter holds, then take Γ′ = {}; we have Cons[{ϕ}] and {ϕ} ⊢ ψ (from ϕ → ψ by modus ponens), satisfying all three conditions [I1], [I2], and [I3] and letting us introduce ϕ ↪ ψ.

**Property 3; in [Pollock1976]**

    ({} ⊢ ϕ → ψ) ⇒ ({} ⊢ (χ ↪ ϕ) → (χ ↪ ψ))

Proof: Assume χ ↪ ϕ. We need to prove χ ↪ ψ. If ¬Cons[{χ}], we succeed. Otherwise, take Γ′ to be {χ ↪ ϕ}. By the elimination rule we have Γ′ + χ ⊢ ϕ, and using ϕ → ψ we have Γ′ + χ ⊢ ψ, thus satisfying our conditions [I1], [I2], and [I3] to introduce χ ↪ ψ.

**Property 4; in [Pollock1976] and [Nute1984]**

    If Γ ⊢ ϕ ↪ ψ, then Γ ⊢ ϕ → ψ

Proof: Γ ⊢ ϕ ↪ ψ; therefore Γ + {ϕ} ⊢ ψ using the elimination rule. Using the deduction theorem we have Γ ⊢ ϕ → ψ.

**Property 5; in [Nute1984]**

    If Γ ⊢ (¬ψ) ↪ ψ, then Γ ⊢ ϕ → ψ

Proof: Given Γ ⊢ (¬ψ) ↪ ψ, we have Γ + {¬ψ} ⊢ ψ using the elimination rule, and using reductio we have Γ ⊢ ψ, giving us Γ ⊢ ϕ → ψ.

**Property 6**

    If Γ ⊢ (ϕ ↪ ψ) ∧ (ψ ↪ ϕ), then Γ ⊢ (ϕ → χ) ⇔ Γ ⊢ (ψ → χ)

Proof: We prove the left-to-right direction of the biconditional:

    Γ ⊢ (ϕ → χ) ⇒ Γ ⊢ (ψ → χ)

Assume Γ ⊢ ϕ → χ. We need to prove Γ ⊢ ψ → χ. Since we are given Γ ⊢ ψ ↪ ϕ, either ¬Cons[{ψ}] or there is a Γ′ ⊆ Γ such that Cons[Γ′ + ψ] and Γ′ + ψ ⊢ ϕ. If the former holds, we are finished. If the latter holds, we then have Γ + {ψ} ⊢ ϕ, since ⊢ is monotonic. Using the given Γ ⊢ ϕ → χ and → elimination, we obtain Γ + {ψ} ⊢ χ. Therefore, Γ ⊢ ψ → χ.

**Property 7; in [Nute1984]**

    If {} ⊢ (ϕ ↪ ψ) ∧ (ψ ↪ ϕ), then Γ ⊢ (ϕ ↪ χ) ⇔ Γ ⊢ (ψ ↪ χ)

Proof: We establish the left-to-right direction of the biconditional:

    Γ ⊢ (ϕ ↪ χ) ⇒ Γ ⊢ (ψ ↪ χ)

Assuming {} ⊢ ψ ↪ ϕ and using Property 4 and ∧ elimination, we have {} ⊢ ψ → ϕ; using Property 4 again we obtain {} ⊢ ϕ → ψ. Assume Γ ⊢ ϕ ↪ χ. We need to prove Γ ⊢ ψ ↪ χ. Since we assumed Γ ⊢ ϕ ↪ χ, either ¬Cons[{ϕ}] or there is a Γ′ ⊆ Γ such that Cons[Γ′ + ϕ] and Γ′ + ϕ ⊢ χ. If the former holds, we are done, because ψ → ϕ then gives ¬Cons[{ψ}]. If the latter holds, we have Cons[Γ′ + ψ] and Γ′ + ψ ⊢ χ, as ψ ⊢ ϕ and ϕ ⊢ ψ.

**Property 8**

    If {} ⊢ (ϕ ↪ ψ) ∧ (ψ ↪ ϕ), then Γ ⊢ (χ ↪ ϕ) ⇔ Γ ⊢ (χ ↪ ψ)

Proof: Similar to the previous proof, we prove the left-to-right direction of the biconditional:

    Γ ⊢ (χ ↪ ϕ) ⇒ Γ ⊢ (χ ↪ ψ)

Assuming {} ⊢ ϕ ↪ ψ and using Property 4 and ∧ elimination, we deduce {} ⊢ ϕ → ψ. Assume Γ ⊢ χ ↪ ϕ. We need to show Γ ⊢ χ ↪ ψ. Since we assumed Γ ⊢ χ ↪ ϕ, either ¬Cons[{χ}] or there is a Γ′ ⊆ Γ such that Cons[Γ′ + χ] and Γ′ + χ ⊢ ϕ. If the former, we are done; if the latter, we have Γ′ + χ ⊢ ψ via ϕ → ψ.

The next property states that if we are given Γ ⊢ ϕ ↪ ψ1 and Γ ⊢ ϕ ↪ ψ2, we can reach Γ ⊢ ϕ ↪ (ψ1 ∧ ψ2). For instance, “If a had gone to the doctor, a would not have been sick” and “If a had gone to the doctor, a would be happy now” give us “If a had gone to the doctor, a would not have been sick and a would be happy now.”

One property that has been problematic for many previous formal systems is simplification of disjunctive antecedents (SDA). For instance, “If a had gone to the doctor or if a had taken medication, a would not be sick now” should give us “If a had gone to the doctor, a would not be sick now, and if a had taken medication, a would not be sick now.” Logics based on possible worlds find it quite challenging to handle this property, yet it seems a necessary property to secure [Ellis, Jackson, and Pargetter1977], though there is some disagreement on this front [Loewer1976]. As a middle ground, we can prove “If a had gone to the doctor, a would not be sick now, or if a had taken medication, a would not be sick now.”

**Property 9**

    If Γ ⊢ (ϕ1 ∨ ϕ2) ↪ ψ, then Γ ⊢ (ϕ1 ↪ ψ) ∨ (ϕ2 ↪ ψ)

Proof: We need to prove either Γ ⊢ ϕ1 ↪ ψ or Γ ⊢ ϕ2 ↪ ψ and use ∨ introduction to get the conclusion. If ¬Cons[{ϕ1 ∨ ϕ2}], we have ¬Cons[{ϕ1}] and ¬Cons[{ϕ2}], and we then reach our conclusion. Otherwise, if Cons[{ϕ1 ∨ ϕ2}], then there is a Γ′ serving as the required subset. We need to prove three clauses:

1. [I1] is satisfied as Γ′ ⊆ Γ.

2. [I2] is satisfied because if Cons[Γ′ + (ϕ1 ∨ ϕ2)], then at least one of Cons[Γ′ + ϕ1] or Cons[Γ′ + ϕ2] holds; otherwise, if ¬Cons[Γ′ + ϕ1] and ¬Cons[Γ′ + ϕ2], we get ¬Cons[Γ′ + (ϕ1 ∨ ϕ2)] through ∨ elimination, which overthrows our assumption that Cons[Γ′ + (ϕ1 ∨ ϕ2)]. So either Cons[Γ′ + ϕ1] or Cons[Γ′ + ϕ2] holds; then [I2] is satisfied for either ϕ1 or ϕ2. Assume that it holds for the former.

3. [I3] is satisfied as ϕ1 ⊢ ϕ1 ∨ ϕ2 and we have Γ′ + (ϕ1 ∨ ϕ2) ⊢ ψ; therefore Γ′ + ϕ1 ⊢ ψ.

With stronger assumptions we can obtain a property that directly resembles SDA.

**Property 10**

    If Γ ⊢ (ϕ1 ∨ ϕ2) ↪ ψ and Γ ⊬ ¬ϕ1 and Γ ⊬ ¬ϕ2, then Γ ⊢ (ϕ1 ↪ ψ) ∧ (ϕ2 ↪ ψ)

Proof: The proof is similar to the previous proof, needing only minor modifications.

**Property 11; in [Pollock1976]**

    If Γ ⊢ (ϕ ↪ ψ) and Γ ⊢ (ϕ ↪ χ), then Γ ⊢ (ϕ ∧ ψ) ↪ χ

Proof: Since we have Γ ⊢ ϕ ↪ χ, either ¬Cons[{ϕ}] or there is a Γ′ ⊆ Γ such that Cons[Γ′ + ϕ] and Γ′ + ϕ ⊢ χ. If the former holds, we have ¬Cons[{ϕ ∧ ψ}], giving us Γ ⊢ (ϕ ∧ ψ) ↪ χ. If the latter holds, we have Γ′ + ϕ ⊢ χ. Assume that ¬Cons[Γ′ + (ϕ ∧ ψ)]; then we get Γ′ + ϕ ⊢ ¬ψ, giving us Γ ⊢ ϕ ↪ ¬ψ through introduction, and hence Γ ⊢ ϕ → ¬ψ by Property 4. But using Property 4 and the given Γ ⊢ ϕ ↪ ψ, we have Γ ⊢ ϕ → ψ. This goes against our assumption, giving us that Cons[Γ′ + (ϕ ∧ ψ)]. Since ϕ ∧ ψ ⊢ ϕ, we also have Γ′ + (ϕ ∧ ψ) ⊢ χ. This satisfies [I1], [I2], and [I3], which yields Γ ⊢ (ϕ ∧ ψ) ↪ χ.

Finally, we have the following theorem, the proof of which can be obtained by exploiting all the properties above.

**Theorem 2**

All the above properties hold if every counterfactual ϕ ↪ ψ is replaced by its contextual version Υ[ϕ ↪ ψ].

Given the above, and using the rules for ↪ in intensional contexts, we can prove the following theorem, which gives an account of how an agent might assert a counterfactual conditional in a moral/ethical situation.

**Theorem 3**

    (Γ ⊢ B(a, t, O(a, t, ϕ, χ)) ∧ O(a, t, ϕ, χ)) ⇒ (Γ ⊢ B(a, t, ϕ ↪ χ))

## Experiments

We describe two experiments that demonstrate the model in action, the first in moral dilemmas and the second in general situations. (Axioms for both experiments, the data, and the reasoning system can be obtained here: https://goo.gl/nDZtWX)

### Counterfactual Conditionals in Moral Dilemmas

We look at moral dilemmas used in [Govindarajulu and Bringsjord2017a]. Each dilemma has an axiomatization; two such dilemmas are axiomatized and studied in [Govindarajulu and Bringsjord2017a]. Assume that an agent is facing a given dilemma. The axiomatization includes the agent's knowledge and beliefs about the dilemma. We show that, for each dilemma, we can derive the two conditions into which the last clause of the doctrine was broken up earlier. Note that both conditions talk about situations. Though the event calculus does not directly model situations, we can use fluents to do so. A situation is formalized as an object of sort Situation:

    Situation ⊏ Object

We have the following additional symbols that tell us when an agent is in a situation and what actions are possible for an agent in a situation at a given time:

    in : Agent × Situation → Fluent
    Action : Agent × ActionType × Situation × Moment → Formula

We have the following axiom which states that the only actions that an agent can perform in a situation at a time are those sanctioned by the Action predicate.

    ∀ a:Agent, α:ActionType, t:Moment.
        happens(action(a, α), t) → ∃σ:Situation. (Holds(in(a, σ), t) ∧ Action(a, α, σ, t))

As a warmup to the second condition, we state the first below. Let the current agent be I and the current time be t. The formula below states that the agent believes it is in a situation σ and desires to be in a situation different from σ with at least one action type that is not forbidden and does not have any negative effects. (Note that this is a mere first formulation of a complex mental state; further refinements, simple and drastic, are possible and expected. The utility function used here denotes the total utility of a fluent over all agents and all times.)

[linecolor=white, frametitle= Formalization of , frametitlebackgroundcolor=gray!25, backgroundcolor=gray!10, roundcorner=8pt]

Let α_D be the DDE-compliant action that the agent is saddled with in the current situation σ. Let Θ(σ,t) be the inner statement in the desire modal operator above. The statement below formalizes the counterfactual-belief condition and can be read as the agent believing that if it were in a different situation with at least one action that is not forbidden and does not have any negative effects, it would then not perform the action required of it in the dilemma.

**Formalization of the counterfactual-belief condition**

 B(I, t, Θ(σ,t) ↪ ¬happens(action(I, α_D), t+1))

We derive the above two conditions from: (1) axioms describing situations; and (2) common knowledge dictating that agents should perform only non-forbidden actions and, in a non-dilemma situation, will perform an action with only positive effects. Both derivations go through automatically in our reasoner.

### Evaluation of the System

We demonstrate the feasibility of our system by showing the model working on a small dataset of representative problems. Each problem comes with its own set of assumptions Γ. For each problem, we have a statement that is provably counterfactual, that is, Γ proves its negation. Using this statement, we build three conditionals: (1) a true counterfactual conditional; (2) an absurd material conditional; and (3) an absurd counterfactual. The number of premises ranges from 2 to 15. One simple problem is shown below:

 Γ = {∀x. (Human(x) → Mortal(x)), Human(socrates)}
 Γ ⊢ ¬Mortal(socrates) ↪ ¬Human(socrates)
 Γ ⊢ ¬Mortal(socrates) → (P ∧ ¬P)
 Γ ⊬ ¬Mortal(socrates) ↪ (P ∧ ¬P)
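For intuition, these checks can be approximated by a toy propositional procedure. The sketch below is *not* the proof theory of our system: it uses a deliberately simplified, credulous reading on which a counterfactual holds when some maximal antecedent-consistent subset of the premises, together with the antecedent, proves the consequent. Clauses are sets of integer literals (negative = negated); H, M, P stand for Human(socrates), Mortal(socrates), and P:

```python
from itertools import combinations

def resolvents(c1, c2):
    """All binary resolvents of two clauses (frozensets of int literals)."""
    return [frozenset((c1 - {l}) | (c2 - {-l})) for l in c1 if -l in c2]

def satisfiable(clauses):
    """Propositional satisfiability by saturating under binary resolution."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:            # empty clause derived: inconsistent
                    return False
                new.add(r)
        if new <= clauses:           # saturated without a contradiction
            return True
        clauses |= new

def entails(clauses, goal_cnf):
    """clauses |- goal, where goal is a conjunction (list) of clauses."""
    return all(not satisfiable(list(clauses) + [frozenset({-l}) for l in g])
               for g in goal_cnf)

def counterfactual(premises, antecedent, goal_cnf):
    """Credulous toy semantics: some maximal antecedent-consistent subset
    of the premises, plus the antecedent, must entail the goal."""
    premises = [frozenset(c) for c in premises]
    antecedent = [frozenset(c) for c in antecedent]
    maximal = []
    for size in range(len(premises), -1, -1):
        for idx in combinations(range(len(premises)), size):
            if any(set(idx) <= set(m) for m in maximal):
                continue             # already inside a maximal subset
            if satisfiable([premises[i] for i in idx] + antecedent):
                maximal.append(idx)
    return any(entails([premises[i] for i in m] + antecedent, goal_cnf)
               for m in maximal)

H, M, P = 1, 2, 3
Gamma = [{H}, {-H, M}]                       # Human(socrates); Human -> Mortal
assert counterfactual(Gamma, [{-M}], [{-H}])           # -Mortal cf-> -Human
assert entails(Gamma, [{M, P}, {M, -P}])               # -Mortal -> (P & -P)
assert not counterfactual(Gamma, [{-M}], [{P}, {-P}])  # absurd counterfactual
```

The absurd material conditional is provable simply because Γ refutes its antecedent, while the absurd counterfactual fails because every maximal antecedent-consistent premise set is, by construction, consistent with the antecedent.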

The table below shows how much time proving the true and false counterfactuals takes compared with proving the absurd material conditional for the same problems. As expected, the counterfactual conditional takes more time than the material conditional. The absurd counterfactual is merely intended as a sanity check; its large reasoning times are expected (mainly due to timeouts), and it is not a common use case.

| Formula | Mean (s) | Min (s) | Max (s) |
| --- | --- | --- | --- |
| True counterfactual conditional | 2.496 | 0.449 | 11.14 |
| Absurd material conditional | 0.169 | 0.089 | 0.341 |
| Absurd counterfactual conditional | 19.7 | 1.93 | 120.67 |

## Conclusion & Future Work

We have presented a novel formal model of counterfactual conditionals. We have applied this model to complete a formalization of an important ethical principle, the doctrine of double effect. We have also provided an implementation of a reasoning system and a dataset of counterfactual conditionals. There are three main threads for future work. First, the reasoning algorithm is quite simple and can be improved with either hand-built or learned heuristics. Second, we hope to leverage recent work on an uncertainty system for a cognitive calculus [Govindarajulu and Bringsjord2017b]. Finally, we hope to build a more robust and extensive dataset of counterfactual reasoning validated by multiple judgements from humans.

## References

• [Arkoudas and Bringsjord2008] Arkoudas, K., and Bringsjord, S. 2008. Toward Formalizing Common-Sense Psychology: An Analysis of the False-Belief Task. In Ho, T.-B., and Zhou, Z.-H., eds., Proceedings of the Tenth Pacific Rim International Conference on Artificial Intelligence (PRICAI 2008), number 5351 in Lecture Notes in Artificial Intelligence (LNAI), 17–29. Springer-Verlag.
• [Bringsjord and Govindarajulu2012] Bringsjord, S., and Govindarajulu, N. S. 2012. Given the Web, What is Intelligence, Really? Metaphilosophy 43(4):361–532.
• [Bringsjord et al.2014] Bringsjord, S.; Govindarajulu, N. S.; Thero, D.; and Si, M. 2014. Akratic Robots and the Computational Logic Thereof. In Proceedings of ETHICS 2014 (2014 IEEE Symposium on Ethics in Engineering, Science, and Technology), 22–29. IEEE Catalog Number: CFP14ETI-POD.
• [Bringsjord1985] Bringsjord, S. 1985. Are There Set-Theoretic Worlds? Analysis 45(1):64.
• [Bringsjord2015] Bringsjord, S. 2015. A 21st-Century Ethical Hierarchy for Humans and Robots. In Ferreira, I.; Sequeira, J.; Tokhi, M.; Kadar, E.; and Virk, G., eds., A World With Robots: International Conference on Robot Ethics (ICRE 2015). Berlin, Germany: Springer. 47–61.
• [Ellis, Jackson, and Pargetter1977] Ellis, B.; Jackson, F.; and Pargetter, R. 1977. An Objection to Possible-world Semantics for Counterfactual logics. Journal of Philosophical Logic 6(1):355–357.
• [Francez and Dyckhoff2010] Francez, N., and Dyckhoff, R. 2010. Proof-theoretic Semantics for a Natural Language Fragment. Linguistics and Philosophy 33:447–477.
• [Fritz and Goodman2017] Fritz, P., and Goodman, J. 2017. Counterfactuals and Propositional Contingentism. Review of Symbolic Logic 10(3):509–529.
• [Gentzen1935] Gentzen, G. 1935. Investigations into Logical Deduction. In Szabo, M. E., ed., The Collected Papers of Gerhard Gentzen. Amsterdam, The Netherlands: North-Holland. 68–131. This is an English version of the well-known 1935 German version.
• [Govindarajulu and Bringsjord2017a] Govindarajulu, N. S., and Bringsjord, S. 2017a. On Automating the Doctrine of Double Effect. In Sierra, C., ed., Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, 4722–4730. Preprint available at this url: https://arxiv.org/abs/1703.08922.
• [Govindarajulu and Bringsjord2017b] Govindarajulu, N. S., and Bringsjord, S. 2017b. Strength Factors: An Uncertainty System for a Quantified Modal Logic. Presented at the Workshop on Logical Foundations for Uncertainty and Machine Learning at IJCAI 2017, Melbourne, Australia.

• [Lehner2017] Lehner, P. 2017. Forecasting counterfactuals in uncontrolled settings. webpage.
• [Lewis1973] Lewis, D. 1973. Counterfactuals. Oxford, UK: Blackwell.
• [Loewer1976] Loewer, B. 1976. Counterfactuals with disjunctive antecedents. The Journal of Philosophy 73(16):531–537.
• [Mares and Fuhrmann1995] Mares, E. D., and Fuhrmann, A. 1995. A relevant theory of conditionals. Journal of Philosophical Logic 24(6):645–665.
• [Mares2004] Mares, E. D. 2004. Relevant Logic: A Philosophical Interpretation. Cambridge University Press.
• [Mares2014] Mares, E. 2014. Relevance logic. In Zalta, E. N., ed., The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, spring 2014 edition.
• [McNamara2010] McNamara, P. 2010. Deontic Logic. In Zalta, E., ed., The Stanford Encyclopedia of Philosophy. McNamara’s (brief) note on a paradox arising from Kant’s Law is given in an offshoot of the main entry.
• [Migliore et al.2014] Migliore, S.; Curcio, G.; Mancini, F.; and Cappa, S. F. 2014. Counterfactual thinking in moral judgment: An experimental study. Frontiers in Psychology 5.
• [Mueller2006] Mueller, E. 2006. Commonsense Reasoning: An Event Calculus Based Approach. San Francisco, CA: Morgan Kaufmann. This is the first edition of the book. The second edition was published in 2014.
• [Nute1984] Nute, D. 1984. Conditional Logic. In Gabay, D., and Guenthner, F., eds., Handbook of Philosophical Logic Volume II: Extensions of Classical Logic. Dordrecht, The Netherlands: D. Reidel. 387–439.
• [Pereira and Saptawijaya2016] Pereira, L. M., and Saptawijaya, A. 2016. Counterfactuals, Logic Programming and Agent Morality. In Rahman, S., and Redmond, J., eds., Logic, Argumentation and Reasoning. Springer. 85–99.
• [Pollock1976] Pollock, J. 1976. Subjunctive Reasoning. Dordrecht, Holland & Boston, USA: D. Reidel.
• [Rao and Georgeff1991] Rao, A. S., and Georgeff, M. P. 1991. Modeling Rational Agents Within a BDI-architecture. In Fikes, R., and Sandewall, E., eds., Proceedings of Knowledge Representation and Reasoning (KR&R-91), 473–484. San Mateo, CA: Morgan Kaufmann.
• [Son et al.2017] Son, Y.; Buffone, A.; Raso, J.; Larche, A.; Janocko, A.; Zembroski, K.; Schwartz, H. A.; and Ungar, L. 2017. Recognizing Counterfactual Thinking in Social Media Texts. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, 654–658.
• [Stickel et al.1994] Stickel, M.; Waldinger, R.; Lowry, M.; Pressburger, T.; and Underwood, I. 1994. Deductive Composition of Astronomical Software From Subroutine Libraries. In Proceedings of the Twelfth International Conference on Automated Deduction (CADE–12), 341–355.
• [Zalta1988] Zalta, E. N. 1988. Intensional Logic and the Metaphysics of Intentionality. Cambridge, MA: MIT Press.