# Blameworthiness in Games with Imperfect Information

Blameworthiness of an agent or a coalition of agents is often defined in terms of the principle of alternative possibilities: for the coalition to be responsible for an outcome, the outcome must take place and the coalition should have had a strategy to prevent it. In this paper we argue that in settings with imperfect information, not only should the coalition have had a strategy, but it also should have known that it had a strategy, and it should have known what the strategy was. The main technical result of the paper is a sound and complete bimodal logic that describes the interplay between knowledge and blameworthiness in strategic games with imperfect information.


## 1. Introduction

In this paper we study blameworthiness of agents and their coalitions in multiagent systems. Throughout centuries, blameworthiness, especially in the context of free will and moral responsibility, has been a focus of philosophical discussions Singer and Eddon (2013). These discussions continue in modern times Fields (1994); Fischer and Ravizza (2000); Nichols and Knobe (2007); Mason (2015); Widerker (2017). Frankfurt acknowledges that a dominant role in these discussions has been played by what he calls a principle of alternate possibilities: “a person is morally responsible for what he has done only if he could have done otherwise” Frankfurt (1969). As with many general principles, this one has many limitations that Frankfurt discusses; for example, when a person is coerced into doing something. Following the established tradition Widerker (2017), we refer to this principle as the principle of alternative possibilities.

Others refer to an alternative possibility as a counterfactual possibility Cushman (2015); Halpern (2016). Halpern and Pearl proposed several versions of a formal definition of causality as a relation between sets of variables that includes a counterfactual requirement Halpern (2016). Halpern and Kleiman-Weiner used a similar setting to define degrees of blameworthiness Halpern and Kleiman-Weiner (2018). Batusov and Soutchanski gave a counterfactual-based definition of causality in situation calculus Batusov and Soutchanski (2018). Alechina, Halpern, and Logan applied a counterfactual definition of causality to team plans Alechina et al. (2017).

Although the principle of alternative possibilities makes sense in settings with perfect information, it needs to be adjusted for settings with imperfect information. Indeed, consider the traffic situation depicted in Figure 1. A self-driving truck and a regular car are approaching an intersection at which the truck must stop to yield to the car. The truck is experiencing a sudden brake failure: it cannot stop, nor can it slow down at the intersection. The truck turns on flashing lights and sends distress signals to other self-driving cars by radio. The driver of the car can see the flashing lights, but she does not receive the radio signal. She can also observe that the truck does not slow down. The driver of the car has two potential strategies to avoid a collision with the truck: to slow down or to accelerate.

The driver understands that one of these two strategies will succeed, but since she does not know the exact speed of the truck, she does not know which of the two strategies will succeed. Suppose that the collision could be avoided if the car accelerates, but the car driver decides to slow down. The vehicles collide. According to the principle of alternative possibilities, the driver of the car is responsible for the collision because she had a strategy to avoid the collision but did not use it.

It is not likely, however, that a court would find the driver of the car responsible for the accident. For example, the US Model Penal Code Institute (rint) distinguishes different forms of legal liability as different combinations of “guilty actions” and a “guilty mind”. The situation in our example falls under strict liability (pure “guilty actions” without an accompanying “guilty mind”). In many situations, strict liability does not lead to legal liability.

In this paper we propose a formal semantics of blameworthiness in strategic games with imperfect information. According to this semantics, an agent (or a coalition of agents) is blamable for a statement only if the statement is true and the agent knew how to prevent it. In our example, since the driver of the car does not know that she must accelerate in order to avoid the collision, she cannot be blamed for the collision.

Now, consider a similar traffic situation in which the car is a self-driving vehicle. The car receives the distress signal from the truck, which contains the truck’s exact speed. From this information, the car determines that it can avoid the collision if it accelerates. However, if the car slows down, then the vehicles collide and the self-driving car is blamable for the collision.

The main technical result of this paper is a bimodal logical system that describes the interplay between knowledge and blameworthiness of coalitions in strategic games with imperfect information.

## 2. Related Literature

Although the study of responsibility and blameworthiness has a long history in philosophy, the use of formal logical systems to capture these notions is a recent development. Xu proposed a complete logical system for reasoning about responsibility of individual agents in multiagent systems Xu (1998). His approach was extended to coalitions by Broersen, Herzig, and Troquard Broersen et al. (2009). The definition of responsibility in these works is different from ours. They assume that an agent or a coalition of agents is responsible for an outcome if the actions that they took unavoidably lead to the outcome. Thus, their definition is not based on the principle of alternative possibilities.

Halpern and Pearl gave several versions of a formal definition of causality between sets of variables using counterfactuals Halpern (2016). Lorini and Schwarzentruber Lorini and Schwarzentruber (2011) observed that a variation of this definition can be captured in STIT logic Belnap and Perloff (1990); Horty (2001); Horty and Belnap (1995); Horty and Pacuit (2017); Olkhovikov and Wansing (2018). They said that there is a counterfactual dependence between the actions of a coalition and an outcome if the outcome took place and the complement of the coalition had no strategy to prevent it. They also observed that many human emotions (regret, rejoice, disappointment, elation) can be expressed through a combination of this counterfactual modality and the knowledge modality.

The game-like setting of this paper closely resembles the semantics of Marc Pauly’s logic of coalition power Pauly (2001, 2002). His approach has been widely investigated in the literature Goranko (2001); van der Hoek and Wooldridge (2005); Borgo (2007); Sauro et al. (2006); Ågotnes et al. (2010, 2009); Belardinelli (2014); Goranko et al. (2013); Alechina et al. (2011); Galimullin and Alechina (2017); Goranko and Enqvist (2018); Naumov and Ros (2018). Logics of coalition power study modalities that express what a coalition can do. We modified Marc Pauly’s semantics to express what a coalition could have done Naumov and Tao (2018a). We axiomatized a logic that combines the statements “the outcome is true” and “the coalition could have prevented the outcome” into a single blameworthiness modality.

In this paper we replace “the coalition could have prevented the outcome” of Naumov and Tao (2018a) with “the coalition knew how it could have prevented the outcome”. The distinction between an agent having a strategy, knowing that the strategy exists, and knowing what the strategy is has been studied before. While Jamroga and Ågotnes talked about “knowledge to identify and execute a strategy” Jamroga and Ågotnes (2007), Jamroga and van der Hoek discussed the “difference between an agent knowing that he has a suitable strategy and knowing the strategy itself” Jamroga and van der Hoek (2004). Van Benthem called such strategies “uniform” van Benthem (2001). Broersen talked about “knowingly doing” Broersen (2008), while Broersen, Herzig, and Troquard discussed the modality “know they can do” Broersen et al. (2009). We used the term “executable strategy” Naumov and Tao (2017a). Wang talked about “knowing how” Wang (2015, 2016). The properties of know-how as a modality have been previously axiomatized in different settings Ågotnes and Alechina (2016); Naumov and Tao (2017a); Fervari et al. (2017); Naumov and Tao (2017b, 2018d, 2018c, 2018b); Wang (2015, 2016).

The axioms of the logical system proposed in this paper are very similar to the axioms in Naumov and Tao (2018a) for blameworthiness in games with perfect information, and so are the proofs of soundness of these axioms. The most important contribution of this paper is the proof of completeness, in which the construction from Naumov and Tao (2018a) is significantly modified to incorporate distributed knowledge. These modifications are discussed at the beginning of Section 8. The increment from Naumov and Tao (2018a) to the current paper is similar to the one from Marc Pauly’s original logic of coalition power Pauly (2001, 2002) to the more recent works on the interplay of knowledge and know-how modalities Ågotnes and Alechina (2016); Naumov and Tao (2017a); Fervari et al. (2017); Naumov and Tao (2017b, 2018d, 2018c, 2018b).

## 3. Outline

The paper is organized as follows: Section 4 presents the formal syntax and semantics of our logical system. Section 5 introduces our axioms and compares them to those in the related works. Section 6 gives examples of formal derivations in the proposed logical system. Sections 7 and 8 prove the soundness and the completeness of our system. Section 9 concludes with a discussion of future work.

## 4. Syntax and Semantics

In this paper we assume a fixed set of agents and a fixed set of propositional variables. By a coalition we mean an arbitrary subset of the set of agents.

###### Definition

The language of our logical system is the minimal set of formulae such that

1. it contains each propositional variable p,

2. it contains formulae ¬φ and φ→ψ for all formulae φ and ψ that it contains,

3. it contains formulae KCφ and BCφ for each coalition C and each formula φ that it contains.

In other words, the language is defined by the grammar:

 φ := p | ¬φ | φ→φ | KCφ | BCφ.

Formula KCφ is read as “coalition C distributively knew before the actions were taken that statement φ would be true” and formula BCφ is read as “coalition C is blamable for φ”.

Boolean connectives ∧, ∨, and ↔ as well as constants ⊤ and ⊥ are defined in the standard way. By formula K̄Cφ we mean ¬KC¬φ. For the disjunction of multiple formulae, we assume that parentheses are nested to the left. That is, formula χ1∨χ2∨χ3 is a shorthand for (χ1∨χ2)∨χ3. As usual, the empty disjunction is defined to be ⊥. For any two sets X and Y, by X^Y we denote the set of all functions from Y to X.

The formal semantics of the modalities KC and BC is defined in terms of models, which we call games. These are one-shot strategic games with imperfect information. We specify the actions of all agents, or a complete action profile, as a function from the set of all agents to the set of all actions.

###### Definition

A game is a tuple that consists of

1. a set of “initial states”,

2. an “indistinguishability” equivalence relation on the set of initial states for each agent,

3. a nonempty set of “actions”,

4. a set of “outcomes”,

5. a set of “plays”: an arbitrary set of triples consisting of an initial state, a complete action profile, and an outcome,

6. a function that maps each propositional variable into a set of plays.

In the introductory example, the set of initial states has two states, high and low, corresponding to the truck going at a high or a low speed. The driver of the regular car cannot distinguish these two states, while they can be distinguished by a self-driving version of the car. For the sake of simplicity, assume that there are only two actions that the car can take, slow-down and speed-up, and only two possible outcomes, collision and no collision. The vehicles collide if either the truck goes at a low speed and the car decides to slow down or the truck goes at a high speed and the car decides to speed up. In our case there is only one agent (the car), so a complete action profile can be described by giving just the action of this agent. We refer to the two complete action profiles in this situation simply as profile slow-down and profile speed-up.

The list of all possible scenarios (or “plays”) is given by the set

 P = {(high, speed-up, collision), (high, slow-down, no collision),
      (low, speed-up, no collision), (low, slow-down, collision)}.

Note that in our example an initial state and an action profile uniquely determine the outcome. In general, we allow nondeterministic games where this does not have to be true. We also do not require that, for any initial state and any complete action profile, there is at least one outcome. In other words, in certain situations we allow agents to terminate the game without reaching an outcome. This is a more general setting and it minimizes the list of axioms. If one wishes not to consider such games, an additional axiom should be added to the logical system without any major changes in the proof of the completeness.
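This know-how gap can be checked mechanically. The sketch below is our own illustrative encoding, not part of the paper's formalism (the state, action, and outcome names follow the example): it confirms that each initial state admits a collision-avoiding action, while no single action works in both states.

```python
# Plays of the example game: (initial state, car's action) -> outcome.
# The truck's speed is the initial state; the car is the only agent.
PLAYS = {
    ("high", "speed-up"): "collision",
    ("high", "slow-down"): "no collision",
    ("low", "speed-up"): "no collision",
    ("low", "slow-down"): "collision",
}
STATES = ["high", "low"]
ACTIONS = ["speed-up", "slow-down"]

# In every initial state, some action avoids the collision...
assert all(any(PLAYS[(s, a)] == "no collision" for a in ACTIONS)
           for s in STATES)

# ...but no single ("uniform") action avoids it in both states, so a driver
# who cannot tell the states apart does not know which strategy will work.
uniform = [a for a in ACTIONS
           if all(PLAYS[(s, a)] == "no collision" for s in STATES)]
print(uniform)  # []
```

This is exactly the sense in which the regular car's driver "has a strategy" in each state yet has no strategy she knows will work.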

Whether a statement is true or false depends not only on the outcome but also on the initial state of the game. Indeed, a coalition might have known how to prevent an outcome in one initial state but not in another. For this reason, we assume that all statements are true or false for a particular play of the game. For example, a propositional variable can stand for “the car slowed down and collided with the truck going at a low speed”. As a result, the function in the definition above maps propositional variables into sets of plays rather than sets of outcomes.

By an action profile of a coalition we mean an arbitrary function that assigns an action to each member of the coalition. If s1 and s2 are action profiles of two coalitions and C is any coalition contained in both of them, then we write s1 =C s2 to denote that s1(a) = s2(a) for each agent a ∈ C.

Next is the key definition of this paper. Its item 5 formally specifies blameworthiness using the principle of alternative possibilities. In order for a coalition to be blamable for a statement, not only must the statement be true and the coalition have had a strategy to prevent it, but this strategy should also work in all initial states that the coalition cannot distinguish from the current one. In other words, the coalition should have known the strategy.

###### Definition

For any game, any formula φ, and any play of the game, the satisfiability relation is defined recursively as follows:

1. a play satisfies a propositional variable p if the play belongs to the set of plays that the valuation function assigns to p,

2. a play satisfies ¬φ if it does not satisfy φ,

3. a play satisfies φ→ψ if it does not satisfy φ or it satisfies ψ,

4. a play satisfies KCφ if φ is satisfied by every play whose initial state cannot be distinguished from the current initial state by coalition C (that is, by the intersection of the indistinguishability relations of the members of C),

5. a play satisfies BCφ if it satisfies φ and there is an action profile of coalition C such that no play satisfies φ whose initial state is indistinguishable from the current initial state by coalition C and whose complete action profile agrees with this profile on C.

Since the modality KC represents a priori (before the actions) knowledge of coalition C, only the initial states of the two plays are required to be indistinguishable in item (4) of Definition 4.

Note that in part 5 of the above definition we do not assume that the coalition is a minimal coalition that knew how to prevent the outcome. This is different from the definition of blameworthiness in Halpern (2017). Our approach is consistent with how the word “blame” is often used in English. For example, the sentence “Millennials being blamed for decline of American cheese” Gant (2018) does not imply that no one in the millennial generation likes American cheese.
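Item 5 of the definition can be prototyped directly. The sketch below is our own encoding with our own function names, not the paper's notation; it checks blameworthiness of the single-agent car in the traffic example under the two indistinguishability relations discussed in the introduction.

```python
# A play is (initial state, action of the only agent, outcome).
PLAYS = [
    ("high", "speed-up", "collision"),
    ("high", "slow-down", "no collision"),
    ("low", "speed-up", "no collision"),
    ("low", "slow-down", "collision"),
]
ACTIONS = ["speed-up", "slow-down"]

def blamable(play, indist, phi):
    """Item 5: the play satisfies phi, and some action of the car prevents
    phi in every play whose initial state the car cannot distinguish."""
    state = play[0]
    if not phi(play):
        return False
    return any(
        all(not phi(p) for p in PLAYS if indist(p[0], state) and p[1] == a)
        for a in ACTIONS)

collision = lambda p: p[2] == "collision"
crash = ("low", "slow-down", "collision")

regular = blamable(crash, lambda s, t: True, collision)       # states look alike
self_driving = blamable(crash, lambda s, t: s == t, collision)  # states differ
print(regular, self_driving)  # False True
```

The regular car is not blamable because no single action prevents the collision across both indistinguishable states; the self-driving car, which distinguishes the states, is blamable.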

## 5. Axioms

In addition to the propositional tautologies in the language of our system, our logical system contains the following axioms.

1. Truth: KCφ → φ and BCφ → φ,

2. Distributivity: KC(φ→ψ) → (KCφ → KCψ),

3. Negative Introspection: ¬KCφ → KC¬KCφ,

4. Monotonicity: KCφ → KDφ and BCφ → BDφ,
where C ⊆ D,

5. None to Blame: ¬B∅φ,

6. Joint Responsibility: if C ∩ D = ∅, then
K̄CBCφ ∧ K̄DBDψ → (φ∨ψ → BC∪D(φ∨ψ)),

7. Blame for Known Cause:
KC(φ→ψ) → (BCψ → (φ → BCφ)),

8. Knowledge of Fairness: BCφ → KC(φ → BCφ).

We write ⊢ φ if formula φ is provable from the axioms of our system using the Modus Ponens and the Necessitation inference rules:

 φ, φ→ψ          φ
 ───────        ────
    ψ           KCφ

We write X ⊢ φ if formula φ is provable from the theorems of our logical system and an additional set X of axioms using only the Modus Ponens inference rule.

The Truth, the Distributivity, the Negative Introspection, and the Monotonicity axioms for epistemic modality are the standard S5 axioms from the logic of distributed knowledge. The Truth axiom for blameworthiness modality states that a coalition could only be blamed for something true. The Monotonicity axiom for the blameworthiness modality states that if a part of a coalition is blamable for something, then the whole coalition is also blamable for the same thing. The None to Blame axiom says that an empty coalition can be blamed for nothing.

The remaining three axioms describe the interplay between knowledge and blameworthiness modalities.

The Joint Responsibility axiom says that if a coalition C cannot exclude the possibility of being blamable for φ, a disjoint coalition D cannot exclude the possibility of being blamable for ψ, and the disjunction φ∨ψ is true, then the joint coalition C∪D is blamable for the disjunction. This axiom resembles Xu’s axiom for the independence of individual agents Xu (1998),

 ¯¯¯¯NBa1φ1∧⋯∧¯¯¯¯NBanφn→¯¯¯¯N(Ba1φ1∧⋯∧Banφn),

where the modality N̄ is an abbreviation for ¬N¬ and the formula Nφ stands for “formula φ is universally true in the given model”. Broersen, Herzig, and Troquard Broersen et al. (2009) captured the independence of disjoint coalitions C and D in their Lemma 17:

 ¯¯¯¯NBCφ∧¯¯¯¯NBDψ→¯¯¯¯N(BCφ∧BDψ).

In spite of a certain similarity, the definitions of responsibility used in Xu (1998) and Broersen et al. (2009) do not assume the principle of alternative possibilities. The Joint Responsibility axiom is also similar to Marc Pauly’s Cooperation axiom for the logic of coalitional power Pauly (2001, 2002):

 SCφ∧SDψ→SC∪D(φ∧ψ),

where coalitions C and D are disjoint and SCφ stands for “coalition C has a strategy to achieve φ”. Finally, the Joint Responsibility axiom in this paper is a generalization of the Joint Responsibility axiom for games with perfect information Naumov and Tao (2018a):

 ¯¯¯¯NBCφ∧¯¯¯¯NBDψ→(φ∨ψ→BC∪D(φ∨ψ)),

where coalitions C and D are disjoint.

We proposed the Blame for Cause axiom for games with perfect information Naumov and Tao (2018a):

 N(φ→ψ)→(BCψ→(φ→BCφ)).

This axiom is interpreted as “if formula φ universally implies ψ (informally, φ is a cause of ψ), then any coalition blamable for ψ should also be blamable for the cause φ as long as φ is actually true”. The Blame for Known Cause axiom generalizes this principle to games with imperfect information.

The Knowledge of Fairness axiom also goes back to one of the axioms for games with perfect information. The Fairness axiom

 BCφ→N(φ→BCφ)

states that “if a coalition is blamed for φ, then it should be blamed for φ whenever φ is true” Naumov and Tao (2018a). The Knowledge of Fairness axiom states that if a coalition is blamable for φ in an imperfect information game, then it knows that it is blamable for φ whenever φ is true.
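Soundness of these axioms is proved formally in Section 7, but on a model as small as the traffic game it can also be spot-checked by brute force. The sketch below is our own encoding (the K semantics follows item 4 of the definition, with the car as the only agent); it verifies the Truth and Knowledge of Fairness axioms at every play and under both indistinguishability relations.

```python
# Brute-force check of two axioms on the traffic game (single-agent coalition).
PLAYS = [
    ("high", "speed-up", "collision"),
    ("high", "slow-down", "no collision"),
    ("low", "speed-up", "no collision"),
    ("low", "slow-down", "collision"),
]
ACTIONS = ["speed-up", "slow-down"]

def sat_B(play, indist, phi):
    # Item 5 of the definition: phi holds and a known strategy prevents phi.
    return phi(play) and any(
        all(not phi(p) for p in PLAYS if indist(p[0], play[0]) and p[1] == a)
        for a in ACTIONS)

def sat_K(play, indist, pred):
    # Item 4: pred holds in all plays with indistinguishable initial states.
    return all(pred(p) for p in PLAYS if indist(p[0], play[0]))

collision = lambda p: p[2] == "collision"
for indist in (lambda s, t: True, lambda s, t: s == t):
    for play in PLAYS:
        b = sat_B(play, indist, collision)
        assert not b or collision(play)      # Truth: B phi implies phi
        fair = sat_K(play, indist, lambda p:  # K(phi -> B phi)
                     not collision(p) or sat_B(p, indist, collision))
        assert not b or fair                 # Knowledge of Fairness
print("Truth and Knowledge of Fairness hold at every play")
```

Such exhaustive checks work only for finite toy games, of course; the lemmas in Section 7 establish validity in general.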

## 6. Examples of Derivations

We prove soundness of the axioms of our logical system in the next section. Here we prove several lemmas about our formal system that will be used later in the proof of the completeness. All of these lemmas, stated with the universal modality N in place of the epistemic modality KC, originally appeared in Naumov and Tao (2018a).

###### Lemma

⊢ K̄CBCφ → (φ → BCφ).

###### Proof.

Note that ⊢ BCφ → KC(φ→BCφ) by the Knowledge of Fairness axiom. Thus, ⊢ ¬KC(φ→BCφ) → ¬BCφ by the law of contrapositive. Then, ⊢ KC(¬KC(φ→BCφ) → ¬BCφ) by the Necessitation inference rule. Hence, by the Distributivity axiom and the Modus Ponens inference rule,

 ⊢KC¬KC(φ→BCφ)→KC¬BCφ.

At the same time, by the Negative Introspection axiom:

 ⊢¬KC(φ→BCφ)→KC¬KC(φ→BCφ).

Then, by the laws of propositional reasoning,

 ⊢¬KC(φ→BCφ)→KC¬BCφ.

Thus, by the law of contrapositive,

 ⊢¬KC¬BCφ→KC(φ→BCφ).

Since KC(φ→BCφ) → (φ→BCφ) is an instance of the Truth axiom, by propositional reasoning,

 ⊢¬KC¬BCφ→(φ→BCφ).

Therefore, ⊢ K̄CBCφ → (φ → BCφ) by the definition of the modality K̄C. ∎

###### Lemma

If ⊢ φ↔ψ, then ⊢ BCφ → BCψ.

###### Proof.

By the Blame for Known Cause axiom,

 ⊢KC(ψ→φ)→(BCφ→(ψ→BCψ)).

The assumption of the lemma implies ⊢ ψ→φ by the laws of propositional reasoning. Hence, ⊢ KC(ψ→φ) by the Necessitation inference rule. Thus, by the Modus Ponens rule, ⊢ BCφ → (ψ → BCψ). Then, by the laws of propositional reasoning,

 ⊢(BCφ→ψ)→(BCφ→BCψ). (1)

Observe that ⊢ BCφ → φ by the Truth axiom. Also, ⊢ φ→ψ by the assumption of the lemma. Then, by the laws of propositional reasoning, ⊢ BCφ → ψ. Therefore, ⊢ BCφ → BCψ by the Modus Ponens inference rule from statement (1). ∎

###### Lemma

BCφ ⊢ K̄CBCφ.

###### Proof.

By the Truth axiom, ⊢ KC¬BCφ → ¬BCφ. Hence, by the law of contrapositive, ⊢ BCφ → ¬KC¬BCφ. Thus, ⊢ BCφ → K̄CBCφ by the definition of the modality K̄C. Therefore, BCφ ⊢ K̄CBCφ by the Modus Ponens inference rule. ∎

The next lemma generalizes the Joint Responsibility axiom from two coalitions to multiple coalitions.

###### Lemma

For any integer n ≥ 0 and any pairwise disjoint sets D1, …, Dn,

 {¯¯¯¯KDiBDiχi}ni=1,χ1∨⋯∨χn⊢BD1∪⋯∪Dn(χ1∨⋯∨χn).
###### Proof.

We prove the lemma by induction on n. If n = 0, then the disjunction χ1∨⋯∨χn is the Boolean constant false ⊥. Hence, the statement of the lemma, ⊥ ⊢ B∅⊥, is provable in the propositional logic.

Next, assume that n = 1. Then, from Lemma 6 using the Modus Ponens rule twice, we get K̄D1BD1χ1, χ1 ⊢ BD1χ1.

Assume now that n ≥ 2. By the Joint Responsibility axiom and the Modus Ponens inference rule,

 ¯¯¯¯KD1∪⋯∪Dn−1BD1∪⋯∪Dn−1(χ1∨⋯∨χn−1),¯¯¯¯KDnBDnχn, χ1∨⋯∨χn−1∨χn⊢BD1∪⋯∪Dn−1∪Dn(χ1∨⋯∨χn−1∨χn).

Hence, by Lemma 6,

 BD1∪⋯∪Dn−1(χ1∨⋯∨χn−1),¯¯¯¯KDnBDnχn,χ1∨⋯∨χn−1∨χn ⊢BD1∪⋯∪Dn−1∪Dn(χ1∨⋯∨χn−1∨χn).

At the same time, by the induction hypothesis,

 {¯¯¯¯KDiBDiχi}n−1i=1,χ1∨⋯∨χn−1⊢BD1∪⋯∪Dn−1(χ1∨⋯∨χn−1).

Thus,

 {¯¯¯¯KDiBDiχi}ni=1,χ1∨⋯∨χn−1,χ1∨⋯∨χn−1∨χn ⊢BD1∪⋯∪Dn−1∪Dn(χ1∨⋯∨χn−1∨χn).

Note that χ1∨⋯∨χn−1 → χ1∨⋯∨χn−1∨χn is provable in the propositional logic. Thus,

 {¯¯¯¯KDiBDiχi}ni=1,χ1∨⋯∨χn−1 ⊢BD1∪⋯∪Dn−1∪Dn(χ1∨⋯∨χn−1∨χn). (2)

Similarly, by the Joint Responsibility axiom and the Modus Ponens inference rule,

 ¯¯¯¯KD1BD1χ1,¯¯¯¯KD2∪⋯∪DnBD2∪⋯∪Dn(χ2∨⋯∨χn), χ1∨(χ2∨⋯∨χn)⊢BD1∪⋯∪Dn−1∪Dn(χ1∨(χ2∨⋯∨χn)).

Because formula χ1∨(χ2∨⋯∨χn) ↔ χ1∨χ2∨⋯∨χn is provable in the propositional logic, by Lemma 6,

 ¯¯¯¯KD1BD1χ1,¯¯¯¯KD2∪⋯∪DnBD2∪⋯∪Dn(χ2∨⋯∨χn), χ1∨χ2∨⋯∨χn⊢BD1∪⋯∪Dn−1∪Dn(χ1∨χ2∨⋯∨χn).

Hence, by Lemma 6,

 ¯¯¯¯KD1BD1χ1,BD2∪⋯∪Dn(χ2∨⋯∨χn),χ1∨χ2∨⋯∨χn ⊢BD1∪⋯∪Dn−1∪Dn(χ1∨χ2∨⋯∨χn).

At the same time, by the induction hypothesis,

 {¯¯¯¯KDiBDiχi}ni=2,χ2∨⋯∨χn⊢BD2∪⋯∪Dn(χ2∨⋯∨χn).

Thus,

 {¯¯¯¯KDiBDiχi}ni=1,χ2∨⋯∨χn,χ1∨χ2∨⋯∨χn ⊢BD1∪D2∪⋯∪Dn(χ1∨χ2∨⋯∨χn).

Note that χ2∨⋯∨χn → χ1∨χ2∨⋯∨χn is provable in the propositional logic. Thus,

 {¯¯¯¯KDiBDiχi}ni=1,χ2∨⋯∨χn ⊢BD1∪⋯∪Dn−1∪Dn(χ1∨χ2∨⋯∨χn). (3)

Finally, note that the following statement is provable in the propositional logic for n ≥ 2,

 ⊢χ1∨⋯∨χn→(χ1∨⋯∨χn−1)∨(χ2∨⋯∨χn).

Therefore, from statement (2) and statement (3)

 {¯¯¯¯KDiBDiχi}ni=1,χ1∨⋯∨χn⊢BD1∪⋯∪Dn(χ1∨⋯∨χn).

by the laws of propositional reasoning. ∎

###### Lemma

If φ1, …, φn ⊢ ψ, then KCφ1, …, KCφn ⊢ KCψ.

###### Proof.

By the deduction lemma applied n times, the assumption of the lemma implies that ⊢ φ1→(φ2→…(φn→ψ)…). Thus, by the Necessitation inference rule,

 ⊢KC(φ1→(φ2→…(φn→ψ)…)).

Hence, by the Distributivity axiom and the Modus Ponens rule,

 ⊢KCφ1→KC(φ2→…(φn→ψ)…).

Then, again by the Modus Ponens rule,

 KCφ1⊢KC(φ2→…(φn→ψ)…).

Therefore, KCφ1, …, KCφn ⊢ KCψ by applying the previous two steps n−1 more times. ∎

The following lemma states a well-known principle in epistemic logic. The proof of this principle can be found, for example, in Naumov and Tao (2018d).

###### Lemma (Positive Introspection)

⊢ KCφ → KCKCφ. ∎

Our last example rephrases Lemma 6 into the form that is used in the proof of the completeness.

###### Lemma

For any n ≥ 0 and any pairwise disjoint sets D1, …, Dn ⊆ C,

 {¯¯¯¯KDiBDiχi}ni=1,KC(φ→χ1∨⋯∨χn)⊢KC(φ→BCφ).
###### Proof.

By Lemma 6,

 {¯¯¯¯KDiBDiχi}ni=1,χ1∨⋯∨χn⊢BD1∪⋯∪Dn(χ1∨⋯∨χn).

Hence, by the Monotonicity axiom,

 {¯¯¯¯KDiBDiχi}ni=1,χ1∨⋯∨χn⊢BC(χ1∨⋯∨χn).

Thus, by the Modus Ponens inference rule

 {¯¯¯¯KDiBDiχi}ni=1,φ,φ→χ1∨⋯∨χn⊢BC(χ1∨⋯∨χn).

By the Truth axiom and the Modus Ponens inference rule,

 {¯¯¯¯KDiBDiχi}ni=1,φ,KC(φ→χ1∨⋯∨χn)⊢BC(χ1∨⋯∨χn).

The following formula is an instance of the Blame for Known Cause axiom: KC(φ→χ1∨⋯∨χn) → (BC(χ1∨⋯∨χn) → (φ → BCφ)). Hence, by the Modus Ponens inference rule applied twice,

 {¯¯¯¯KDiBDiχi}ni=1,φ,KC(φ→χ1∨⋯∨χn)⊢φ→BCφ.

By the Modus Ponens inference rule,

 {¯¯¯¯KDiBDiχi}ni=1,φ,KC(φ→χ1∨⋯∨χn)⊢BCφ.

By the deduction lemma,

 {¯¯¯¯KDiBDiχi}ni=1,KC(φ→χ1∨⋯∨χn)⊢φ→BCφ.

By Lemma 6,

 {KC¯¯¯¯KDiBDiχi}ni=1,KCKC(φ→χ1∨⋯∨χn)⊢KC(φ→BCφ).

By the Monotonicity axiom, the Modus Ponens inference rule, and the assumption Di ⊆ C,

 {KDi¯¯¯¯KDiBDiχi}ni=1,KCKC(φ→χ1∨⋯∨χn)⊢KC(φ→BCφ).

By the definition of the modality K̄, the Negative Introspection axiom, and the Modus Ponens inference rule,

 {¯¯¯¯KDiBDiχi}ni=1,KCKC(φ→χ1∨⋯∨χn)⊢KC(φ→BCφ).

Therefore, by Lemma 6 and the Modus Ponens inference rule, the statement of the lemma follows. ∎

## 7. Soundness

The epistemic part of the Truth axiom as well as the Distributivity, the Negative Introspection, and the Monotonicity axioms are the standard axioms of epistemic logic S5 for distributed knowledge. Their soundness follows in the standard way from the assumption that the indistinguishability relation is an equivalence relation Fagin et al. (1995). The soundness of the blameworthiness part of the Truth axiom and of the Monotonicity axiom immediately follows from Definition 4. In this section, we prove the soundness of each of the remaining axioms as a separate lemma. In these lemmas, C and D are coalitions, φ and ψ are formulae, and the satisfiability statements refer to an arbitrary play of a game.

###### Lemma

No play of any game satisfies formula B∅φ.

###### Proof.

Assume that some play satisfies B∅φ. Hence, by Definition 4, the play satisfies φ and there is an action profile of the empty coalition such that each play whose initial state is indistinguishable by the empty coalition from the current one and whose complete action profile agrees with this profile on the empty coalition does not satisfy φ.

Both of these conditions hold for the current play itself: its initial state is indistinguishable from itself, and the agreement condition on the empty coalition is vacuous. Thus, the current play does not satisfy φ, which leads to a contradiction. ∎

###### Lemma

If C ∩ D = ∅ and a play satisfies K̄CBCφ, K̄DBDψ, and φ∨ψ, then the play satisfies BC∪D(φ∨ψ).

###### Proof.

Suppose that the play satisfies K̄CBCφ and K̄DBDψ. Hence, by Definition 4 and the definition of the modality K̄, there are plays, with initial states indistinguishable from the current one by coalitions C and D respectively, that satisfy BCφ and BDψ.

The first of these statements, by Definition 4, implies that there is an action profile s1 of coalition C such that for each play, if its initial state is indistinguishable by C from the current one and its complete action profile agrees with s1 on C, then the play does not satisfy φ.

Similarly, the second statement, by Definition 4, implies that there is an action profile s2 of coalition D such that for each play, if its initial state is indistinguishable by D from the current one and its complete action profile agrees with s2 on D, then the play does not satisfy ψ.

Consider the action profile s of coalition C ∪ D such that

 s(a)={s1(a), if a∈C,s2(a), if a∈D.

The action profile s is well-defined because the sets C and D are disjoint by the assumption of the lemma.

The choice of the action profiles s1, s2, and s implies that for each play, if its initial state is indistinguishable by C ∪ D from the current one and its complete action profile agrees with s on C ∪ D, then the play satisfies neither φ nor ψ. Thus, any such play does not satisfy φ∨ψ. Therefore, the current play satisfies BC∪D(φ∨ψ) by Definition 4 and the assumption of the lemma that it satisfies φ∨ψ. ∎
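The combined profile in this proof is simply the union of two functions with disjoint domains. In code (an illustrative dict-based sketch with our own names, not part of the paper), it is a dictionary merge guarded by a disjointness check:

```python
def combine(s1: dict, s2: dict) -> dict:
    """Build the action profile of C ∪ D from profiles s1 of C and s2 of D.
    Well-defined only because the two coalitions are disjoint."""
    assert not set(s1) & set(s2), "coalitions must be disjoint"
    return {**s1, **s2}

# Example: coalition {a} picks "left", a disjoint coalition {b} picks "right".
s = combine({"a": "left"}, {"b": "right"})
print(s)  # {'a': 'left', 'b': 'right'}
```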

###### Lemma

If a play satisfies KC(φ→ψ), BCψ, and φ, then it satisfies BCφ.

###### Proof.

By Definition 4, the assumption that the play satisfies KC(φ→ψ) implies that each play of the game whose initial state is indistinguishable by C from the current one satisfies φ→ψ.

By Definition 4, the assumption that the play satisfies BCψ implies that there is an action profile of coalition C such that for each play, if its initial state is indistinguishable by C from the current one and its complete action profile agrees with this profile on C, then the play does not satisfy ψ.

Hence, for each play, if its initial state is indistinguishable by C from the current one and its complete action profile agrees with this profile on C, then the play does not satisfy φ. Therefore, the current play satisfies BCφ by Definition 4 and the assumption of the lemma that it satisfies φ. ∎

###### Lemma

If a play satisfies BCφ, then it satisfies KC(φ → BCφ).

###### Proof.

By Definition 4, the assumption that the play satisfies BCφ implies that there is an action profile s of coalition C such that for each play, if its initial state is indistinguishable by C from the current one and its complete action profile agrees with s on C, then the play does not satisfy φ.

Consider any play whose initial state is indistinguishable by C from the current one and that satisfies φ. By Definition 4, it suffices to show that this play satisfies BCφ.

Consider any further play whose initial state is indistinguishable by C from that of the second play and whose complete action profile agrees with s on C. Then, since the indistinguishability relation is an equivalence relation, the initial state of this further play is also indistinguishable by C from the current one. Thus, the further play does not satisfy φ by the choice of the action profile s. Therefore, the second play satisfies BCφ by Definition 4 and the assumption that it satisfies φ. ∎

## 8. Completeness

In this section we prove the completeness of our logical system. The standard completeness proof for the epistemic logic of individual knowledge defines states as maximal consistent sets. Similarly, in Naumov and Tao (2018a), we defined outcomes of the game as maximal consistent sets. In the case of individual knowledge, two states are usually defined to be indistinguishable by an agent if they contain the same knowledge formulae of that agent. Unfortunately, this approach does not work for distributed knowledge. Indeed, two maximal consistent sets that contain the same knowledge formulae of an agent a and the same knowledge formulae of an agent b might contain different formulae for the distributed knowledge of the coalition {a, b}. Such two states would be indistinguishable to agent a and to agent b; however, the distributed knowledge of agents a and b in these states would be different. This situation is inconsistent with Definition 4. To solve this problem we define outcomes not as maximal consistent sets of formulae, but as nodes of a tree. This approach has been previously used to prove completeness of several logics for the know-how modality Naumov and Tao (2017a, b, 2018d, 2018c, 2018b).

We start the proof of the completeness by defining the canonical game