# Intelligence in Strategic Games

The article considers strategies of coalitions that are based on intelligence information about moves of some of the other agents. The main technical result is a sound and complete logical system that describes the interplay between coalition power modality with intelligence and distributed knowledge modality in games with imperfect information.


## 1 Introduction

The Battle of the Atlantic was a classic example of the matching pennies game. British (and American) admirals chose the routes of allied convoys, and the Germans picked the routes of their U-boats. If the trajectories crossed, the Germans scored a win; if not, the allies did. Neither of the players appeared to have a strategy that would guarantee victory.
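The matching-pennies structure of this situation can be made concrete in a small sketch (our own illustration, not from the article; the route names and the two-route simplification are hypothetical):

```python
# The convoy-routing situation as a matching-pennies game (illustrative sketch).
ROUTES = ["north", "south"]

def outcome(allied_route, uboat_route):
    """The Germans win iff the trajectories cross, i.e., the routes match."""
    return "germans" if allied_route == uboat_route else "allies"

# Neither side has a pure strategy guaranteeing a win: every allied choice
# loses to some German choice, and vice versa.
assert all(any(outcome(a, g) == "germans" for g in ROUTES) for a in ROUTES)
assert all(any(outcome(a, g) == "allies" for a in ROUTES) for g in ROUTES)
```

With intelligence about the opponent's move, however, the second mover trivially wins, which is exactly the asymmetry the codebreaking history below illustrates.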

The truth, however, was that during most of the battle one of the sides had exactly such a strategy. First, it was the British who broke the German Enigma cipher in the summer of 1941. Although the Germans did not know about the British success, they changed the codebook and added a fourth wheel to Enigma in February 1942, thus preventing the British from decoding German messages. The very next month, in March 1942, the German navy cryptography unit, B-Dienst, broke the allied code and got access to convoy route information. The Germans lost their ability to read allied communications in December 1942 due to a routine change in the allied codebook. The same month, the British became able to read German communications as a result of capturing a codebook from a U-boat in the Mediterranean. In March 1943, the Germans changed the codebook again and, unknowingly, disabled the British ability to read German messages. Simultaneously, the Germans caught up and started to decipher British transmissions again Budiansky (2002); Showell (2003).

At almost any moment during these two years, one of the sides was able to read the communications of the other. However, neither of them was able to figure out that its own code was insecure, because the two sides were never able to read each other's messages at the same time and thus notice that the other side knew more than it should have. Finally, in May 1943, with the help of the US Navy, the British cracked German messages while the Germans were still reading British ones. It was the first time the allies understood that their code was insecure. A new convoy cipher was immediately introduced, and the Germans were never able to break it again, while the allies continued reading Enigma-encrypted transmissions until the end of the war Budiansky (2002).

In this article we study coalition power in strategic games assuming that the coalition has intelligence information about the moves of all or some of its opponents. We write [C]_B φ if coalition C has a strategy to achieve outcome φ as long as the coalition knows what the move of each agent in set B will be. For example,

 [British]_{Germans}(Convoy is saved).

Modality [C]φ is the coalition power modality proposed by Marc Pauly Pauly (2001, 2002). He gave a sound and complete axiomatization of this modality in the case of perfect information strategic games. Various extensions of his logic have been studied before Goranko (2001); van der Hoek and Wooldridge (2005); Borgo (2007); Sauro et al. (2006); Ågotnes et al. (2010, 2009); Belardinelli (2014); Goranko et al. (2013); Goranko and Enqvist (2018). The strategic power modality with intelligence can be expressed in Strategy Logic Chatterjee et al. (2010); Mogavero et al. (2014) as

 ∀t_1 … ∀t_k ∃s_1 … ∃s_n (a_1, s_1) … (a_n, s_n)(i_1, t_1) … (i_k, t_k) X φ.

The literature on strategy logic covers model checking Berthon et al. (2017), synthesis Čermák et al. (2015), decidability Mogavero et al. (2012); Vardi et al. (2017), and bisimulation Belardinelli et al. (2018). We are not aware of any completeness results for a strategy logic with quantifiers over strategies. At the same time, our approach is different from the one in Alternating-time Temporal Logic with Explicit Strategies (ATLES) Walther et al. (2007). There, a modality denotes the existence of a strategy of a coalition for a fixed commitment of some of the other agents. Unlike ATLES, our modality [C]_B denotes the existence of a strategy of coalition C for any commitment of coalition B, as long as that commitment is known to C. Goranko and Ju proposed several versions of the strategic power with intelligence modality, gave formal semantics of these modalities, and discussed a matching notion of bisimulation Goranko and Ju (2019). They did not suggest any axioms for these modalities.

An important example of intelligence in strategic games comes from Stackelberg security games. These are two-player games between a defender and an intruder. The defender uses a mixed strategy to assign available resources to targets, and the intruder uses a pure strategy to attack one of the targets. The distinctive property of security games is the assumption that the intruder knows the probabilities with which the defender assigns resources to the different targets. The intruder uses this information to plan the attack that is likely to bring the most damage. In other words, it is assumed that the intruder has intelligence about the mixed strategy deployed by the defender. Security games have been used by the U.S. Transportation Security Administration, the U.S. Federal Air Marshal Service, the U.S. Coast Guard, and others Sinha et al. (2018).
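The intruder's use of intelligence about the defender's mixed strategy amounts to a best-response computation, which can be sketched as follows (an illustrative example of ours; the targets, values, and coverage probabilities are made up):

```python
# Hypothetical security game: the defender commits to a mixed strategy over
# targets; the intruder, knowing the coverage probabilities (the
# "intelligence"), attacks the target maximizing expected damage.
coverage = {"airport": 0.7, "port": 0.2, "bridge": 0.1}   # defender's mix
damage   = {"airport": 10,  "port": 6,   "bridge": 4}     # value if attack succeeds

def best_attack(coverage, damage):
    # expected damage = (1 - probability the target is covered) * value
    return max(damage, key=lambda t: (1 - coverage[t]) * damage[t])

print(best_attack(coverage, damage))  # → 'port' (0.8 * 6 = 4.8 beats 3.0 and 3.6)
```

Note that the most valuable target is not attacked: the defender's commitment, once known, redirects the attack to the least-covered profitable target.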

Recently, logics of coalition power were generalized to imperfect information games. Unlike in perfect information strategic games, the outcome of an imperfect information game might depend on the initial state of the game, which could be unknown to the players. For example, consider a hypothetical setting in which an allied convoy and a German U-boat have to choose between three routes from point A to point B: route 1, route 2, or route 3, see Figure 1. Let us furthermore assume that it is known to both sides that one of these routes is blocked by Russian naval mines. Although the mines are located along route 1, neither the allies nor the Germans know this. If the allies have access to intelligence about the German U-boats, then, in theory, they have a strategy to save the convoy. For example, if the Germans use route 2, then the allies can use route 3. However, since the allies do not know the location of the Russian mines, even after they receive the information about the German plans, they still would not know how to save the convoy.

It has been suggested in several recent works that, in the case of games with imperfect information, the strategic power modality in Marc Pauly's logic should be restricted to the existence of know-how strategies Ågotnes and Alechina (2016); Naumov and Tao (2017a); Fervari et al. (2017); Naumov and Tao (2018b, c, a). (Know-how strategies were studied before under different names. While Jamroga and Ågotnes talked about “knowledge to identify and execute a strategy” Jamroga and Ågotnes (2007), Jamroga and van der Hoek discussed the “difference between an agent knowing that he has a suitable strategy and knowing the strategy itself” Jamroga and van der Hoek (2004). Van Benthem called such strategies “uniform” van Benthem (2001). Wang gave a complete axiomatization of “knowing how” as a binary modality Wang (2015, 2016), but his logical system does not include the knowledge modality.) That is, modality [C]_B φ should stand for “the coalition has a strategy, it knows that it has a strategy, and it knows what the strategy is”. In this article we adopt this approach to strategic power with intelligence. For example, in the imperfect information setting depicted in Figure 1, after receiving the intelligence report, the British have a strategy and they know that they have a strategy, but they do not know what the strategy is:

 ¬[British]_{Germans}(Convoy is saved).

At the same time, since the Russians presumably know the location of their mines,

 [British, Russians]_{Germans}(Convoy is saved).

The main contribution of this article is a complete logical system that describes the interplay between the coalition power with intelligence modality and the distributed knowledge modality in an imperfect information setting. The most interesting axiom of our system is a generalized version of Marc Pauly's Pauly (2001, 2002) Cooperation axiom that connects the intelligence and coalition parameters of the modality [C]_B. Our proof of completeness is significantly different from the existing proofs of completeness for games with imperfect information Ågotnes and Alechina (2016); Naumov and Tao (2017a, 2018b, 2018c, 2018a). We highlight these differences at the beginning of Section 6.

## 2 Outline

The rest of the article is organized as follows. In the next section we introduce the syntax and the formal semantics of our logical system. In Section 4, we list the axioms and the inference rules of the system, compare them to related axioms in previous works, and give two examples of formal proofs in our system. In Sections 5 and 6 we prove the soundness and the completeness of our logical system, respectively. Section 7 concludes.

## 3 Syntax and Semantics

In this section we define the syntax and the semantics of our formal system. Throughout the article we assume a fixed set of propositional variables and a fixed set 𝒜 of agents. By a coalition we mean any finite subset of 𝒜. Finiteness of coalitions will be important for the proof of completeness.

###### Definition 1

Let Φ be the minimal set of formulae such that

1. p ∈ Φ for each propositional variable p,

2. ¬φ, φ → ψ ∈ Φ for all formulae φ, ψ ∈ Φ,

3. K_C φ ∈ Φ for each formula φ ∈ Φ and each coalition C,

4. [C]_B φ ∈ Φ for each formula φ ∈ Φ and all disjoint coalitions B and C.

In other words, the language of our logical system is defined by grammar:

 φ := p | ¬φ | φ → φ | K_C φ | [C]_B φ.

Formula K_C φ stands for “coalition C distributively knows φ” and formula [C]_B φ for “coalition C distributively knows a strategy to achieve φ as long as it gets intelligence on the actions of coalition B”.
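The grammar above can be rendered as a small abstract syntax (a sketch of ours; the class and field names are not the paper's):

```python
# The language φ ::= p | ¬φ | (φ → φ) | K_C φ | [C]_B φ as Python dataclasses.
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Var:
    name: str                  # propositional variable p

@dataclass(frozen=True)
class Not:
    sub: object                # ¬φ

@dataclass(frozen=True)
class Implies:
    left: object               # φ → ψ
    right: object

@dataclass(frozen=True)
class Knows:
    coalition: FrozenSet[str]  # K_C φ: coalition C distributively knows φ
    sub: object

@dataclass(frozen=True)
class Power:
    coalition: FrozenSet[str]  # [C]_B φ: C knows how to achieve φ ...
    intel: FrozenSet[str]      # ... given intelligence about B's moves
    sub: object

# [British]_{Germans}(Convoy is saved) from the introduction:
claim = Power(frozenset({"British"}), frozenset({"Germans"}), Var("saved"))
```

The disjointness requirement on B and C in part 4 of Definition 1 would be enforced by a well-formedness check rather than by the types themselves.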

For any sets X and Y, by X^Y we mean the set of all functions from Y to X.

###### Definition 2

A tuple (W, {∼_a}_{a∈𝒜}, Δ, M, π) is called a game if

1. W is a set of states,

2. ∼_a is an “indistinguishability” equivalence relation on set W for each agent a ∈ 𝒜,

3. Δ is a nonempty set, called the “domain of actions”,

4. relation M ⊆ W × Δ^𝒜 × W is an “aggregation mechanism”,

5. function π maps propositional variables to subsets of W.

Any element of set Δ^𝒜, that is, any function from the set of agents to the domain of actions, is called a complete action profile.

Figure 2 depicts a diagram of the Battle of the Atlantic game with imperfect information, as described in the introduction. For the sake of simplicity, we treat the British, the Germans, and the Russians as single agents, not groups of agents. The game has five states: 1, 2, 3, s, and d. States 1, 2, and 3 are the three “initial” states that correspond to the possible locations of the Russian mines along route 1, route 2, or route 3. Neither the British nor the Germans can distinguish these states, which is shown in the diagram by the labels on the dashed lines connecting these three states. The Russians know the location of the mines and thus can distinguish these states. The other two states are the “final” states s and d that describe whether the convoy made it safely (s) or was destroyed (d) by either a U-boat or a mine. The designation of some states as “initial” and others as “final” is specific to the Battle of the Atlantic game. In general, our Definition 2 does not distinguish between such states, and we allow games to take multiple consecutive transitions from one state to another.

The domain of actions in this game is {1, 2, 3}. For the British and the Germans, actions represent the choice of routes that they make for their convoys and U-boats, respectively. The Russians are passive players in this game: their action does not affect the outcome. Technically, a complete action profile is a function from the set of agents into the set {1, 2, 3}. Since there are only three players in the Battle of the Atlantic game, it is more convenient to represent such a function by a triple (b, g, r), where b is the action of the British, g is the action of the Germans, and r is the action of the Russians.

The mechanism of the Battle of the Atlantic game is captured by the directed edges in Figure 2 labeled by complete action profiles. Since the value r in a profile (b, g, r) does not affect the outcome, it is omitted in the diagram. For example, the directed edge from state 1 to state s is labeled with 23 and 32. This means that the mechanism contains the triples (1, (2, 3, r), s) and (1, (3, 2, r), s) for each r ∈ {1, 2, 3}, six triples in total.

The definition of a game that we use here is more general than the one used in Marc Pauly's original semantics of the logic of coalition power. Namely, we assume that the mechanism is a relation, not a function. On the one hand, this allows us to talk about nondeterministic games, where for a given initial state and complete action profile there might be more than one outcome. On the other hand, this also allows, for some combinations of initial state and complete action profile, there to be no outcome at all. In other words, we do not exclude games in which agents might, in some situations, have the ability to terminate the game without reaching an outcome. If needed, such games can be excluded and an additional axiom added to the logical system; the proof of completeness would remain mostly unchanged. We also introduce indistinguishability relations on states to capture imperfect information. We do this in the same way as the previously cited works on logics of coalition power with imperfect information.
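Definition 2 can be rendered as a small data structure (a sketch in our own notation; the field names are not the paper's). The point is that the mechanism is a relation, so a state/profile pair may have several successors or none:

```python
# A game per Definition 2, with the mechanism stored as a relation.
from dataclasses import dataclass

@dataclass
class Game:
    states: set       # the set of states
    agents: set       # the set of all agents
    indist: dict      # agent -> set of (state, state) pairs, each an equivalence relation
    actions: set      # nonempty domain of actions
    mechanism: set    # triples (state, profile, state); profile: frozenset of (agent, action) pairs
    valuation: dict   # propositional variable -> set of states

def successors(game, state, profile):
    """All possible outcomes of playing `profile` at `state`; may be empty."""
    return {v for (u, p, v) in game.mechanism if u == state and p == profile}
```

A functional mechanism, as in Pauly's original semantics, is the special case where `successors` always returns exactly one state.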

###### Definition 3

For any states w, u ∈ W and any coalition C, let w ∼_C u if w ∼_a u for each agent a ∈ C.

In particular, w ∼_∅ u for any two states w, u of the game.

###### Lemma 1

For any coalition C, relation ∼_C is an equivalence relation on set W. ∎

By an action profile of a coalition C we mean any function from set C to Δ. For any two functions f and g whose domains contain set C, we write f =_C g if f(a) = g(a) for each a ∈ C.

Next is the key definition of this article. Its part 5 gives the semantics of modality [C]_B φ. This part uses the condition w ∼_C u to capture the fact that the strategy succeeds in each state indistinguishable by coalition C from the current state w. In other words, the coalition knows that this strategy will succeed. Except for the addition of coalition B and its action profile β, this is essentially the same definition as the one used in Ågotnes and Alechina (2016); Naumov and Tao (2017a); Fervari et al. (2017); Naumov and Tao (2017b, 2018c, 2018b, 2018a).

###### Definition 4

For any game (W, {∼_a}_{a∈𝒜}, Δ, M, π), any state w ∈ W, and any formula φ ∈ Φ, let the satisfiability relation w ⊩ φ be defined as follows:

1. w ⊩ p if w ∈ π(p), where p is a propositional variable,

2. w ⊩ ¬φ if w ⊮ φ,

3. w ⊩ φ → ψ if w ⊮ φ or w ⊩ ψ,

4. w ⊩ K_C φ if u ⊩ φ for each state u ∈ W such that w ∼_C u,

5. w ⊩ [C]_B φ if for any action profile β of coalition B there is an action profile γ of coalition C such that for any complete action profile δ and any states u, u' ∈ W, if β =_B δ, γ =_C δ, w ∼_C u, and (u, δ, u') ∈ M, then u' ⊩ φ.

For example, for the game depicted in Figure 2,

 1 ⊩ [British, Russians]_{Germans}(Convoy is saved).

Indeed, the statement w ∼_{British, Russians} 1 is true for only one state w, namely state 1 itself. Then, for any action profile β of the single-member coalition {Germans} we can define the action profile γ as, for example,

 γ(a) = { 3, if a = British and β(Germans) = 2;  2, if a = British and β(Germans) = 3;  1, if a = Russians. }

In other words, if profile β assigns the Germans route 2, then profile γ assigns the British route 3, and vice versa. The assignment of an action to the Russians is not important. This way, no matter what the Germans' action is, the British convoy will avoid both the German U-boat and the Russian mines in the game that starts from state 1. At the same time,

 1 ⊩ ¬[British]_{Germans}(Convoy is saved).

because without the Russians the British cannot distinguish states 1, 2, and 3. In other words, w ∼_{British} 1 for each state w ∈ {1, 2, 3}. Thus, for each action profile β we would need a single action profile γ that brings the convoy to state s from any of the states 1, 2, and 3. Such a profile does not exist because, even if the British know where the German U-boat will be, there is no single uniform choice of route that avoids the Russian mines from all three indistinguishable states 1, 2, and 3.
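Part 5 of Definition 4 can be checked by brute force on this small game. The following sketch (our own code; the encoding of states, routes, and the indistinguishability relation follows the discussion above) verifies both claims about state 1:

```python
from itertools import product

AGENTS = ["British", "Germans", "Russians"]
ROUTES = [1, 2, 3]
INIT = [1, 2, 3]          # initial state k: the mines are on route k
SAVED, LOST = "s", "d"

def step(state, profile):
    """Mechanism: the convoy is saved iff the British route avoids
    both the German U-boat and the mines."""
    b, g = profile["British"], profile["Germans"]
    return SAVED if b != g and b != state else LOST

def indist(coalition, u, v):
    """u ~_C v: only the Russians can tell the initial states apart."""
    return u == v or (u in INIT and v in INIT and "Russians" not in coalition)

def profiles(agents):
    return [dict(zip(agents, acts)) for acts in product(ROUTES, repeat=len(agents))]

def has_power(w, C, B, goal):
    """w |= [C]_B goal, per item 5 of Definition 4 (brute force)."""
    others = [a for a in AGENTS if a not in C and a not in B]
    for beta in profiles(B):                   # intelligence: B's move is known
        if not any(all(step(u, {**beta, **gamma, **rest}) == goal
                       for u in INIT if indist(C, w, u)
                       for rest in profiles(others))
                   for gamma in profiles(C)):  # a uniform response by C
            return False
    return True

assert has_power(1, ["British", "Russians"], ["Germans"], SAVED)
assert not has_power(1, ["British"], ["Germans"], SAVED)
```

With the Russians in the coalition, the indistinguishability class of state 1 collapses to {1}, so a uniform winning response to each German route exists; without them the class is {1, 2, 3} and no single route works.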

## 4 Axioms

In addition to the propositional tautologies in language , our logical system consists of the following axioms:

1. Truth: K_C φ → φ,

2. Distributivity: K_C(φ → ψ) → (K_C φ → K_C ψ),

3. Negative Introspection: ¬K_C φ → K_C ¬K_C φ,

4. Epistemic Monotonicity: K_C φ → K_D φ, where C ⊆ D,

5. Strategic Introspection: [C]_B φ → K_C [C]_B φ,

6. Empty Coalition: K_∅ φ → [∅]_∅ φ,

7. Cooperation: [C]_B(φ → ψ) → ([D]_{B,C} φ → [C,D]_B ψ), where sets B, C, and D are pairwise disjoint,

8. Intelligence Monotonicity: [C]_B φ → [C]_{B'} φ, where B ⊆ B',

9. None to Analyze: [∅]_B φ → [∅]_∅ φ.

Note that in the Cooperation axiom above, and often throughout the rest of the article, we abbreviate [C ∪ D]_B φ as [C, D]_B φ. However, we keep writing C ∪ D where the abbreviated notation could be confusing.

The Truth, the Distributivity, the Negative Introspection, and the Epistemic Monotonicity axioms are the standard axioms of the epistemic logic of distributed knowledge Fagin et al. (1995). The Strategic Introspection axiom states that if a coalition has a “know-how” strategy, then it knows that it has such a strategy. A version of this axiom without intelligence was first introduced in Ågotnes and Alechina (2016). The Empty Coalition axiom says that if statement φ is satisfied in each state of the model, then the empty coalition has a strategy to achieve it. This axiom first appeared in Naumov and Tao (2017b). The Cooperation axiom for strategies without intelligence:

 [C](φ → ψ) → ([D]φ → [C, D]ψ),

where sets C and D are disjoint, was introduced in Pauly (2001, 2002). This is the signature axiom that appears in all subsequent works on logics of coalition power. The version of this axiom with intelligence is one of the key contributions of the current article. Our version states that if coalition C knows how to achieve φ → ψ assuming it has intelligence about the actions of coalition B, and coalition D knows how to achieve φ assuming it has intelligence about the actions of coalitions B and C, then coalitions C and D together know how to achieve ψ if they have intelligence about the actions of coalition B. We prove the soundness of this axiom in Section 5. The remaining two axioms are original to this article. The Intelligence Monotonicity axiom states that if a coalition has a strategy based on intelligence about the actions of coalition B, then the coalition has such a strategy based on intelligence about any larger coalition. The other form of monotonicity for the modality [C]_B, monotonicity in the coalition C, is also true. It is not listed among our axioms because it is provable in our system, see Lemma 2. The None to Analyze axiom says that if there is no one to interpret the intelligence information about coalition B, then this intelligence might as well not exist.

We write ⊢ φ if formula φ is provable from the above axioms using the Modus Ponens, the Epistemic Necessitation, and the Strategic Necessitation inference rules:

 (φ, φ → ψ) / ψ,  φ / K_C φ,  φ / [C]_B φ.

We write X ⊢ φ if formula φ is provable from the theorems of our logical system and an additional set of axioms X using only the Modus Ponens inference rule. Note that if set X is empty, then statement X ⊢ φ is equivalent to ⊢ φ. We say that set X is consistent if X ⊬ ⊥.

The next lemma gives an example of a formal proof in our logical system. This example will be used later in the proof of the completeness.

###### Lemma 2

⊢ [C]_B φ → [C, D]_B φ, where set D is disjoint from both B and C.

Proof. Formula φ → φ is a tautology. Thus, ⊢ [D]_B(φ → φ) by the Strategic Necessitation inference rule. Note that the formula [D]_B(φ → φ) → ([C]_{B,D} φ → [D, C]_B φ) is an instance of the Cooperation axiom. Thus, ⊢ [C]_{B,D} φ → [D, C]_B φ by the Modus Ponens inference rule. Note also that ⊢ [C]_B φ → [C]_{B,D} φ by the Intelligence Monotonicity axiom, which applies because of the disjointness assumption of the lemma. Therefore, ⊢ [C]_B φ → [C, D]_B φ.

The following lemma states the well-known Positive Introspection principle for the distributed knowledge.

###### Lemma 3

⊢ K_C φ → K_C K_C φ.

Proof. Formula K_C ¬K_C φ → ¬K_C φ is an instance of the Truth axiom. Thus, ⊢ K_C φ → ¬K_C ¬K_C φ by contraposition. Hence, taking into account the following instance of the Negative Introspection axiom: ¬K_C ¬K_C φ → K_C ¬K_C ¬K_C φ, we have

 ⊢ K_C φ → K_C ¬K_C ¬K_C φ. (1)

At the same time, ¬K_C φ → K_C ¬K_C φ is an instance of the Negative Introspection axiom. Thus, ⊢ ¬K_C ¬K_C φ → K_C φ by the law of contrapositive in propositional logic. Hence, by the Epistemic Necessitation inference rule, ⊢ K_C(¬K_C ¬K_C φ → K_C φ). Thus, by the Distributivity axiom and the Modus Ponens inference rule, ⊢ K_C ¬K_C ¬K_C φ → K_C K_C φ. The latter, together with statement (1), implies the statement of the lemma by propositional reasoning.

We conclude this section by stating the two standard lemmas about our deduction system. These lemmas will be used later in the proof of the completeness.

###### Lemma 4 (deduction)

If X, φ ⊢ ψ, then X ⊢ φ → ψ.

Proof. Since the relation X ⊢ ψ refers to provability without the use of the Epistemic Necessitation and the Strategic Necessitation inference rules, the standard proof of the deduction lemma for propositional logic (Mendelson, 2009, Proposition 1.9) applies to our system as well.

###### Lemma 5 (Lindenbaum)

Any consistent set of formulae can be extended to a maximal consistent set of formulae.

Proof. The standard proof of Lindenbaum’s lemma (Mendelson, 2009, Proposition 2.14) applies here too.

## 5 Soundness

In this section we prove the soundness of the axioms of our logical system with respect to the semantics given in Section 3.

###### Theorem 1 (soundness)

If ⊢ φ, then w ⊩ φ for each state w of each game.

As usual, the soundness of the Truth, the Distributivity, the Negative Introspection, and the Epistemic Monotonicity axioms follows from the assumption that each relation ∼_a is an equivalence relation Fagin et al. (1995). Below we prove the soundness of each of the remaining axioms as a separate lemma.

###### Lemma 6

If w ⊩ [C]_B φ, then w ⊩ K_C [C]_B φ.

Proof. Consider any state v such that w ∼_C v. By Definition 4, it suffices to show that v ⊩ [C]_B φ. Indeed, consider any action profile β of coalition B. By the same Definition 4, it suffices to show that there is an action profile γ of coalition C such that for any complete action profile δ and any states u, u', if β =_B δ, γ =_C δ, v ∼_C u, and (u, δ, u') ∈ M, then u' ⊩ φ.

Since w ∼_C v, by Lemma 1 the condition v ∼_C u is equivalent to the condition w ∼_C u. Thus, it suffices to show that there is an action profile γ of coalition C such that for any complete action profile δ and any states u, u', if β =_B δ, γ =_C δ, w ∼_C u, and (u, δ, u') ∈ M, then u' ⊩ φ. The last statement is true by Definition 4 and the assumption w ⊩ [C]_B φ.

###### Lemma 7

If w ⊩ K_∅ φ, then w ⊩ [∅]_∅ φ.

Proof. Let β be the action profile of the empty coalition (such an action profile is unique, but this is not important for our proof). By Definition 4, it suffices to show that there is an action profile γ of the empty coalition such that for any complete action profile δ and all states u, u', if β =_∅ δ, γ =_∅ δ, w ∼_∅ u, and (u, δ, u') ∈ M, then u' ⊩ φ. Indeed, let γ = β. Since the conditions β =_∅ δ, γ =_∅ δ, and w ∼_∅ u hold vacuously, it suffices to prove that u' ⊩ φ for each state u'. The last statement follows from the assumption w ⊩ K_∅ φ by Definition 4, because w ∼_∅ u' for each state u'.

###### Lemma 8

If w ⊩ [C]_B(φ → ψ), w ⊩ [D]_{B,C} φ, and sets B, C, and D are pairwise disjoint, then w ⊩ [C, D]_B ψ.

Proof. Consider any action profile β of coalition B. By Definition 4, it suffices to show that there is an action profile γ of coalition C ∪ D such that for any complete action profile δ and any states u, u', if β =_B δ, γ =_{C,D} δ, w ∼_{C,D} u, and (u, δ, u') ∈ M, then u' ⊩ ψ.

The assumption w ⊩ [C]_B(φ → ψ), by Definition 4, implies that there is an action profile γ_1 of coalition C such that for any complete action profile δ and any states u, u', if β =_B δ, γ_1 =_C δ, w ∼_C u, and (u, δ, u') ∈ M, then u' ⊩ φ → ψ.

Define action profile β_1 of coalition B ∪ C as follows:

 β_1(a) = { β(a), if a ∈ B;  γ_1(a), if a ∈ C. }

Action profile β_1 is well-defined because sets B and C are disjoint.

The assumption w ⊩ [D]_{B,C} φ, by Definition 4, implies that there is an action profile γ_2 of coalition D such that for any complete action profile δ and any states u, u', if β_1 =_{B,C} δ, γ_2 =_D δ, w ∼_D u, and (u, δ, u') ∈ M, then u' ⊩ φ.

Define action profile γ of coalition C ∪ D as follows:

 γ(a) = { γ_1(a), if a ∈ C;  γ_2(a), if a ∈ D. }

Action profile γ is well-defined because sets C and D are disjoint.

Consider any complete action profile δ and any states u, u' such that β =_B δ, γ =_{C,D} δ, w ∼_{C,D} u, and (u, δ, u') ∈ M. Recall from the first paragraph of this proof that it suffices to show that u' ⊩ ψ. Note that β =_B δ and γ_1 =_C δ. Thus, β_1 =_{B,C} δ by the choice of action profile β_1. Similarly, γ_2 =_D δ and w ∼_D u. Hence, u' ⊩ φ by the choice of action profile γ_2. Also, β =_B δ, γ_1 =_C δ, and w ∼_C u. Thus, u' ⊩ φ → ψ by the choice of action profile γ_1. Therefore, u' ⊩ ψ by Definition 4 because u' ⊩ φ and u' ⊩ φ → ψ.

###### Lemma 9

If w ⊩ [C]_B φ, B ⊆ B', and sets C and B' are disjoint, then w ⊩ [C]_{B'} φ.

Proof. Consider any action profile β' of coalition B'. By Definition 4, it suffices to show that there is an action profile γ of coalition C such that for any complete action profile δ and any states u, u', if β' =_{B'} δ, γ =_C δ, w ∼_C u, and (u, δ, u') ∈ M, then u' ⊩ φ.

Define action profile β of coalition B to be such that β(a) = β'(a) for each agent a ∈ B. Action profile β is well-defined due to the assumption B ⊆ B' of the lemma. By Definition 4, the assumption w ⊩ [C]_B φ implies that there is an action profile γ of coalition C such that for any complete action profile δ and any states u, u', if β =_B δ, γ =_C δ, w ∼_C u, and (u, δ, u') ∈ M, then u' ⊩ φ. Note that β' =_{B'} δ implies β =_B δ by the choice of action profile β. Therefore, there is an action profile γ of coalition C such that for any complete action profile δ and any states u, u', if β' =_{B'} δ, γ =_C δ, w ∼_C u, and (u, δ, u') ∈ M, then u' ⊩ φ.

###### Lemma 10

If w ⊩ [∅]_B φ, then w ⊩ [∅]_∅ φ.

Proof. By Definition 4, the assumption w ⊩ [∅]_B φ implies that for any action profile β of coalition B there is an action profile γ of the empty coalition such that for any complete action profile δ and any states u, u', if β =_B δ, γ =_∅ δ, w ∼_∅ u, and (u, δ, u') ∈ M, then u' ⊩ φ.

Thus, since every complete action profile δ agrees on B with some action profile β of coalition B and the conditions γ =_∅ δ and w ∼_∅ u hold vacuously, for any complete action profile δ and any states u, u', if (u, δ, u') ∈ M, then u' ⊩ φ.

Then, for any action profile β of the empty coalition there is an action profile γ of the empty coalition such that for any complete action profile δ and any states u, u', if β =_∅ δ, γ =_∅ δ, w ∼_∅ u, and (u, δ, u') ∈ M, then u' ⊩ φ.

Therefore, w ⊩ [∅]_∅ φ by Definition 4.

## 6 Completeness

In this section we prove the completeness of our logical system. We start the proof by fixing a maximal consistent set of formulae X_0 and defining the canonical game.

There are two major challenges that we need to overcome while defining the canonical model. The first of them is a well-known complication related to the presence of the distributed knowledge modality in our logical system. The second is a challenge unique to strategies with intelligence. To understand the first challenge, recall that in the case of individual knowledge, states are usually defined as maximal consistent sets. Two such sets are equivalent for an agent a if the sets contain the same K_a-formulae. Unfortunately, this construction cannot be easily adapted to distributed knowledge, because if two sets share the same K_a-formulae and the same K_b-formulae, they do not necessarily share the same K_{a,b}-formulae. To overcome this challenge we use a “tree” construction in which each state is a node of a labeled tree. Nodes of the tree are labeled with maximal consistent sets and edges are labeled with coalitions. This construction has been used in logics of know-how with distributed knowledge before Naumov and Tao (2017a, b, 2018c, 2018b, 2018a).

To understand the second challenge, let us first recall the way the canonical game is usually constructed for logics of coalition power. The commonly used construction defines the domain of actions to be the set of all formulae. Informally, this means that each agent “votes” for a formula that the agent wants to be true in the next state. Of course, not all requests of the agents are granted. The canonical game mechanism specifies which requests are granted and which are ignored. There are also canonical game constructions in which a voting ballot, in addition to a formula, must contain some additional information that acts as a “key” verifying that the voting agent has certain information Naumov and Tao (2018c, a). So, it is natural to assume that in the case of formula [C]_B φ, coalition C should vote for formula φ and provide the vote of coalition B as a key. This approach, however, turns out to be problematic. Indeed, in order to satisfy another formula in which the roles of the two coalitions are reversed, the vote of coalition B would need to include the vote of coalition C as a key. Thus, it appears that the vote of C would need to include the vote of B as well. The situation is further complicated by mutual recursion when one attempts to satisfy two such formulae simultaneously. The solution that we propose in this article avoids this recursion. It turns out that it is not necessary for the key to contain the complete intelligence information. Namely, we assume that each agent votes for a formula and signs her vote with an integer key. To satisfy formula [C]_B φ, the mechanism will guarantee that if all members of coalition C vote for φ and sign with integer keys that are larger than the keys of all members of coalition B, then φ will be true in the next state. This idea is formalized later in Definition 7.
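The key-signing idea can be sketched as follows (our own illustration; the formulas, agents, and keys are made up, and the tree structure of canonical states is elided):

```python
# Votes are (formula, integer key) pairs; intelligence is simulated by key
# ordering rather than by embedding B's whole vote inside C's vote.
def grants(profile, power_formulas):
    """Which formulas φ the mechanism forces into the next state.
    profile: agent -> (formula, key);
    power_formulas: tuples (C, B, φ) with [C]_B φ in the current state."""
    granted = set()
    for C, B, phi in power_formulas:
        votes_for_phi = all(profile[a][0] == phi for a in C)
        outranks_B = all(profile[a][1] > profile[b][1] for a in C for b in B)
        if votes_for_phi and outranks_B:
            granted.add(phi)
    return granted

profile = {"British": ("saved", 5), "Germans": ("sunk", 3), "Russians": ("saved", 1)}
powers = {(("British",), ("Germans",), "saved"),
          (("Germans",), ("British",), "sunk")}
print(grants(profile, powers))  # → {'saved'}: the British outrank the Germans (5 > 3)
```

Because only the relative order of integer keys matters, no vote needs to contain another coalition's vote, and the mutual recursion described above never arises.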

###### Definition 5

A sequence X_0, C_1, X_1, C_2, …, C_n, X_n is a state of the canonical game if

1. n ≥ 0,

2. X_0, …, X_n are maximal consistent sets of formulae,

3. C_1, …, C_n are coalitions of agents,

4. {φ | K_{C_i} φ ∈ X_{i−1}} ⊆ X_i for each integer i such that 1 ≤ i ≤ n.

We say that a sequence X_0, C_1, …, X_n and a sequence X_0, C_1, …, X_n, C_{n+1}, X_{n+1} are adjacent. The adjacency relation defines an undirected labeled graph whose nodes are the elements of set W and whose edges are specified by the adjacency relation. Each node is labeled by the last maximal consistent set in its sequence and each edge by the agents of the corresponding coalition, see Figure 3. Note that this graph has no cycles and thus is a tree.

For any agent a and any nodes w, w' ∈ W, we let w ∼_a w' if all edges along the unique simple path connecting nodes w and w' are labeled by agent a.

Throughout the rest of the article, for any nonempty sequence w = x_0, x_1, …, x_n, by hd(w) we denote the last element x_n of the sequence, and by w :: y we denote the sequence x_0, x_1, …, x_n, y obtained by appending y to w.

The next lemma shows that the tree construction solves the challenge of the distributed knowledge discussed in the preamble to this section.

###### Lemma 11

If w ∼_C w', then K_C φ ∈ hd(w) if and only if K_C φ ∈ hd(w').

Proof. The assumption w ∼_C w' implies that each edge along the unique simple path between nodes w and w' is labeled with all agents of coalition C. Thus, it suffices to show that K_C φ ∈ hd(u) iff K_C φ ∈ hd(u') for any two adjacent nodes u and u' along this path. Indeed, without loss of generality, let

 w = X_0, C_1, X_1, …, C_{n−1}, X_{n−1},
 w' = X_0, C_1, X_1, …, C_{n−1}, X_{n−1}, C_n, X_n.

The assumption that the edge between w and w' is labeled with all agents of coalition C implies that C ⊆ C_n. Next, we show that K_C φ ∈ X_{n−1} iff K_C φ ∈ X_n.

Suppose that K_C φ ∈ X_{n−1}. Thus, ⊢ K_C φ → K_C K_C φ by Lemma 3. Hence, ⊢ K_C φ → K_{C_n} K_C φ by the Epistemic Monotonicity axiom and because C ⊆ C_n. Hence, K_{C_n} K_C φ ∈ X_{n−1} because set X_{n−1} is maximal. Then, K_C φ ∈ X_n by Definition 5.

Suppose that K_C φ ∉ X_{n−1}. Thus, ¬K_C φ ∈ X_{n−1} because set X_{n−1} is maximal. Hence, ⊢ ¬K_C φ → K_C ¬K_C φ by the Negative Introspection axiom. Hence, ⊢ ¬K_C φ → K_{C_n} ¬K_C φ by the Epistemic Monotonicity axiom and because C ⊆ C_n. Hence, K_{C_n} ¬K_C φ ∈ X_{n−1} because set X_{n−1} is maximal. Then, ¬K_C φ ∈ X_n by Definition 5. Therefore, K_C φ ∉ X_n because set X_n is consistent.

This defines the states of the canonical game and the indistinguishability relations on these states. We will now define the domain of actions and the mechanism of the canonical game.

###### Definition 6

Δ is the set of all pairs (φ, z) such that φ is a formula and z is an integer.

If d is the pair (φ, z), then by pr_1(d) and pr_2(d) we mean the elements φ and z, respectively.

###### Definition 7

Mechanism M is the set of all triples (w, δ, w') ∈ W × Δ^𝒜 × W such that for any formula [C]_B φ ∈ hd(w), if

1. pr_1(δ(a)) = φ for each a ∈ C,

2. pr_2(δ(b)) < pr_2(δ(a)) for each a ∈ C and each b ∈ B,

then φ ∈ hd(w').

Figure 4 describes a Battle of the Atlantic inspired example that illustrates the definition of the canonical mechanism.

Here set hd(w) contains formulae that give both the British and the Germans the power to achieve their goals as long as they have intelligence about the move of the other party. The British, the Germans, and the Russians have each chosen an action: a pair consisting of a formula and an integer key. Then, according to Definition 7, statement “saved” (short for “Convoy is saved”) will belong to set hd(w'), where w' is the outcome state of the game. Note that although the Russians also voted for “saved”, the Russian action alone did not save the convoy, because the corresponding power formula does not belong to set hd(w).

###### Definition 8

π(p) = {w ∈ W | p ∈ hd(w)} for each propositional variable p.

This concludes the definition of the canonical game. The next important milestone in the proof of completeness is what is sometimes called the “truth” lemma, which connects the syntax and semantics sides of the canonical game construction. In our case, this is Lemma 14. Before that lemma, however, we state and prove two auxiliary statements that will be used in the induction step of the proof of Lemma 14.

###### Lemma 12

If K_C φ ∉ hd(w), then there is a state w' ∈ W such that w ∼_C w' and φ ∉ hd(w').

Proof. Consider the set of formulae

 X = {¬φ} ∪ {ψ | K_C ψ ∈ hd(w)}.

First, we prove that set X is consistent. Suppose the opposite. Thus, there are formulae K_C ψ_1, …, K_C ψ_n ∈ hd(w) such that

 ψ_1, …, ψ_n ⊢ φ.

Hence, by applying Lemma 4 n times,

 ⊢ ψ_1 → (ψ_2 → … (ψ_n → φ) … ).

Then, by the Epistemic Necessitation inference rule,

 ⊢ K_C(ψ_1 → (ψ_2 → … (ψ_n → φ) … )).

Thus, by the Distributivity axiom and the Modus Ponens inference rule,

 ⊢ K_C ψ_1 → K_C(ψ_2 → … (ψ_n → φ) … ).

Recall that K_C ψ_1 ∈ hd(w) by the choice of the formulae ψ_1, …, ψ_n. Hence, by the Modus Ponens inference rule,

 hd(w) ⊢ K_C(ψ_2 → … (ψ_n → φ) … ).

By repeating the previous step n − 1 more times,

 hd(w) ⊢ K_C φ.

Hence, K_C φ ∈ hd(w) due to the maximality of the set hd(w). This contradicts the assumption of the lemma. Therefore, set X is consistent. By Lemma 5, there is a maximal consistent extension X' of set X. Let w' be the sequence w :: C :: X'. Note that w' ∈ W by Definition 5 and the choice of sets X and X'. Furthermore, w ∼_C w' by the definition of relation ∼_C on set W. Finally, ¬φ ∈ X ⊆ X' = hd(w') by the choice of set X. Therefore, φ ∉ hd(w') because set hd(w') is consistent.

###### Lemma 13

If [C]_B φ ∉ hd(w), then there exists an action profile β of coalition B such that for each action profile γ of coalition C there is a complete action profile δ and states u, u' such that β =_B δ, γ =_C δ, w ∼_C u, (u, δ, u') ∈ M, and φ ∉ hd(u').

Proof. Let the action profile β of coalition B be such that β(a) = (⊤, 0) for each agent a ∈ B. Consider any action profile γ of coalition C. Choose an integer z_0 such that for each a ∈ C,

 pr_2(γ(a)) < z_0. (2)

Such a z_0 exists because coalition C is a finite set of agents. Define the complete action profile δ as follows:

 δ(a) = { β(a), if a ∈ B;  γ(a), if a ∈ C;  (⊤, z_0), otherwise. (3)

Note that [C]_B φ is a formula by the assumption of the lemma. Thus, sets B and C are disjoint by Definition 1, and the complete action profile δ is well-defined.

Consider the set X such that

 X = {¬φ} ∪ {σ | [∅]_∅ σ ∈ hd(w)}
   ∪ {ψ | [P]_Q ψ ∈ hd(w), P ≠ ∅, ∀p ∈ P (pr_1(δ(p)) = ψ), ∀q ∈ Q ∀p ∈ P (pr_2(δ(q)) < pr_2(δ(p)))}. (4)

Next we show that set X is consistent. Suppose the opposite. Thus,

 σ_1, …, σ_m, ψ_1, ψ_2, …, ψ_n ⊢ φ (5)

for some formulae

 [∅]_∅ σ_1, …, [∅]_∅ σ_m ∈ hd(w) (6)

and some formulae

 [P_1]_{Q_1} ψ_1, …, [P_n]_{Q_n} ψ_n ∈ hd(w) (7)

such that

 pr_2(δ(q)) < pr_2(δ(p)), (8)

 P_i ≠ ∅, (9)

and

 pr_1(δ(p)) = ψ_i (10)

for each i ≤ n, each p ∈ P_i, and each q ∈ Q_i. Without loss of generality, we can assume that the formulae ψ_1, …, ψ_n are distinct and none of them is equal to ⊤:

 ψ_i ≠ ψ_j, (11)

 ψ_i ≠ ⊤ (12)

for all i ≠ j and each i ≤ n.

###### Claim 1

Sets P_1, …, P_n are pairwise disjoint.

Proof. Consider any agent p ∈ P_i ∩ P_j, where i ≠ j. Then, ψ_i = pr_1(δ(p)) = ψ_j by equation (10), which contradicts assumption (11).

###### Claim 2

P_i ∩ B = ∅ for each i ≤ n.

Proof. Consider any agent p ∈ P_i. Suppose that p ∈ B. Then, pr_1(δ(p)) = ⊤ by equation (3). At the same time, pr_1(δ(p)) = ψ_i by equality (10) because p ∈ P_i. Hence, ψ_i = ⊤, which contradicts assumption (12).

###### Claim 3

Q_i ⊆ B ∪ C for each i ≤ n.

Proof. Consider any agent q ∈ Q_i. Statement (9) implies that there is at least one agent p ∈ P_i. Then, p ∉ B by Claim 2. Thus, pr_2(δ(p)) ≤ z_0 due to equality (3) and inequality (2). Hence, pr_2(δ(q)) < z_0 by inequality (8). Therefore, q ∈ B ∪ C due to equality (3).

For any nonempty finite set of agents P, let

 rank(P) = min_{p ∈ P} pr_2(δ(p)). (13)

Sets P_1, …, P_n are nonempty by statement (9). Thus, rank(P_i) is defined for each i ≤ n. Without loss of generality, we can assume that, see Figure 5,

 rank(P_1) ≤ rank(P_2) ≤ ⋯ ≤ rank(P_n). (14)
###### Claim 4

Sets Q_i and P_j are disjoint for all i ≤ j.

Proof. For any