A First-Order Logic for Reasoning about Knowledge and Probability

by Siniša Tomović et al.

We present a first-order probabilistic epistemic logic which allows combining operators of knowledge and probability within a group of possibly infinitely many agents. The proposed framework is a first-order extension of the logic of Fagin and Halpern (J. ACM 41:340-367, 1994). We define its syntax and semantics, and prove the strong completeness property of the corresponding axiomatic system.




1. Introduction

Reasoning about knowledge is widely used in many applied fields such as computer science, artificial intelligence, economics and game theory [2, 11, 3, 17]. A particular line of research concerns formalization in terms of multi-agent epistemic logics, which speak not only about knowledge of facts, but also about agents' knowledge of each other's knowledge. One of the central notions is that of common knowledge, which has been shown to be crucial for a variety of applications dealing with reaching agreements or coordinated actions [14]. Intuitively, a fact is common knowledge in a group of agents exactly when everyone knows it, everyone knows that everyone knows it, and so on.

However, it has been shown that in many practical systems common knowledge cannot be attained [14, 10]. This motivated some researchers to consider a weaker variant that still may be sufficient for carrying out a number of coordinated actions [30, 17, 24]. One of the approaches proposes a probabilistic variant of common knowledge [23], which assumes that coordinated actions hold with high probability. A propositional logical system which formalizes that notion is presented in [8], where Fagin and Halpern developed a joint framework for reasoning about knowledge and probability and proposed a complete axiomatization.

We use the paper [8] as a starting point and generalize it in two ways:

First, we extend the propositional formalization from [8] by allowing reasoning about knowledge and probability of events expressible in a first-order language. We use the most general approach, allowing arbitrary combinations of standard epistemic operators, probability operators, first-order quantifiers and, in addition, the probabilistic common knowledge operator. The need for a first-order extension has been recognized by the epistemic and probability logic communities. Wolter [36] pointed out that first-order common knowledge logics are of interest both from the point of view of applications and of pure logic. He argued that a first-order base is necessary whenever the application domains are infinite (as in the epistemic analysis of the playability of games with mixed strategies), or finite but of a cardinality that is not known in advance, which is a frequent case in the field of Knowledge Representation. Bacchus [4] gave a similar argument in the context of probability logics, arguing that, while a domain may be finite, it is questionable whether there is a fixed upper bound on its size, and he also pointed out that there are many domains, interesting for AI applications, that are not finite.

Second, we consider an infinite number of agents. While this assumption is not of interest in probability logic, it has been studied in epistemic logic. Halpern and Shore [16] pointed out that economies, when regarded as teams in a game, are often modeled as having infinitely many agents, and that such modeling in epistemic logic is also convenient in situations where the group of agents and its upper limit are not known a priori.

The semantics for our logic consists of Kripke models enriched with probability spaces. Each possible world contains a first-order structure; each agent in each world is equipped with a set of accessible worlds and a finitely additive probability on measurable sets of worlds. In this paper we consider the most general semantics, with independent modalities for knowledge and probability. Nevertheless, in Section 5.2 we show how to modify the definitions and results of our logic in order to capture some interesting relationships between the modalities for knowledge and probability (previously considered in [8]), especially the semantics in which agents assign probabilities only to the sets of worlds they consider possible.

The main result of this paper is a sound and strongly complete (“every consistent set of sentences is satisfiable”) axiomatization. The negative result of Wolter [36] shows that there is no finitary way to axiomatize first-order common knowledge logics, and that infinitary axiomatizations are the best we can do (see Section 2.3). We obtain completeness using infinitary rules of inference. Thus, formulas are finite, while only proofs are allowed to be (countably) infinite. We use a Henkin-style construction of saturated extensions of consistent theories. From the technical point of view, we modify some of our earlier developed methods presented in [7, 22, 27, 28] (for a detailed overview of the approach, we refer the reader to [29]). A similar approach was later used in [37]. Although we use an alternative axiomatization for the epistemic part of the logic (i.e., different from the original axiomatization given in [8, 15]), we prove that the standard axioms are derivable in our system.

There are several papers on completeness of epistemic logics with common knowledge.

In the propositional case, a finitary axiomatization, which is weakly complete (“every consistent formula is satisfiable”), was obtained by Halpern and Moses [15] using a fixed-point axiom for common knowledge. On the other hand, strong completeness of any finitary axiomatization is impossible, due to the lack of compactness (see Section 2.3). Strongly complete axiomatic systems are proposed in [32, 5]. They contain an infinitary inference rule, similar to one of our rules (it is easy to check that our inference rule RC from Section 3 generalizes the rule from [32, 5], due to the presence of probability operators), for capturing the semantic relationship between the operators of group knowledge and common knowledge.

In the first-order case, the set of valid formulas is not recursively enumerable [36] and, consequently, there is no complete finitary axiomatization. One way to overcome this problem is to include infinite formulas in the language, as in [33]. A logic with finite formulas but an infinitary inference rule is proposed in [21], while a Gentzen-style axiomatization with an infinitary rule is presented in [32]. On the other hand, a finitary axiomatization of monadic fragments of the logic, without function symbols and equality, is proposed in [31].

Fagin and Halpern [8] proposed a joint framework for reasoning about knowledge and probability. Following the approach from [9], they extended the propositional epistemic language with formulas which express linear combinations of probabilities. They proposed a finitary axiomatization and proved weak completeness using the small model theorem. Our axiomatization technique is different: since in the first-order case a complete finitary axiomatization is not possible, we use infinitary rules and prove strong completeness using a Henkin-style method. We use unary probability operators and axiomatize the probabilistic part of our logic following the techniques from [29]. In particular, our logic incorporates the single-agent probability logic from [28]. However, our approach can easily be extended to include linear combinations of probabilities, similarly as it was done in [6, 26].

We point out that none of the above-mentioned logics supports infinite groups of agents, so the group knowledge operator is defined as the conjunction of the knowledge of the individual agents. A weakly complete axiomatization for common knowledge with an infinite number of agents (in a non-probabilistic setting) is presented in [16]. In our approach, the knowledge operators of groups and of individual agents are related via an infinitary rule (RE from Section 3).

The rest of the paper is organized as follows. In Section 2 we introduce the syntax and semantics. Section 3 provides the axiomatization of our logical system, followed by the proof of its soundness. In Section 4 we prove several theorems, including the Deduction theorem and Strong necessitation. The completeness result is proven in Section 5. In Section 6 we consider an extension of our logic by incorporating the consistency condition from [8]. Concluding remarks are given in Section 7.

2. Syntax and semantics

In this section we present the syntax and semantics of our logic, that we call  (the name stands for “probabilistic common knowledge”, while  indicates that our logic is a first-order logic). Since the main goal of this paper is to combine first-order epistemic logic with reasoning about probability, our language extends a first-order language with epistemic operators as well as operators for reasoning about probability and probabilistic knowledge. We introduce the set of formulas based on this language and the corresponding possible-world semantics, and we define the satisfiability relation.

2.1. Syntax

Let be the set of rational numbers from the real interval , the set of non-negative integers, an at most countable set of agents, and a countable set of nonempty subsets of .

The language of the logic contains:

  • a countable set of variables ,

  • -ary relation symbols and function symbols for every integer ,

  • Boolean connectives and , and the first-order quantifier ,

  • unary modal knowledge operators , for every and ,

  • unary probability operator and the operators for probabilistic knowledge and , where , , .

By the standard convention, constants are 0-ary function symbols. Terms and atomic formulas are defined in the same way as in classical first-order logic.

Definition 2.1 (Formula).

The set of formulas is the least set containing all atomic formulas such that: if then , , , , , , , , for every , and .

We use the standard abbreviations to introduce other Boolean connectives , and , the quantifier and the symbols . We also introduce the operator (for and ) in the following way: the formula abbreviates .

The meanings of the operators of our logic are as follows.

  • is read as “agent i knows ” and as “everyone in the group knows ”. The formula is read as “ is common knowledge among the agents in ”, which means that everyone (from ) knows , everyone knows that everyone knows , etc.

    Example. The sentence “everyone in the group knows that if agent doesn’t know , then is common knowledge in ” is written as

  • The probabilistic formula says that the probability that formula holds is at least according to the agent .

  • abbreviates the formula . It means that agent knows that the probability of is at least .

    Example. Suppose that agent considers only two possible scenarios for an event , and that each of these scenarios puts a different probability space on events. In the first scenario, the probability of is , and in the second it is . Therefore, the agent knows that the probability of is at least , i.e., .

  • denotes that everyone in the group knows that the probability of is at least . Once is introduced, is defined as a straightforward probabilistic generalization of the operator .

  • denotes that it is common knowledge in the group that the probability of is at least . For a given threshold , represents a generalization of the non-probabilistic operator .

    Example. The formula

    says that everyone in the group knows that the probability that both agent knows that holds for some , and that is not common knowledge among the agents in with probability at least , is at least .

Note that the other types of probabilistic operators can also be introduced as abbreviations: is , is , is and is .
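Writing P_{i,≥r} for the basic probability operator (a notational assumption made for this sketch, since the paper's own symbols are not reproduced above), these four abbreviations take their usual form:

```latex
\begin{aligned}
P_{i,<r}\varphi &\equiv \neg P_{i,\geq r}\varphi, &\qquad
P_{i,\leq r}\varphi &\equiv P_{i,\geq 1-r}\neg\varphi,\\[2pt]
P_{i,>r}\varphi &\equiv \neg P_{i,\leq r}\varphi, &\qquad
P_{i,=r}\varphi &\equiv P_{i,\geq r}\varphi \wedge P_{i,\leq r}\varphi.
\end{aligned}
```

The second equivalence uses the fact that the probability of φ is at most r exactly when the probability of ¬φ is at least 1−r.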

Now we define what we mean by a sentence and a theory. The following definition uses the notion of free variable, which is defined in the same way as in classical first-order logic.

Definition 2.2 (Sentence).

A formula with no free variables is called a sentence. The set of all sentences is denoted by . A set of sentences is called a theory.

Next we introduce a special kind of formula in implicative form, called -nested implications, which will play an important role in our axiomatization.

Definition 2.3 (-nested implication).

Let be a formula and let and . Let be a sequence of formulas, and a sequence of knowledge and probability operators from . The -nested implication formula is defined inductively, as follows:

For example, if , , then

The structure of these -nested implications turns out to be convenient for the proofs of the Deduction theorem (Theorem 4.1) and the Strong necessitation theorem (Theorem 4.2).

2.2. Semantics

The semantic approach for extends the classical possible-worlds model for epistemic logics with probability spaces.

Definition 2.4 ( model).

A model is a Kripke structure for knowledge and probability which is represented by a tuple


  • is a nonempty set of states (or possible worlds)

  • is a nonempty domain

  • associates an interpretation with each state in such that for all and all :

    • is a function from to ,

    • for each ,

    • is a subset of ,

  • is a set of binary relations on . We denote , and write if .

  • associates to every agent and every state a probability space , such that

    • is a non-empty subset of ,

    • is an algebra of subsets of , whose elements are called measurable sets, and

    • is a finitely additive probability measure, i.e.,

      • and

      • if .

In the previous definition we assume that the domain is fixed (i.e., the domain is the same in all worlds) and that the terms are rigid, i.e., in every model their meanings are the same in all worlds. Intuitively, the first assumption means that it is common knowledge which objects exist. Note that the second assumption implies that it is common knowledge which object a constant designates. As pointed out in [31], the first assumption is natural for all those application domains that deal not with knowledge about the existence of certain objects, but rather with knowledge about facts. Also, the two assumptions allow us to give a semantics of probabilistic formulas which is similar to the objectual interpretation for first-order modal logics [12].

Note that these standard assumptions for modal logics are essential to ensure the validity of all first-order axioms. For example, if the terms are not rigid, the classical first-order axiom

where the term is free for in , would not be valid (an example is given in [13]). Similarly, the Barcan formula (axiom FO3 in Section 3) holds only for fixed-domain models.
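To make the ingredients of Definition 2.4 concrete, here is a minimal Python sketch, under simplifying assumptions: the model is finite, propositional atoms stand in for first-order facts, and each measure is given on singletons and extended additively, so every set of worlds is measurable and finite additivity holds by construction. All names are illustrative, not the paper's.

```python
# Illustrative sketch only: a finite Kripke model with a probability space
# attached to each (agent, world) pair, in the spirit of Definition 2.4.

worlds = {"w1", "w2", "w3"}

# val[w]: the atoms true at world w (stand-ins for first-order facts).
val = {"w1": {"p"}, "w2": {"p"}, "w3": set()}

# acc[i][w]: the worlds agent i considers possible at w.
acc = {"a": {"w1": {"w1", "w2", "w3"}, "w2": {"w2"}, "w3": {"w3"}}}

# prob[i][w] = (sample space, measure on singletons); extending the measure
# additively to all subsets makes the measure finitely additive by construction.
prob = {"a": {"w1": ({"w1", "w2", "w3"},
                     {"w1": 0.5, "w2": 0.25, "w3": 0.25})}}

def K(agent, w, phi):
    """Agent knows phi at w: phi holds at every accessible world."""
    return all(phi(u) for u in acc[agent][w])

def P_geq(agent, w, phi, r):
    """At w, the probability the agent assigns to phi is at least r."""
    space, mu = prob[agent][w]
    return sum(mu[u] for u in space if phi(u)) >= r

p = lambda u: "p" in val[u]
print(K("a", "w2", p))            # True: p holds at the only accessible world
print(P_geq("a", "w1", p, 0.7))   # True: mu({w1, w2}) = 0.75 >= 0.7
print(K("a", "w1", p))            # False: w3 is accessible and p fails there
```

Note that the knowledge modality quantifies over accessible worlds, while the probability modality measures a set of worlds; in this most general semantics the two components are independent, exactly as in the paper's setup.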

For a model, the notion of variable valuation is defined in the usual way: a variable valuation is a function which assigns elements of the domain to the variables. If is a valuation, then is the valuation identical to , with the exception that .

Definition 2.5 (Value of a term).

The value of a term in a state with respect to , denoted by , is defined in the following way:

  • if , then ,

  • if , then .

The next definition will use the following iterated knowledge operators, which we introduce inductively (writing E_G for the group knowledge operator):

  • E_G^1 φ is E_G φ,

  • E_G^{n+1} φ is E_G E_G^n φ, for every n ≥ 1.

Now we define satisfiability of formulas from in the states of introduced models.

Definition 2.6 (Satisfiability relation).

Satisfiability of formula in a state of a model , under a valuation , denoted by

is defined in the following way:

  • iff

  • iff

  • iff and

  • iff for every ,

  • iff for all

  • iff for all

  • iff for every

  • iff

  • iff for all

  • iff for every


The semantic definition of the probabilistic common knowledge operator from the last item of Definition 2.6 was first proposed by Fagin and Halpern in [8], as a generalization of the operator regarded as the infinite conjunction of all degrees of group knowledge. It is important to mention that this is not the only proposal for generalizing the non-probabilistic case. Monderer and Samet [23] proposed a more intuitive definition, in which probabilistic common knowledge is semantically equivalent to the infinite conjunction of the formulas . Although both are legitimate probabilistic generalizations, in this paper we adopt the definition of Fagin and Halpern [8], who argued that their proposal is more adequate for the analysis of problems like the probabilistic coordinated attack and Byzantine agreement protocols [17]. As we point out in the Conclusion, our axiomatization approach can easily be modified to capture the definition of Monderer and Samet.
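On a finite model, the reading of common knowledge as the infinite conjunction of all degrees of group knowledge becomes computable: iterating the group knowledge operator stabilizes after finitely many steps, so common knowledge can be obtained as a greatest fixpoint. A minimal sketch, with illustrative names and data of our own:

```python
# Illustrative sketch (our notation, not the paper's): common knowledge
# as a greatest fixpoint of the group knowledge operator on a finite model.

acc = {  # acc[i][w] = worlds agent i considers possible at w
    "a": {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w3"}},
    "b": {"w1": {"w1"}, "w2": {"w2", "w3"}, "w3": {"w3"}},
}
worlds = {"w1", "w2", "w3"}

def E(group, extension):
    """Worlds where everyone in the group knows phi, given phi's extension."""
    return {w for w in worlds
            if all(acc[i][w] <= extension for i in group)}

def C(group, extension):
    """Greatest fixpoint: intersect with E until the set stabilizes,
    capturing the infinite conjunction of all degrees E^n."""
    current = extension
    while True:
        nxt = current & E(group, current)
        if nxt == current:
            return current
        current = nxt

phi = {"w1", "w2"}                  # worlds where phi holds
print(E({"a", "b"}, phi))           # degree-1 group knowledge of phi
print(C({"a", "b"}, phi))           # common knowledge of phi
```

The run shows that common knowledge is strictly stronger than any finite degree of group knowledge: here the first degree holds at one world, while common knowledge of φ holds nowhere. The probabilistic variant replaces the premise "agent i knows φ" with "agent i knows that the probability of φ is at least r".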

If holds for every valuation we write . If for all , we write .

Definition 2.7 (Satisfiability of sentences).

A sentence is satisfiable if there is a state in some model such that . A set of sentences is satisfiable if there is a state in some model such that for each . A sentence is valid if is not satisfiable.

Note that in the previous definition the satisfiability of sentences doesn’t depend on a valuation, since they don’t contain any free variables.

In order to keep the satisfiability relation well-defined, here we consider only the models in which all the sets of the form

are measurable.

Definition 2.8 (Measurable model).

A model is a measurable model if

for every formula , valuation , state and agent . We denote the class of all such models by .

Observe that if is a sentence then the set doesn’t depend on , so we relax the notation by writing . Also, we write instead of .

2.3. Axiomatization issues

At the end of this section we analyze two common characteristics of epistemic logics and probability logics which impact their axiomatizations.

The first one is the non-compactness phenomenon: there are unsatisfiable sets of formulas all of whose finite subsets are satisfiable. The existence of such sets in epistemic logic is a consequence of the fact that the common knowledge operator C_G can be semantically seen as an infinite conjunction of all the degrees E_G^n of the group knowledge operator, which leads to the example

{¬C_G α} ∪ {E_G^n α : n ∈ ℕ}.

In real-valued probability logics, a standard example of an unsatisfiable set whose finite subsets are all satisfiable is

{¬P_{i,≥1} α} ∪ {P_{i,≥1−1/n} α : n ∈ ℕ},

where α is a satisfiable sentence which is not valid. One significant consequence of non-compactness is that there is no finitary axiomatization which is strongly complete [35], i.e., weak completeness is the most one can achieve.

In the first-order case, the situation is even worse. Namely, the set of valid formulas is not recursively enumerable, either for first-order logic with common knowledge [36] or for first-order probability logics [1] (moreover, even their monadic fragments suffer from the same drawback [29, 36]). This means that there is no finitary axiomatization which could be even weakly complete. One approach for overcoming this issue, proposed by Wolter [36], is to consider infinitary logics as the only interesting alternative.

In this paper, we introduce an axiomatization with ω-rules (inference rules with countably many premises) [28, 5]. This allows us to keep the object language countable and to move infinity to the meta language only: the formulas are finite, while only proofs are allowed to be infinite.

3. The axiomatization

In this section we introduce the axiomatic system for the logic , denoted by . It consists of the following axiom schemata and rules of inference:

I First-order axioms and rules

Prop. All instances of tautologies of the propositional calculus

MP. (Modus Ponens)

FO1. , where is not a free variable in

FO2. , where is the result of substituting all free occurrences of in

by a term which is free for in

FO3. (Barcan formula)


II Axioms and rules for reasoning about knowledge

AK. , (Distribution Axiom)

RK. (Knowledge Necessitation)

AE. ,




III Axioms and rule for reasoning about probabilities


P2. ,



P5. ,

RP. (Probabilistic Necessitation)

RA. , (Archimedean rule)

IV Axioms and rules for reasoning about probabilistic knowledge

APE. ,




The given axioms and rules are divided into four groups, according to the type of reasoning. The first group contains a standard axiomatization of first-order logic and, in addition, a variant of the well-known modal axiom called the Barcan formula. It has been proved that the Barcan formula holds in the class of all first-order fixed-domain modal models, and that it is independent of the other modal axioms [20, 19]. The second group contains axioms and rules for epistemic reasoning. AK and RK are the classical Distribution axiom and Necessitation rule for the knowledge operator. The axiom AE and the rule RE are novel; they properly relate the knowledge operators and the operator of group knowledge , regardless of the cardinality of the group . Similarly, AC and RC properly relate the operators and . The infinitary rule RC is a generalization of the rule from [5]. The third group contains a multi-agent variant of a standard axiomatization for reasoning about probability [29]. The infinitary rule RA is a variant of the so-called Archimedean rule, generalized by incorporating the -nested implications, in a similar way as in [22] in a purely probabilistic setting. Informally, this rule says that if an agent considers the probability of a formula to be arbitrarily close to some number , then, according to that agent, the probability of the formula must be equal to . The last group consists of novel axioms and rules which allow reasoning about probabilistic knowledge. They properly capture the semantic relationship between the operators , , and , and they are similar in spirit to the last four axioms and rules of the second group.

Note that we use the structure of -nested implications in all of our infinitary inference rules. As we have already mentioned, the reason is that this form allows us to prove the Deduction theorem and the Strong necessitation theorem. Note that by choosing , in the inference rules RE, RC, RPE and RPC, we obtain the intuitive forms of the rules:
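For the non-probabilistic rules, these intuitive forms can be sketched as follows, where the symbols K_i, E_G, C_G and the premise sets are assumed notation reconstructed from the surrounding discussion rather than taken from the text:

```latex
\frac{\{\, K_i \varphi \;:\; i \in G \,\}}{E_G \varphi}
\qquad\qquad
\frac{\{\, E_G^{\,n} \varphi \;:\; n \in \mathbb{N} \,\}}{C_G \varphi}
```

RPE and RPC would take the analogous forms, with individual knowledge of a probability bound as premises and iterated probabilistic group knowledge in the conclusion.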


Next we define some basic notions of proof theory.

Definition 3.1.

A formula is a theorem, denoted by , if there is an at most countable sequence of formulas from ( is a finite or countable ordinal; i.e., the length of a proof is an at most countable successor ordinal) such that , and every is an instance of some axiom schema or is obtained from the preceding formulas by an inference rule.

A formula is derivable from a set of formulas () if there is an at most countable sequence of formulas ( is a finite or countable ordinal) such that , and each is an instance of some axiom schema, or a formula from the set , or is obtained from the previous formulas by an inference rule, with the exception that the premises of the inference rules RK and RP must be theorems. The corresponding sequence of formulas is a proof of from .

A set of formulas is deductively closed if it contains all the formulas derivable from , i.e., whenever .

Obviously, a formula is a theorem iff it is derivable from the empty set. Now we introduce the notions of consistency and maximal consistency.

Definition 3.2.

A set of formulas is inconsistent if for every formula ; otherwise it is consistent. A set of formulas is maximal consistent if it is consistent and each proper superset of is inconsistent.

It is easy to see that is inconsistent iff .

In the proof of the completeness theorem, we will use a special type of maximal consistent sets, called saturated sets.

Definition 3.3.

A set of formulas is saturated iff it is maximal consistent and the following condition holds:

  • if , then there is a term such that .

Note that the notions of deductive closedness, maximal consistency and saturation are defined for formulas, but they can be defined for theories (sets of sentences) in the same way. We omit the formal definitions here, since they would be identical in form to the ones above, but we will use these notions in the following sections.

The following result shows that the proposed axioms from are valid, and the inference rules preserve validity.

Theorem 3.4 (Soundness).

The axiomatic system is sound with respect to the class of models.


The soundness of the propositional part follows directly from the fact that the interpretation of and in the definition of the relation is the same as in propositional calculus. The proofs for FO1 and MP are standard.

AE, AC and APC follow immediately from the semantics of the operators , and .

FO2. Let . Then for every valuation . Note that for every , among all valuations there must be a valuation such that and . From the equivalence iff , we obtain that holds for every valuation. Thus, every instance of FO2 is valid.

FO3. (Barcan formula) Suppose that , i.e., for each valuation , . Then for each valuation and every , . Therefore, for every and and every , we have . Thus, for every and every valuation , . Finally, since for every , , we have .

RC. We will prove by induction on that if , for all , then also , for each state and valuation of any Kripke structure :

Induction base . Let , . Assume that it is not , i.e.,


Then , , and therefore (by the definition of the satisfiability relation), which contradicts (3.0.1).

Inductive step. Let , .

Suppose for some , i.e., . Assume the opposite, that , i.e., . Then also , so for every state we have , and by the induction hypothesis . Therefore , which is a contradiction.

On the other hand, let , i.e., . Assume the opposite: if , then , so for every , . This implies that there is a subset such that and for all , , : . Then for all by the induction hypothesis, so , which is a contradiction.

RA. We prove the soundness of this rule by induction on : if for every , and , given some model , state and valuation , then .

Induction base . This case follows from the properties of the real numbers.

Inductive step. Let and , i.e., for every , . Assume the opposite, that . Then , so for every , . Therefore, there exists a subset such that and for all , , : . Then, by the induction hypothesis, we have for all , so , which is a contradiction. If , the proof proceeds similarly as for RC.

RPC. Now we show that rule RPC preserves validity by induction on .

Let us prove the implication: if , for all , and -models , then also , for each state in :

Induction base . Suppose for all . If it is not , i.e. , then , for all . Therefore , which is a contradiction.

Inductive step. Let .

Suppose i.e. . If , i.e. (*), then , for all . So for each we have . By the induction hypothesis on it follows that . But then which contradicts (*).

Let , i.e., . Assume the opposite: if , then , so for every , . Therefore, there is a subset such that and for all , , : . Then for all by the induction hypothesis, so , which is a contradiction. ∎

4. Some theorems of

In this section we prove several theorems. Some of them will be useful in proving the completeness of the axiomatization . We start with the deduction theorem. Since we will frequently use this theorem, we will not always explicitly mention it in the proofs.

Theorem 4.1 (Deduction theorem).

If is a theory and are sentences, then implies .


We use transfinite induction on the length of the proof of from . The case is obvious; if is an axiom, then , so , and therefore . If was obtained by rule RK, i.e., where is a theorem, then (by RK) , that is, , so . The reasoning is analogous for the other inference rules that require a theorem as a premise. Now we consider the case where was obtained by rule RPC. The proof for the other infinitary rules is similar.

Let where . Then

, , by the induction hypothesis.

Suppose , for some .

, by the definition of