The Ditmarsch Tale of Wonders - The Dynamics of Lying

by Hans van Ditmarsch, et al.

We propose a dynamic logic of lying, wherein a 'lie that phi' (where phi is a formula in the logic) is an action in the sense of dynamic modal logic, that is interpreted as a state transformer relative to the formula phi. The states that are being transformed are pointed Kripke models encoding the uncertainty of agents about their beliefs. Lies can be about factual propositions but also about modal formulas, such as the beliefs of other agents or the belief consequences of the lies of other agents. We distinguish (i) an outside observer who is lying to an agent that is modelled in the system, from (ii) one agent who is lying to another agent, and where both are modelled in the system. For either case, we further distinguish (iii) the agent who believes everything that it is told (even at the price of inconsistency), from (iv) the agent who only believes what it is told if that is consistent with its current beliefs, and from (v) the agent who believes everything that it is told by consistently revising its current beliefs. The logics have complete axiomatizations, which can most elegantly be shown by way of their embedding in what is known as action model logic or the extension of that logic to belief revision.




1 Introduction

My favourite of Grimm’s fairytales is ‘Hans im Glück’ (Hans in Luck). A close second is ‘The Ditmarsch Tale of Wonders’. In German this is called a ‘Lügenmärchen’, a ‘Liar’s Tale’. It is as follows.

I will tell you something. I saw two roasted fowls flying; they flew quickly and had their breasts turned to Heaven and their backs to Hell; and an anvil and a mill-stone swam across the Rhine prettily, slowly, and gently; and a frog sat on the ice at Whitsuntide and ate a ploughshare.

Four fellows who wanted to catch a hare, went on crutches and stilts; one of them was deaf, the second blind, the third dumb, and the fourth could not stir a step. Do you want to know how it was done? First, the blind man saw the hare running across the field, the dumb one called to the deaf one, and the lame one seized it by the neck.

There were certain men who wished to sail on dry land, and they set their sails in the wind, and sailed away over great fields. Then they sailed over a high mountain, and there they were miserably drowned.

A crab was chasing a hare which was running away at full speed; and high up on the roof lay a cow which had climbed up there. In that country the flies are as big as the goats are here.

Open the window that the lies may fly out. [19]

A passage like “A crab was chasing a hare which was running away at full speed; and high up on the roof lay a cow which had climbed up there.” contains very obvious lies. Nobody considers it possible that this is true. Crabs are reputedly slow, hares are reputedly fast.

In ‘The Ditmarsch Tale of Wonders’, none of the lies are believed.

In the movie ‘The Invention of Lying’ the main character Mark goes to a bank counter and finds out he only has $300 in his account. But he needs $800. Lying has not yet been invented in the 20th-century fairytale country of this movie, which however seems to be in a universe very parallel to either the British Midlands or Brooklyn, New York. Then and there, on the spot, Mark invents lying. We see some close-ups of Mark’s brain cells doing heavy duty; such a thing has not happened before. And then Mark tells the bank employee assisting him that there must be a mistake: he has $800 in his account. He is lying. She responds: oh well, then there must be a mistake with your account data, because on my screen it says you only have $300. I’ll inform system maintenance of the error. My apologies for the inconvenience. And she gives him $800! In the remainder of the movie, Mark gets very rich.

Mark’s lies are not as unbelievable as those in Grimm’s fairytale. It is possible that he has $800. It is just not true. Still, there is something unrealistic about the lies in this movie: new information is believed instantly. New information is even believed if it is inconsistent with prior information. After Mark’s invention of lying while obtaining $800, he’s trying out his invention on many other people. It works all the time! There are shots wherein he first announces a fact, then its negation, then the fact again, while all the time his extremely credulous listeners keep believing every last announcement. New information is also believed if it contradicts direct observation. In a café, in company of several of his friends, he claims to be a one-armed bandit. And they commiserate with him, oh, I never knew you only had one arm, how terrible for you. All the time, Mark is sitting there drinking beer and gesturing with both hands while telling his story.

In the movie ‘The Invention of Lying’, all lies are believed.

In the real world, if you lie, sometimes other people believe you and sometimes they don’t. When can you get away with a lie? Consider the consecutive numbers riddle [25].

Anne and Bill are each going to be told a natural number. Their numbers will be one apart. The numbers are now being whispered in their respective ears. They are aware of this scenario. Suppose Anne is told 2 and Bill is told 3.
The following truthful conversation between Anne and Bill now takes place:

  • Anne: “I do not know your number.”

  • Bill: “I do not know your number.”

  • Anne: “I know your number.”

  • Bill: “I know your number.”

Explain why this is possible.

Initially, Anne is uncertain between Bill having 1 or 3, and Bill is uncertain between Anne having 2 or 4. So both Anne and Bill do not initially know their number.

Suppose that Anne first says to Bill: “I know your number.” Anne is lying. Bill does not consider it possible that Anne knows his number. However, Anne did not know that Bill would not believe her. She considered it possible that Bill had 1, in which case Bill would have considered it possible that Anne was telling the truth, and would then have drawn the incorrect conclusion that Anne had 0.

Alternatively, suppose that the first announcement, Anne’s, is truthful, but that Bill is lying in the second announcement and says to Anne: “I know your number.” If Anne believes that, she will then say “I know your number,” as she believes Bill to have 1. Her announcement is an honest mistake, because this belief is incorrect. However, as a result of Anne’s announcement Bill will learn Anne’s number, so that his announcement “I know your number,” that was a lie at the time, now has become true.

That is, if you are still following us.

It seems not so clear how all this should be formalized in a logic interpreted on epistemic modal structures, and this is the topic of our paper.
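The truthful version of the riddle already lends itself to a mechanical check. Below is a minimal sketch in plain Python (our own encoding, not from the paper; the cap MAX on the numbers is an artificial assumption that keeps the model finite): each possible world is a pair (anne, bill) of numbers one apart, and each truthful announcement eliminates the worlds where it is false.

```python
# Consecutive numbers riddle: worlds are pairs (anne, bill) with |anne - bill| = 1.
# MAX is an artificial finite cap on the natural numbers (an assumption of this sketch).
MAX = 50

worlds = {(a, b) for a in range(MAX) for b in range(MAX) if abs(a - b) == 1}

def anne_knows(w, ws):
    # Anne knows Bill's number at world w iff all worlds she considers
    # possible (same Anne-number) agree on Bill's number.
    return len({b for (a, b) in ws if a == w[0]}) == 1

def bill_knows(w, ws):
    return len({a for (a, b) in ws if b == w[1]}) == 1

actual = (2, 3)

# Anne: "I do not know your number." -- eliminate worlds where she does know.
worlds = {w for w in worlds if not anne_knows(w, worlds)}
# Bill: "I do not know your number."
worlds = {w for w in worlds if not bill_knows(w, worlds)}
# Now Anne knows: her having 2 with Bill having 1 has been excluded.
print(anne_knows(actual, worlds))   # True
# Anne: "I know your number." -- eliminate worlds where she does not.
worlds = {w for w in worlds if anne_knows(w, worlds)}
# And now Bill knows too:
print(bill_knows(actual, worlds))   # True
```

With actual world (2, 3), the two ignorance announcements leave Anne knowing Bill's number, and her announcement of that fact lets Bill deduce Anne's number, exactly as in the riddle.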

In real life, some lies are believed and some are not.

1.1 The modal dynamics of lying

What is a lie? Let φ be a proposition. You lie to me that φ, if you believe that φ is false while you say that φ, and with the intention that I believe φ. The thing you say we call the announcement. If you succeed in your intention, I believe φ, and I also believe that your announcement of φ was truthful, i.e., that you believed that φ when you said that φ. In this investigation we abstract from the intentional aspect of lying. Such an abstraction seems reasonable. It is similar to that in AGM belief revision [2], wherein one models how to incorporate new information in an agent’s belief set, but abstracts from the process that made the new information acceptable to the agent. Our proposal is firmly grounded in modal logic. We employ dynamic epistemic logic [44].


What are the modal preconditions and postconditions of a lie? Let us for now assume that φ itself is not a modal proposition but a Boolean proposition. We further assume two agents, a and b. Agent a will be assumed female and agent b will be assumed male. Typically, in our exposition a will be the speaker or sender and b will be the receiver or addressee. However, a and b are not agent roles but agent names (and agent variables). We also model dialogue wherein agents speak in turn; so these roles may swap. Formula B_a φ stands for ‘agent a believes that φ’. We use belief modalities B_a and not knowledge modalities K_a, because lying results in false beliefs, whereas knowledge modalities are used for correct beliefs.

The precondition of ‘a is lying that φ to b’ is B_a ¬φ (¬ is negation). Stronger preconditions are conceivable, e.g., that the addressee considers it possible that the lie is true, ¬B_b ¬φ, or that the speaker believes that, B_a ¬B_b ¬φ. These conditions may not always hold while we still call the announcement a lie, because the speaker may not know whether the additional conditions are satisfied. We therefore will only require precondition B_a ¬φ.

We should contrast the announcement that φ by a lying agent with other forms of announcement. Just as a lying agent believes that φ is false when it announces φ, a truthful agent believes that φ is true when it announces φ. The precondition for a lying announcement by a is B_a ¬φ, and so the precondition for a truthful announcement by a is B_a φ. Here, we should put up some strong terminological barriers in order to avoid pitfalls and digressions into philosophy and epistemology. Truthful is synonymous with honest. Now dictionaries, which report actual usage, do not distinguish between an agent telling the truth and an agent believing that she is telling the truth. A modal logician has to make a choice. We mean the latter, exclusively. A truthful announcement may therefore not be a true announcement. If φ is false but agent a mistakenly believes that φ, then when she says φ, that is a truthful but false announcement of φ. Besides the truthful and the lying announcement there is yet another form of announcement, because in modal logic there are always three instead of two possibilities: either you believe φ, or you believe ¬φ, or you are uncertain whether φ. The last corresponds to the precondition ¬(B_a φ ∨ B_a ¬φ) (∨ is disjunction). An announcement wherein agent a announces φ while she is uncertain about φ we propose to call a bluffing announcement. The dictionary meaning of the verb bluff is ‘to cause to believe what is untrue’ or ‘to deceive or to feign’ in a more general sense. Its meaning is even more intentional than that of lying. Feigning belief in φ means suggesting belief in φ, by saying it, or otherwise behaving in accordance with it, although you do not have this belief. This corresponds to ¬B_a φ as precondition. This would make lying a form of bluffing, as B_a ¬φ implies ¬B_a φ (given consistent belief). It is common, and in line with Gricean conversational norms, to consider that saying something that you believe to be false is worse than (or, at least, different from) saying something that you do not believe to be true. This brings us to ¬B_a φ ∧ ¬B_a ¬φ (∧ is conjunction), equivalent to ¬(B_a φ ∨ B_a ¬φ).

To the three mutually exclusive (and complete) preconditions B_a φ, B_a ¬φ, and ¬B_a φ ∧ ¬B_a ¬φ we associate the truthful, lying, and bluffing announcement that φ, and we call the announcing agent a truthteller, liar and bluffer, respectively. (Here again, it is stretching usage that a truthteller may not be telling the truth but only what she believes to be the truth, but that cannot be helped.) The three forms of announcement are intricately intertwined. This is obvious: to a credulous addressee a lying announcement appears to be a truthful announcement, whereas a skeptical addressee, who already believed the opposite of the announcement, has to make up his mind whether the speaker is merely mistaken, or is bluffing, or is even lying.
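On a finite model the three preconditions can be computed directly. The following sketch (plain Python; the two-state model and all names are illustrative, not from the paper) classifies an announcement by an agent as truthful, lying, or bluffing.

```python
# States, accessibility per agent, and a valuation for atom p.
states = {'s', 't'}
R = {'a': {('s', 's'), ('s', 't'), ('t', 's'), ('t', 't')},   # a is uncertain about p
     'b': {('s', 's'), ('t', 't')}}                           # b has correct beliefs
V = {'p': {'s'}}  # p is true exactly at state s

def believes(agent, prop, state):
    # B_agent prop: prop (a set of states) holds in every accessible state.
    return all(v in prop for (u, v) in R[agent] if u == state)

def classify(agent, prop, state):
    # prop is the set of states where the announced formula is true.
    if believes(agent, prop, state):
        return 'truthful'
    if believes(agent, states - prop, state):
        return 'lying'
    return 'bluffing'   # uncertain whether the formula holds

p = V['p']
print(classify('a', p, 's'))           # 'bluffing': a is uncertain about p
print(classify('b', states - p, 's'))  # at s, p is true, so b announcing ¬p is lying
```

The three outcomes are mutually exclusive and exhaust the possibilities, mirroring the three preconditions B_a φ, B_a ¬φ, and ¬B_a φ ∧ ¬B_a ¬φ.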


We now consider the postconditions of ‘a is lying that φ to b’. If a’s intention to deceive is successful, b believes φ after the lie. Therefore, B_b φ should be a postcondition of a successful execution of the action of lying. Also, the precondition should be preserved: B_a ¬φ should still be true after the lie. In the first place, we propose logics to achieve this. However, this comes at a price. In case the agent already believed the opposite, B_b ¬φ, then b’s beliefs are inconsistent afterwards. (This merely means that b’s accessibility relation is empty, not that the logic is inconsistent.) There are two different solutions for this: either b does not change his beliefs, so that B_b ¬φ still holds after the lie, or the belief B_b ¬φ is given up in order to consistently incorporate B_b φ. The three alternative postconditions after the lie that φ are therefore: (i) always make B_b φ true after the lie (even at the price of inconsistency), (ii) only make B_b φ true if agent b considered φ possible before the lie (¬B_b ¬φ), and (iii) always make B_b φ true by a consistency preserving process of belief revision. These are all modelled.
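The first two treatments can be sketched as operations on the addressee's accessibility relation. This is a simplified illustration in plain Python (the world numbering and names are ours); the consistency preserving revision of (iii) is left out because it needs plausibility structure.

```python
# One addressee b; worlds 0 (p false) and 1 (p true).
# b believes ¬p: all his arrows point to world 0.
R_b = {(0, 0), (1, 0)}
p_worlds = {1}

def credulous(R, phi):
    # (i) Always accept: keep only arrows into phi-worlds.
    # May empty the relation, i.e. leave b with inconsistent beliefs.
    return {(u, v) for (u, v) in R if v in phi}

def skeptical(R, phi, world):
    # (ii) Accept only if phi is believable: some accessible phi-world exists.
    believable = any(v in phi for (u, v) in R if u == world)
    return credulous(R, phi) if believable else R
    # Policy (iii), consistent belief revision, is not sketched here.

print(credulous(R_b, p_worlds))      # set(): b's beliefs collapse
print(skeptical(R_b, p_worlds, 0))   # unchanged: b rejects the unbelievable lie
```

Policy (i) empties b's accessibility relation when he already believed ¬p, which is exactly the inconsistency discussed above; policy (ii) leaves his beliefs untouched in that case.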

Lying as a dynamic modal operator

The preconditions and postconditions of lying may contain epistemic modal operators (belief modalities). The action of lying itself is modelled as a dynamic modal operator. The dynamic modal operator for ‘lying that φ’ is interpreted as an epistemic state transformer. An epistemic state is a pointed Kripke model (a model with a designated state) that encodes the beliefs of the agents. An epistemic action ‘agent a lies that φ to agent b’ should transform an epistemic state satisfying B_a ¬φ into an epistemic state satisfying B_b φ and B_a ¬φ. The execution of such dynamic modal operators for epistemic actions depends on the initial epistemic state and that operator’s description only. In that sense, they are different from dynamic modal operators for (PDL-style) actions that are interpreted using an accessibility relation in a given structure.

In this dynamic epistemic setting we can distinguish (i) the case of an external observer (an agent who is not explicitly modelled in the structures and in the logical language), who is lying to an agent modelled in the system, from (ii) the case of one agent lying to another agent, where both are explicitly modelled. For this external agent a truthful announcement is the same as a true announcement, and a lying announcement is the same as a false announcement. (This explains the terminological confusion in the area: the logic known as that of truthful public announcements is really the logic of true public announcements.) These matters will be addressed in detail.

In dynamic epistemic logics the transmission of messages is instantaneous and infallible. This is another assumption in our modelling framework.

Lying about modal formulas

The belief operators do not merely apply to Boolean propositions but to any proposition with belief modalities. This is known as higher-order belief. In the semantics, the generalization from ‘lying that p’ (for a propositional variable p) to ‘lying that φ’ for any proposition φ does not present any problem. This is because the update will be defined relative to the set of states where the formula is believed by the speaker a. For a subset of the domain, it does not matter if it was determined for a Boolean formula or for a modal formula. Still, there are other problems.

Firstly, we aim for agents having what is known as ‘normal’ beliefs, that satisfy consistency and introspection: B_a φ → ¬B_a ¬φ, B_a φ → B_a B_a φ, and ¬B_a φ → B_a ¬B_a φ. If we wish the addressee b to believe the lie that φ even if he already believed ¬φ, his beliefs would become inconsistent. The property of consistent belief is therefore not preserved under lying updates. We will address this.

Secondly, we may well require that B_a ¬p is true before the lie by a that p and that B_b p and B_a ¬p are true after the lie, but we cannot and do not even want to require for any φ that, if B_a ¬φ is true before the lie by a that φ, then B_b φ and B_a ¬φ are true afterwards. For a typical example, suppose that p is false and that a lies that p ∧ ¬B_b p to b. (It is not raining in Sevilla. You don’t know that. I am lying to you: “You don’t know that it is raining in Sevilla!” I.e., using the conventional conversational implicature: “It is raining in Sevilla but/and you do not know that.” This is a Moorean sentence.) We would say that the lie was successful if B_b p holds, not if B_b (p ∧ ¬B_b p) holds, an inconsistency (for belief). Also, we do not want that ¬B_b p, which was true before the lie, remains true after the lie. It is common in the area to stick to the chosen semantic operation but not to require persistence of belief in such cases.

Our proposed modelling, which also applies to modal formulas, allows us to elegantly explain why in the consecutive numbers riddle we can have (see above) that ‘as a result of Anne’s announcement Bill will learn Anne’s number, so that his prior announcement “I know your number,” that was a lie at the time, now has become true.’ The seemingly contradictory announcements in the riddle are Moorean phenomena, and the added aspect of uncertainty about lying or truthtelling makes the analysis more complex, and the results more interesting.

More modalities

In the concluding Section 9 we discuss further issues in the modal logic of lying, such as group epistemic operators (common belief) in preconditions and postconditions of lying, and structures with histories of actions to keep track of the number of past lies.

1.2 A short history of lying

We conclude this introduction with a review of the literature on lying.


Lying has been a thriving topic in the philosophical community for a long, long time [36, 11, 28, 29]. Almost any analysis starts with quoting Augustine on lying:

“that man lies, who has one thing in his mind and utters another in words”
“the fault of him who lies, is the desire of deceiving in the uttering of his mind” [5]

In other words: saying that φ while believing that ¬φ, with the intention to make the addressee believe φ; our starting assumption. The requirements for the belief preconditions and postconditions in such works are illuminating [29]. For example, the addressee should not merely believe the lie but believe it to be believed by the speaker. Indeed, … and even believed to be commonly believed, would the modal logician say (see the final Section 9). Scenarios involving eavesdroppers (can you lie to an agent who is not the addressee?) are relevant for logic and multi-agent system design, as are claims that you can only lie if you really say something: an omission is not a lie [29]. Wrong, says the computer scientist: if the protocol is common knowledge, you can lie by not acting when you should have; say, by not stepping forward in the muddy children problem, although you know that you are muddy. The philosophical literature also clearly distinguishes between false propositions and propositions believed to be false but in fact true, so that when you lie about them, in fact you tell the truth. Gettier-like scenarios are presented, including delayed justification [34]. (Suppose that you believe that ¬φ and that you lie that φ. Later you find out that your belief was mistaken because φ was really true. You can then with some justification say “Ah, so I was not really lying.”) Much is said on the morality of lying [11] and on its intentional aspect. As said, we abstract from the intentional aspect of lying. We also abstract from its moral aspect.


Lying excites great interest in the general public. Many popular science books have been written on the topic; typical examples are [38, 41]. In psychology, biology, and other experimental sciences lying and deception are related. A cuckoo is ‘lying’ if it is laying eggs in another bird’s nest. Two issues are relevant for our investigation. Firstly, that it is typical to be believed, and that lying is therefore the exception. We model the ‘successful’ lie that is indeed believed, unless there is evidence to the contrary: prior belief in the opposite. Secondly, that the detection of lying is costly, and that this is a reason for speakers to be typically believed. In logic, cost is computational complexity. The issue of the complexity of lying is briefly addressed in Section 9 on further research.


In economics, ‘cheap talk’ is making false promises. Your talk is cheap if you do not intend to execute an action that you publicly announced to plan. It is therefore a lie; it is deception [18, 22]. Our focus is different. We do not model lying about planned actions but lying about propositions, and in particular their belief consequences. Economists postulate probabilities for lying strategies and truthful strategies, to be tested experimentally. We only distinguish lies that are always believed from lies that (in the face of contradictory prior belief) are never believed.


Papers that model lying as an epistemic action, inducing a transformation of an epistemic model, include [6, 37, 8, 43, 24, 45]. Lying by an external observer has been discussed by Baltag and collaborators from the inception of dynamic epistemic logic onward [6]; the later [8] also discusses lying in logics with plausible belief, as does [43]. In [45] the conscious update of [17] is applied to model lying by an external observer. In [35] the authors give a modal logic of lying and bluffing, including intentions. Instead of bluffing they call this bullshit, after [16]. Strangely, in view of contradictory Moorean phenomena, they do not model lying as a dynamic modality. In [37, 24] the unbelievable update is considered; this is the issue of consistency preservation for belief, as in our treatment of unbelievable lies (rejecting the lie that φ if you already believe ¬φ). The promising manuscript [26] allows explicit reference in the logical language to truthful, lying and bluffing agents (the authors call this ‘agent types’), thus enabling some form of self-reference.

Artificial intelligence?

Various of the already cited works could have been put under this header. But then it would not have been a question mark. Applications of epistemic logic in artificial intelligence are typically about knowledge and not about belief. Successful frameworks such as interpreted systems and knowledge programs [15] model multi-agent systems. Our analysis of the modal dynamics of lying aims to prepare the ground for AI applications involving belief instead of knowledge, and to facilitate determining the complexities of such reasoning tasks. (Argumentation theory, when seen as an area in AI, does of course model beliefs and their justifications.)

1.3 Contributions and overview

A main and novel contribution of our paper is a precise model of the informative consequences of two agents lying to each other, and a logic for that, including a treatment of bluffing. This agent-to-agent-lying, in the logic called agent announcement logic, is presented in Section 4, with an extended example in Section 5. A special, simpler, case is that of an outside observer who is lying to an agent that is modelled in the system. This (truthful and lying) public announcement logic is treated in Section 3. That section mainly contains results from [45]. Section 2 introduces the standard truthful (without lying) public announcement logic of which our proposals can be seen as variations. Section 6 on action models is an alternative perspective on the frameworks presented in Section 3 and Section 4. It anchors them in another part of dynamic epistemic logic. Section 7 contains another novel contribution. It adapts the logics of the Sections 3 and 4 to the requirement that unbelievable lies (if you hear but already believe ) should not be incorporated. Subsequently, Section 8 adapts these logics to the requirement that unbelievable lies, on the contrary, should be incorporated, but consistently so. This can be anchored in yet another part of the dynamic epistemic logical literature, involving structures with plausibility relations. All these logics have complete axiomatizations (which is unremarkable). An incidental novel contribution is how to resolve ambiguity between bluffing and lying with disjunctive normal forms (Proposition 4). Section 9 finishes the paper with considerations on the limitations of our approach and further research.

2 Truthful public announcements

The well-known logic of truthful public announcements [33, 7] is an extension of multi-agent epistemic logic. Its language, structures, and semantics are as follows. Given are a finite set of agents A and a countable set of propositional variables P.


The language L(!) of truthful public announcement logic is inductively defined as

  φ ::= p | ¬φ | (φ ∧ φ) | B_a φ | [!φ]φ

where p ∈ P and a ∈ A. (Unlike in the introduction, p is a Boolean/propositional variable, and not any Boolean proposition.) Without the announcement operators we get the language of epistemic logic.

Other propositional connectives are defined by abbreviation. For B_a φ, read ‘agent a believes formula φ’. Agent variables are a, b, c, …. For [!φ]ψ, read ‘after truthful public announcement of φ, formula ψ (is true)’. The dual operator for the necessity-type announcement operator is by abbreviation defined as ⟨!φ⟩ψ := ¬[!φ]¬ψ. If B_a ¬φ we say that φ is unbelievable and, consequently, if ¬B_a ¬φ we say that φ is believable. This is also read as ‘agent a considers it possible that φ’.


An epistemic model M = ⟨S, R, V⟩ consists of a domain S of states (or ‘worlds’), an accessibility function R : A → P(S × S), where each R(a), for which we write R_a, is an accessibility relation, and a valuation V : P → P(S), where each V(p) represents the set of states where p is true. For s ∈ S, (M, s) is an epistemic state.

An epistemic state is also known as a pointed Kripke model. We often omit the parentheses and write M, s. Four model classes will appear in this work. Without any restrictions we call the model class K. The class of models where all accessibility relations are transitive and euclidean is called K45, and if they are also serial it is called KD45. The class of models where all accessibility relations are equivalence relations is S5. Class KD45 is said to have the properties of belief, and S5 to have the properties of knowledge.


Assume an epistemic model M = ⟨S, R, V⟩ with s ∈ S.

  M, s ⊨ p iff s ∈ V(p)
  M, s ⊨ ¬φ iff M, s ⊭ φ
  M, s ⊨ φ ∧ ψ iff M, s ⊨ φ and M, s ⊨ ψ
  M, s ⊨ B_a φ iff for all t with (s, t) ∈ R_a: M, t ⊨ φ
  M, s ⊨ [!φ]ψ iff M, s ⊨ φ implies M|φ, s ⊨ ψ

where the model restriction M|φ = ⟨S′, R′, V′⟩ is defined as S′ = {t ∈ S | M, t ⊨ φ} (= [[φ]]_M), R′_a = R_a ∩ (S′ × S′), and V′(p) = V(p) ∩ S′.

A complete proof system for this logic for the class S5 is presented in [33]. Trivial variations are complete axiomatizations for the model classes K and K45. The interaction axiom between announcement and belief is:

  [!φ]B_a ψ ↔ (φ → B_a [!φ]ψ)
The interaction between announcement and other operators we assume known [44]. It changes predictably in the other logics we present. In the coming sections, we will only vary the dynamic part of the logic, and focus on that completely.

For an example of the semantics of public announcement, consider a situation wherein an agent a is uncertain about p, and receives the information that p. The initial uncertainty requires a model consisting of two states, one where p is true and one where p is false. In view of the continuation, we draw all accessibility relations. For convenience, a state has been given the value of the atom true there as its name. The actual state is underlined.


In the actual state p is true, and from the actual state two states are accessible: that p-state and the ¬p-state. Therefore, the agent does not believe p (as there is an accessible ¬p-state) and does not believe ¬p either (as there is an accessible p-state). She is uncertain whether p. The announcement of p results in a restriction of the epistemic state to the p-state (p is false in the other state). On the right, the agent believes that p. In the initial epistemic state it is therefore true that [!p]B_a p. Note that both on the left and on the right the accessibility relation is an equivalence relation. Indeed, the class S5 is closed under truthful public announcements. The example models change of knowledge rather than the weaker change of belief. The class KD45 is not closed under truthful public announcements. Let us consider another example. Suppose agent a incorrectly believes ¬p and wishes to process the truthful public announcement that p.


On the left, we have that B_a ¬p is true. The new information results in eliminating the ¬p-state, and consequently agent a’s accessibility relation becomes empty. She believes everything. The model on the left is KD45, but the model on the right is not KD45: it is not serial. However, the class K45 is closed under truthful public announcements.
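Both examples can be replayed concretely. A minimal sketch in plain Python (the state names are ours): truthful public announcement as state elimination, applied first to the uncertainty model and then to the model with the incorrect belief.

```python
# Model 1: agent a is uncertain about p; states named by the value of p there.
S = {'p', 'np'}
R_a = {(u, v) for u in S for v in S}       # an equivalence relation: S5
V_p = {'p'}

def announce(S, R, phi):
    # Truthful public announcement of phi: restrict to the phi-states.
    S2 = S & phi
    return S2, {(u, v) for (u, v) in R if u in S2 and v in S2}

def believes_p(R, s, phi):
    # B_a phi at s: every accessible state is a phi-state.
    return all(v in phi for (u, v) in R if u == s)

S2, R2 = announce(S, R_a, V_p)
print(believes_p(R2, 'p', V_p))   # True: [!p]B_a p holds at the p-state

# Model 2: a incorrectly believes ¬p -- from the actual p-state
# only the ¬p-state is accessible (a serial, transitive, euclidean relation).
R_wrong = {('p', 'np'), ('np', 'np')}
S3, R3 = announce(S, R_wrong, V_p)
print(R3)   # set(): no longer serial -- a now believes everything
```

In the second run the ¬p-state is eliminated and with it every arrow, reproducing the loss of seriality described above.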

3 Logic of truthful and lying public announcements

The logic of lying public announcements complements the logic of truthful public announcements. They are inseparably tied to one another. For a clear link we need to use an alternative semantics for truthful public announcement logic. The results in this section are mainly from [45]. We expand the language of truthful public announcement logic with another inductive construct [¡φ]ψ, for ‘after lying public announcement of φ, formula ψ (is true)’; in short, ‘after the lie that φ, ψ’. This is the language L(!, ¡).

Definition (Language)

The language L(!, ¡) of truthful and lying public announcement logic is inductively defined as

  φ ::= p | ¬φ | (φ ∧ φ) | B_a φ | [!φ]φ | [¡φ]φ

where p ∈ P and a ∈ A.

Truthful public announcement logic is the logic to model the revelations of a benevolent god, taken as the truth without questioning. The announcing agent is not modelled in public announcement logic, but only the effect of its announcements on the audience, the set of all agents. Consider a false public announcement, made by a malevolent entity, the devil. Everything he says is false. Everything is a lie. As in religion, god and the devil are inseparable and should be modelled simultaneously. As a semantics for this logic we employ an alternative to the semantics for public announcement logic from the previous section. This alternative is the semantics of conscious updates [17]. (A public announcement of φ is a conscious update with φ for the entire set of agents; such updates can be for any subset of agents. In [17] these updates are interpreted on non-wellfounded sets, namely on rooted infinite trees, such as the tree unwinding of a pointed model. We present the simpler semantics for conscious update of [23]. We note that [17] and [33] were independent proposals for the semantics of public announcements.) When announcing φ, instead of eliminating the states where φ does not hold, one eliminates for each agent those pairs of its accessibility relation where φ does not hold in the second argument of the pair. The effect of the announcement of φ is that only states where φ is true are accessible for the agents. In other words, the semantics is arrow eliminating instead of state eliminating. We call this the believed public announcement. New information is accepted by the agents independent of the truth of that information.

Definition (Semantics of believed public announcement)

  M, s ⊨ [‼φ]ψ iff M^φ, s ⊨ ψ

where epistemic model M^φ = ⟨S, R^φ, V⟩ is as M except that R^φ_a = {(s, t) ∈ R_a | M, t ⊨ φ} (with S the domain of M).

In [45], the believed public announcement of φ is called manipulative update with φ. The original proposal there is to view believed public announcement of φ as non-deterministic choice (as in action logics and PDL-style logics) between truthful public announcement of φ and lying public announcement of φ.

Definition (Semantics of truthful and lying public announcement)

  M, s ⊨ [!φ]ψ iff M, s ⊨ φ implies M^φ, s ⊨ ψ
  M, s ⊨ [¡φ]ψ iff M, s ⊨ ¬φ implies M^φ, s ⊨ ψ

If we now define [‼φ]ψ by abbreviation as [!φ]ψ ∧ [¡φ]ψ, then [‼φ]ψ has the semantics of ‘believed public announcement that φ’. The non-determinism of this operator (it has two executions, one when φ is true and one when φ is false) comes to the fore if we write it in the dual form: ⟨‼φ⟩ψ ↔ ⟨!φ⟩ψ ∨ ⟨¡φ⟩ψ; in other words, we can view ‼φ as some !φ ∪ ¡φ, where ∪ is non-deterministic choice. The following result justifies that it is not ambiguous to write !φ for ‘arrow elimination’ truthful public announcement but also for ‘state elimination’ truthful public announcement.

Proposition ([23, 45])

Let (M, s) be an epistemic state and suppose M, s ⊨ φ. Then (M^φ, s) ≃ (M|φ, s).

The symbol ≃ stands for ‘is bisimilar to’, a well-known notion that guarantees that the models cannot be distinguished in the logical language [10]. State elimination seems simpler than arrow elimination. Having gone through the trouble of reinterpreting truthful public announcement in arrow elimination semantics, why did we not proceed in the other direction and reinterpret lying public announcement in state elimination semantics? This is not possible! Consider a model wherein all belief is correct, i.e., a model with equivalence relations encoding the beliefs of the agents. A state elimination semantics preserves equivalence, whereas lying inevitably introduces false beliefs: states not accessible from themselves. The axiom for belief after truthful public announcement remains what it was (Definition 2, and using Proposition 3) and the axiom for the reduction of belief after lying is as follows.

Definition (Axiom for belief after lying [45])

[¡φ]B_a ψ  ↔  (¬φ → B_a [!φ]ψ)

After the lying public announcement that φ, agent a believes that ψ, if and only if, on condition that φ is false, agent a believes that ψ after the truthful public announcement that φ. To the credulous person who believes the lie, the lie appears to be the truth.

Proposition ([45])

The axiomatization of the logic of truthful and lying public announcements is complete (for the model class K and for the model class K45).


As for the logic of truthful public announcements, completeness is shown by a reduction argument: all formulas in the language with truthful and lying announcements are equivalent to formulas of epistemic logic. By means of equivalences such as the axiom for belief after lying, one can rewrite each formula to an equivalent one without announcement operators. (As the class KD45 is not closed under updates with announcements, the logic is not complete for that class.)

For an example, we show the effect of the truthful and the lying announcement of p to an agent a in the model where a is uncertain about p. The actual state must be different in these models: when lying that p, p is in fact false, whereas when truthfully announcing that p, p is in fact true. For lying we get the model in which, from either state, a's only remaining arrows point to the p-state, with the ¬p-state as the actual state, whereas for truthtelling we get the same updated model, but with the p-state as the actual state.
The reduction principle in [17, 23] for the interaction between belief and believed announcement is, in terms of our language, [‡φ]B_a ψ ↔ B_a(φ → [‡φ]ψ). This seems to have a different shape, as the modal operator binds the entire implication. But it is indeed valid in our semantics (a technical detail we did not find elsewhere):

[‡φ]B_a ψ  ↔  [!φ]B_a ψ ∧ [¡φ]B_a ψ
           ↔  (φ → B_a [!φ]ψ) ∧ (¬φ → B_a [!φ]ψ)
           ↔* B_a [!φ]ψ
           ↔# B_a (φ → [‡φ]ψ)

The #-ed equivalence holds because from the semantics of truthful and lying announcement it directly follows that [!φ]ψ ↔ (φ → [‡φ]ψ) and [¡φ]ψ ↔ (¬φ → [‡φ]ψ). The *-ed equivalence holds because (χ → σ) ∧ (¬χ → σ) is propositionally equivalent to σ.
The logic of truthful and lying public announcement satisfies the property of 'substitution of equivalents', which we have used repeatedly in the proof of Proposition 3, but it does not satisfy the property of 'substitution of variables' (substitution of formulas for propositional variables preserves validity). For example, [!p]B_a p is valid but, clearly, [!(p ∧ ¬B_a p)]B_a(p ∧ ¬B_a p) is invalid.
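The failure of substitution of variables can be witnessed by a small model checker. The sketch below (our own encoding of formulas as nested tuples; nothing here is from the paper) evaluates the truthful announcement under arrow-elimination semantics and shows that the schema holds for the variable p but fails for the Moore-style substitution instance p ∧ ¬B_a p.

```python
def holds(model, s, f):
    """Truth of formula f at state s; formulas are nested tuples."""
    states, R = model
    kind = f[0]
    if kind == "var":  return states[s][f[1]]
    if kind == "not":  return not holds(model, s, f[1])
    if kind == "and":  return holds(model, s, f[1]) and holds(model, s, f[2])
    if kind == "B":    # B_a g: g holds at every a-accessible state
        ag, g = f[1], f[2]
        return all(holds(model, t, g) for (u, t) in R[ag] if u == s)
    if kind == "ann":  # [!g]h: if g holds here, h holds after arrow elimination
        g, h = f[1], f[2]
        if not holds(model, s, g):
            return True
        newR = {ag: {(u, v) for (u, v) in pairs if holds(model, v, g)}
                for ag, pairs in R.items()}
        return holds((states, newR), s, h)

states = {"s": {"p": True}, "t": {"p": False}}
R = {"a": {(u, v) for u in states for v in states}}
model = (states, R)

p = ("var", "p")
moore = ("and", p, ("not", ("B", "a", p)))   # p and not B_a p

print(holds(model, "s", ("ann", p, ("B", "a", p))))          # True
print(holds(model, "s", ("ann", moore, ("B", "a", moore))))  # False
```

Announcing the Moore sentence makes it false: after the update, agent a does believe p, so the second conjunct ¬B_a p no longer holds.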

4 Agent announcement logic

In the logic of lying and truthful public announcements, the announcing agent is an outside observer and is implicit. Therefore, it is also implicit that it believes that the announcement is false or true. In multi-agent epistemic logic, it is common to formalize 'agent b truthfully announces φ' as 'the outside observer truthfully announces B_b φ'. However, 'agent b lies that φ' cannot be modelled as 'the outside observer lies that B_b φ'. This is the main reason for a logic of lying. For a counterexample, consider an epistemic state with equivalence relations, modelling knowledge, where a does not know whether p, b knows whether p, and p is true. Agent b is in the position to tell the truth about p. A truthful public announcement of B_b p (arrow elimination semantics, Definition 3) indeed simulates that b truthfully and publicly announces p.


Given the same model, now suppose that p is false, and that b lies that p. A lying public announcement of B_b p (it satisfies the required precondition, the falsity of B_b p) does not result in the desired information state, because this makes agent b believe her own lie. In fact, as she already knew ¬p, this makes b's beliefs inconsistent.


Instead, a lie by b to a that p should have the following effect:


After this lie we have that b still believes that ¬p, but that a believes that p. (We even have that a believes that a and b have common belief of p; see Section 9.) We satisfied the requirements of a lying agent announcement, for believed lies. The precondition for agent a truthtelling that φ is B_a φ, and the precondition for agent a lying that φ is B_a ¬φ. Another form of announcement is bluffing. You are bluffing that φ if you say that φ but are uncertain whether φ. The precondition for agent a bluffing that φ is therefore ¬(B_a φ ∨ B_a ¬φ). If belief is implicit, as for the outside observer, we had only two preconditions for announcing φ: φ and ¬φ, for truthtelling and lying. The 'third' would be ¬(φ ∨ ¬φ), which is equivalent to ⊥. The devil can lie, but he cannot bluff. The postconditions of these three types of announcement are a function of their effect on the accessibility relations of the agents. The effect is the same for all three types. However, it is different for the speaker and for the addressee(s).

  • States where φ was believed by speaker a remain accessible to speaker a;

  • States where φ was not believed by speaker a remain accessible to speaker a;

  • States where φ was believed by speaker a remain accessible to addressee b;

  • States where φ was not believed by speaker a are no longer accessible to addressee b.

These requirements are embodied by the following syntax and semantics.


The language of agent announcement logic is defined as

φ ::= p | ¬φ | (φ ∧ φ) | B_a φ | [!_a φ]φ | [¡_a φ]φ | [†_a φ]φ

The inductive constructs !_a φ, ¡_a φ, and †_a φ stand for, respectively, 'a truthfully announces φ', 'a is lying that φ', and 'a is bluffing that φ', where agent a addresses all other agents b.


M, s ⊨ [!_a φ]ψ  iff  M, s ⊨ B_a φ implies M^{B_a φ}, s ⊨ ψ
M, s ⊨ [¡_a φ]ψ  iff  M, s ⊨ B_a ¬φ implies M^{B_a φ}, s ⊨ ψ
M, s ⊨ [†_a φ]ψ  iff  M, s ⊨ ¬(B_a φ ∨ B_a ¬φ) implies M^{B_a φ}, s ⊨ ψ

where M^{B_a φ} is as M except for the accessibility relations, defined as (S is the domain of M, and b ≠ a)

R'_a := R_a
R'_b := R_b ∩ (S × [[B_a φ]]_M)

The principles for a truthtelling, lying, or bluffing that φ are as follows. The belief consequences for the speaker are different from the belief consequences for the addressee(s).

Definition (Axioms for the belief consequences of announcements)

[¡_a φ]B_a ψ  ↔  (B_a ¬φ → B_a [¡_a φ]ψ)      (for the speaker a)
[¡_a φ]B_b ψ  ↔  (B_a ¬φ → B_b [!_a φ]ψ)      (for an addressee b ≠ a)

In other words, the liar knows that he is lying, but the dupe he is lying to believes that the liar is telling the truth. The principles for truthtelling and bluffing are similar to those for lying, but with the obviously different conditions on the right-hand side of the equivalences. The bluffer knows that he is bluffing, but the dupe he is bluffing to believes that the bluffer is telling the truth. And in case the announcing agent is truthful, there is no discrepancy: both the speaker and the addressee believe that the consequences are those of truthtelling.


The axiomatization of the logic of agent announcements is complete.


Just as in the previous logics (see Proposition 3), completeness is shown by a reduction argument: all formulas of agent announcement logic are equivalent to formulas of epistemic logic. In the axioms above, the announcement operator is (on the right) always pushed further down into any given formula. As before, the result holds for the model classes K and K45, but not for KD45. An alternative, indirect, completeness proof is that agent announcement logic is action model logic for a given action model, as will be explained in Section 6.

As an example illustrating the difference between a truthtelling, a lying, and a bluffing agent announcement, we present the following model wherein the addressee b, hearing the announcement of p by agent a, considers all three possible. In fact, a is bluffing, and a's announcement that p is false. After the announcement, b incorrectly believes that p, but a is still uncertain whether p. If the bottom-left state had been the actual state, a would have been lying that p, and if the bottom-right state had been the actual state, it would have been a truthful announcement by a that p.

Unbelievable announcements

The axiomatization of the logic of agent announcements is incomplete for KD45 (see the proof of Proposition 4) because of the problem of unbelievable announcements. In Sections 7 and 8 we present alternative logics wherein believable announcements (announcements of φ by a to addressee b such that ¬B_b ¬φ is true) are treated differently from unbelievable announcements (such that B_b ¬φ is true). These logics are complete for the class KD45.

Outside observer

Consider a depiction of an epistemic model. The outside observer is the person looking at the picture: you, the reader. She can see all the different states. She has no uncertainty, and her beliefs are correct. This is why her truthful announcements are true and her lying announcements are false. It is also why 'truthful public announcement logic' is not a misnomer: it is indeed the logic of how to process new information that is true. We can model the outside observer as an agent gd, for 'god or the devil'. (The model does not need to be a bisimulation contraction.)


Given an epistemic model M, let gd be an agent whose accessibility relation R_gd is the identity on the domain of M. Then M, s ⊨ B_gd φ ↔ φ. Let φ not contain announcement operators; then [!_gd φ]ψ is equivalent to [!φ]ψ, and [¡_gd φ]ψ is equivalent to [¡φ]ψ.


In Definition 4, the accessibility relation of the addressees is adjusted to R'_b := R_b ∩ (S × [[B_gd φ]]_M). As R_gd is the identity, B_gd φ is equivalent to φ. So R'_b := R_b ∩ (S × [[φ]]_M), as in Definition 3.
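The identity step in this proof can be sanity-checked in a few lines of Python (our own ad hoc encoding): with the identity as accessibility relation, the belief operator of gd coincides with plain truth at every state.

```python
states = {"s": {"p": True}, "t": {"p": False}}
R_gd = {("s", "s"), ("t", "t")}        # identity relation for agent gd

def B_gd(s, phi):
    """gd believes phi at s: phi holds at every gd-accessible state."""
    return all(phi(states[t]) for (u, t) in R_gd if u == s)

# B_gd p is equivalent to p at every state, so announcements by gd reduce
# to the public announcements of the previous section.
print(all(B_gd(s, lambda val: val["p"]) == states[s]["p"] for s in states))  # True
```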

Lying about beliefs

Agents may announce factual propositions (Boolean formulas) but also modal propositions, and thus be lying and bluffing about them. In the consecutive numbers riddle the announcements 'I know your number' and 'I do not know your number' are modal propositions, and the agents may be lying about those. (In social interaction, untruthfully announcing epistemic propositions is not always considered lying with the negative moral connotation. Suppose we work in the same department, and one of our colleagues is having a divorce. I know this. I also know that you know this. But we have not discussed the matter between us. I can bring up the matter in conversation by saying 'You know that our colleague is having a divorce!'. But this is unwise. You may not be willing to admit your knowledge, because our colleague's husband is your friend, which I have no reason to know; etc. A better strategy for me is to say 'You may not know that our colleague is having a divorce'. This is a lie: I do not consider it possible that you do not know that. But, unless we are very good friends, you will not laugh in my face and respond with 'Liar!'. Could it be that lies about facts are typically considered worse than lies about epistemic propositions, and that the more modalities you stack in your lying announcement, the more innocent the lie becomes? Details are in Section 5.) For our target agents, who satisfy introspection (so that B_a φ → B_a B_a φ and ¬B_a φ → B_a ¬B_a φ are validities), the distinction between bluffing and lying seems to become blurred. If I am uncertain whether p, I would be bluffing if I told you that p, but I would be lying if I told you that I believe that p. The announcement of p satisfies the precondition ¬(B_a p ∨ B_a ¬p): it is bluffing that p. But the announcement of B_a p satisfies the precondition B_a ¬B_a p, belief in the negation of the announcement: it is lying that B_a p. (Proof: ¬(B_a p ∨ B_a ¬p) is equivalent to ¬B_a p ∧ ¬B_a ¬p, from which ¬B_a p follows, and with negative introspection B_a ¬B_a p.) We would prefer to call both bluffing, and to regard 'a announces that B_a p' strictly or really as 'a announces that p'.
A general solution to avoid such ambiguity involves more than merely stripping a formula of an outer B_a operator: announcing that B_a B_a p should then also strictly be announcing that p, and so should announcing that B_a B_a B_a p. We need recursion. In the following definition and proposition we assume an agent with consistent beliefs.

Definition (Strictly lying)

An announcement by that is strict iff is equivalent to or is equivalent to