Justification Based Reasoning in Dynamic Conflict Resolution

05/28/2019
by Werner Damm, et al.
University of Oldenburg

We study conflict situations that dynamically arise in traffic scenarios, where different agents try to achieve their set of goals and have to decide on what to do based on their local perception. We distinguish several types of conflicts for this setting. In order to enable modelling of conflict situations and the reasons for conflicts, we present a logical framework that adopts concepts from epistemic and modal logic, justification and temporal logic. Using this framework, we illustrate how conflicts can be identified and how we derive a chain of justifications leading to this conflict. We discuss how conflict resolution can be done when a vehicle has local, incomplete information, vehicle to vehicle communication (V2V) and partially ordered goals.


1 Introduction

As humans are replaced by autonomous systems, such systems must be able to interact with each other and to resolve dynamically arising conflicts. Examples of such conflicts arise when a car wants to enter the highway in dense traffic or simply when a car wants to drive faster than the preceding one. Such “conflicts” are pervasive in road traffic, and although traffic rules define a jurisdictional frame, the decision, e.g., to give way, is not uniquely determined but influenced by a list of prioritised goals of each system and the personal preferences of its user. If it is impossible to achieve all goals simultaneously, autonomous driving systems (ADSs) have to decide “who” will “sacrifice” what goal in order to decide on their manoeuvres. Matters get even more complicated when we take into account that an ADS has only partial information. It perceives the world via sensors of limited reach and precision. Moreover, measurements can be contradictory. An ADS might use V2V communication to retrieve more information about the world, but it inevitably has only confined insight into other traffic participants and its environment. Nevertheless, for the acceptance of ADSs, it is imperative to implement conflict resolution mechanisms that take into account the high dimensionality of decision making. These decisions have to be explainable and, in case of an incident, the system has to be accountable for its decisions.

In this paper we study conflict situations as they dynamically occur in road traffic and develop a formal notion of conflict between two agents. We distinguish several types of conflicts and propose a conflict resolution process where the different kinds of conflicts are resolved in an incremental fashion. This process successively increases the required cooperation and decreases the privacy of the agents, finally negotiating which goals of the two agents have to be sacrificed. We present a logical framework enabling the analysis of conflicts. This framework borrows from epistemic and modal logic in order to accommodate the bookkeeping of evidences used during a decision process. The framework in particular provides a means to summarise consistent evidences and keep them apart from inconsistent evidences. We can hence, e.g., fuse one set of compatible perceptions into a belief about the world, fuse another set of compatible perceptions into a second belief, and model decisions that take into account that the two beliefs might contradict each other. Using the framework we illustrate how conflicts can be explained and algorithmically analysed as required for our conflict resolution process. Finally we report on a small case study using a prototype implementation (employing the Yices SMT solver [1]) of the conflict resolution algorithm.

Outline.

In Sect. 2 we introduce the types of conflict on a running example and develop a formal notion of conflict between two agents. We elaborate on the logical foundations for modelling and analysing conflicts and on the logical framework itself in Sect. 3. We sketch our case study on conflict analysis in Sect. 4 and outline in Sect. 4.2 an algorithm for analysing conflict situations as requested by our resolution protocol and for deriving explanations of the conflict for the resolution. Before drawing the conclusions in Sect. 6, we discuss related work in Sect. 5.

2 Conflict

Already in 1969, in the paper “Violence, Peace and Peace Research” [2], J. Galtung presented his theory of the Conflict Triangle, a framework used in the study of peace and conflict. Following this theory, a conflict comprises three aspects: opposing actions, incompatible goals, and inconsistent beliefs (regarding the reasons for the conflict, the knowledge of the conflict parties, …).

We focus on conflicts that arise dynamically between two agents in road traffic. We develop a characterisation of conflict as a situation where one agent can accomplish its goals with the help of the other, but both agents cannot accomplish all their goals simultaneously, and the agents have to decide what to do based on their local beliefs. In Sect. 2.1 we formalise our notion of conflict. For two agents with complete information, we may characterise a conflict as: agents A and B are in conflict if A would accomplish its set of goals if B does what A requests, while B would accomplish its set of goals if A does what B requests, and it is impossible to accomplish both sets of goals together. A situation where A and B both compete to consume the same resource is thus an example of a conflict situation. Since we study conflicts from the viewpoint of an agent's beliefs, we also consider believed conflicts, which can be resolved by sharing information regarding the other's observations, strategies or goals. To resolve a conflict we propose a sequence of steps that require an increasing level of cooperation and a decreasing level of privacy – the steps require the agents to reveal information or to constrain their acting options. Our resolution process defines the following steps:

(C1) Shared situational awareness
(C2) Sharing strategies
(C3) Sharing goals
(C4) Agreeing on which goals to sacrifice and which strategy to follow

Corresponding to (C1) to (C4), we introduce different kinds of conflicts on a running example – a two-lane highway, where one car, A, is heading towards an obstacle in its lane, and in the lane to its left a fast car, B, is approaching from behind (cf. Fig. 1). An agent has a prioritised list of goals (like 1. “collision-freedom”, 2. “changing lane” and 3. “driving fast”). We assume that an agent's goals are achievable.

Figure 1: Car A wants to circumvent the obstacle (grey box). Car B is approaching from behind.

An agent A has a set of actions and exists within a world. At a time the world has a certain state. The world “evolves” (changes state) as determined by the chosen actions of the agents within the world and by events determined by the environment. The agent perceives the world only via a set of observation predicates, that is, predicates whose valuation is determined by an observation of the agent. Without an observation the agent has no (direct) evidence for the valuation of the respective observation predicate.
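
The examples below walk through these steps one at a time. As a reading aid, the following Python sketch shows how an agent might iterate through (C1)-(C4), escalating only as long as it lacks a believed winning strategy; all method names are hypothetical and the loop is our illustration, not the paper's protocol implementation.

    # Sketch of the escalating resolution process (C1)-(C4); hypothetical API.
    def resolve_conflict(agent, other):
        steps = [
            agent.share_situational_awareness,   # (C1) exchange observations
            agent.share_strategies,              # (C2) reveal (parts of) strategies
            agent.share_goals,                   # (C3) reveal goals, possibly adopt the other's
            agent.negotiate_goal_sacrifice,      # (C4) agree which goals to drop
        ]
        for step in steps:
            if agent.has_believed_winning_strategy():
                return "no conflict"             # maximal goals achievable in every possible world
            step(other)                          # costs privacy / constrains acting options
        return "resolved" if agent.has_believed_winning_strategy() else "unresolved"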

Example 1.
Let car A want to change lane. It perceives that it is on a two-lane highway, that the way ahead is free for the next 500 m and that B is approaching. Let A perceive B's speed via radar. That is, A makes the observation “car B is fast”, justified by the evidence radar. We annotate this briefly as radar:car B is fast. Further, let A derive from lidar data that B is slow – lidar:car B is slow.
In this situation we say that agent A has contradicting evidences. Certain evidences can be combined without contradiction and others cannot. We assume that an agent organises its evidences in maximal consistent sets (i.e., the justification graphs of Sect. 3), where each such set represents a set of possible worlds:
Example 2.
There are possible worlds of A where it is on a two-lane highway, the way ahead is free for the next 500 m and B is slowly approaching. Analogously, A considers possible worlds where B is fast. The state of the world outside of its sensors' reach is unconstrained.
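
For the small running example, the grouping of evidences into maximal consistent sets can be done by brute force. The following Python sketch is ours; the encoding of evidences and the consistency predicate are illustrative assumptions, not the paper's formalisation (which uses the justification graphs of Sect. 3).

    from itertools import combinations

    def maximal_consistent_sets(evidences, consistent):
        # Enumerate the maximal subsets of `evidences` on which `consistent` holds
        # (brute force; fine for the handful of evidences in the running example).
        consistent_sets = [
            frozenset(c)
            for r in range(1, len(evidences) + 1)
            for c in combinations(evidences, r)
            if consistent(set(c))
        ]
        return [s for s in consistent_sets
                if not any(s < t for t in consistent_sets)]

    # Ex. 1: radar and lidar contradict each other on B's speed.
    evidences = {("radar", "B is fast"), ("lidar", "B is slow")}

    def consistent(es):
        claims = {claim for _, claim in es}
        return not ({"B is fast", "B is slow"} <= claims)

    print(maximal_consistent_sets(evidences, consistent))
    # two singleton sets, one per sensor; each induces its own set of possible worlds
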
Observing the world (for some time), an agent A assesses what it can do to achieve its goals in all possible worlds. That is, A tries to find a strategy that guarantees to achieve its goals in all its possible worlds. A strategy determines at each state the action of the agent – the agent decides on an action based on its beliefs formed in the past regarding its possible worlds. If there is one such strategy for A to accomplish its goals , then A has a (believed) winning strategy for . This strategy might not be winning in the “real” world though, e.g., due to misperceptions.
Example 3.
Let A want to drive slowly and comfortably. A wants to avoid collisions and it assumes that B also wants to avoid collisions. Although A has contradicting evidences on the speed of B and hence believes that it is possible that “B is fast” and also that “B is slow”, it can follow the strategy to stay in its lane and wait until B has passed. This strategy is winning in all of A's possible worlds.

Even when A has no believed winning strategy, it can have a winning strategy for a subset of possible worlds. Additional information on the state of the world might then resolve the conflict by eliminating possible worlds. We call such conflicts observation-resolvable conflicts.

Example 4.

Let A want to change lane to circumvent the obstacle. It is happy to change directly after B, but only if B is fast. If B is slow, it prefers to change before B has passed. Further, let A have contradicting evidences on the speed of B. A considers a conflict with B possible in some world and hence has no believed winning strategy. Now it has to resolve its inconsistent beliefs. Let B tell A that it is fast. If A trusts B more than its own sensors, then A might update its beliefs by dismissing all worlds where B is slow. Then “changing after B has passed” becomes a believed winning strategy.

In the case of inconsistent evidences, as above, A has to decide how to update its beliefs. This decision will be based on the analysis of the justifications (cf. Sect. 3) of the (contradicting) evidences. Here, the lidar contradicts the radar, and B reports on its speed. Facing the contradiction between the evidences justified by lidar and radar, A trusts the evidence justified by B.
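
This trust-driven belief update can be pictured as filtering the possible worlds by the evidence of the most trusted source first. The trust ordering, world encoding and function below are illustrative assumptions of ours; the paper derives such decisions from justifications (Sect. 3).

    # Sketch of a trust-based belief update on contradicting evidences.
    TRUST = {"V2V:B": 3, "radar": 2, "lidar": 1}          # assumed trust ordering

    def update_beliefs(possible_worlds, evidences):
        # evidences: list of (source, predicate) with predicate(world) -> bool
        for source, predicate in sorted(evidences, key=lambda e: TRUST[e[0]], reverse=True):
            remaining = [w for w in possible_worlds if predicate(w)]
            if remaining:                                  # never update into an empty set of worlds
                possible_worlds = remaining
        return possible_worlds

    # Ex. 4: B reports "I am fast" via V2V and A trusts B more than its own sensors,
    # so all worlds in which B is slow are dismissed.
    worlds = [{"B_speed": "fast"}, {"B_speed": "slow"}]
    evidences = [("radar", lambda w: w["B_speed"] == "fast"),
                 ("lidar", lambda w: w["B_speed"] == "slow"),
                 ("V2V:B", lambda w: w["B_speed"] == "fast")]
    print(update_beliefs(worlds, evidences))               # -> [{'B_speed': 'fast'}]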

Suppose the agents have already exchanged observations and A still has no believed winning strategy. A conflict might then be resolved by communicating part of the other agent's (future) strategy:

Example 5.

Let A want to change lane. It prefers to change directly after B, if B passes A fast. Otherwise, A wants to change in front of B. Let B be so far away that it might still decelerate, in which case it might slow down so heavily that A would like to change in front of B even if B currently is fast.

Let A believe “B is fast”. Now A has no believed winning strategy, as B might decelerate. According to (C2), information about parts of the agents' strategies is now communicated. A asks B whether it plans to decelerate. Let B be cooperative and tell A that it will not decelerate. Then A can dismiss all worlds where B slows down and “changing after B has passed” becomes a believed winning strategy for A.

Suppose the two agents have performed steps (C1) and (C2), i.e., they have exchanged missing observations and strategy parts, and A still has no winning strategy for all possible worlds.

Example 6.

Now, in contrast to Ex. 5, let B not tell A whether it will decelerate. Then step (C3) is performed. So A asks B to respect A's goals. Since A prefers B to be fast and B agrees to adopt A's goal as its own, A can again dismiss all worlds where B slows down.

Here the conflict is resolved by communicating goals and by the agreement to adopt the other's goals. So an agent's strategy might change in order to support the other agent. We call this kind of conflict a goal-disclosure-resolvable conflict.

The conflicts considered above can be resolved by some kind of information exchange between the two agents, so that the set of an agent's possible worlds is adapted and in the end all goals of A and of B are achievable in all remaining possible worlds. The price to pay for conflict resolution is that the agents will have to reveal information. Still, there are cases where simply not all goals are (believed to be) achievable. In this case A and B have to negotiate which goals shall be accomplished. While some goals may be compatible, other goals are conflicting. We hence consider goal subsets of for which a combined winning strategy for A and B exists to achieve . We assume that there is a weight assignment function that assigns a value to a given goal combination, based on which the decision for a certain goal combination is taken. This weighting of goals reflects the relative value of the goals for the individual agents. Such a function will have to reflect, e.g., morality, ethics and jurisdiction.

Example 7.

Let A's and B's highest-priority goal be collision-freedom, reflected in goals and . Further, let A want to go fast and change lane immediately . Let B also want to go fast , so that A cannot change immediately. Now, in step (C4), A and B negotiate which goals shall be accomplished. In our scenario collision-freedom is valued most, and B's goals get priority over A's, since B is on the fast lane. Hence our resolution is to agree on a strategy accomplishing , which is the set of goals having the highest value among all those for which a combined winning strategy exists.
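
A brute-force reading of this negotiation step, instantiated with the flavour of Ex. 7, might look as follows. The goal names, weights and achievability predicate are illustrative assumptions; the paper does not prescribe a concrete algorithm here.

    from itertools import combinations

    def best_achievable_goals(goals, weight, jointly_achievable):
        # Pick the goal combination with the highest weight among those for which
        # a combined winning strategy of A and B exists (brute force).
        candidates = [
            frozenset(c)
            for r in range(len(goals) + 1)
            for c in combinations(goals, r)
            if jointly_achievable(frozenset(c))
        ]
        return max(candidates, key=weight)

    goals = {"cf_A", "cf_B", "fast_B", "change_now_A"}      # cf_* = collision-freedom

    def weight(gs):
        if not {"cf_A", "cf_B"} <= gs:                      # collision-freedom is indispensable
            return 0
        return 100 + (10 if "fast_B" in gs else 0) + (1 if "change_now_A" in gs else 0)

    def jointly_achievable(gs):
        return not {"fast_B", "change_now_A"} <= gs         # these two goals exclude each other

    print(sorted(best_achievable_goals(goals, weight, jointly_achievable)))
    # -> ['cf_A', 'cf_B', 'fast_B']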

Note that additional agents are captured as part of the environment here. At each step an agent can also decide to negotiate with some other agent than B in order to resolve its conflict.

2.1 Formal Notions

In the following we introduce basic notions to define a conflict. Conflicts, as introduced above, arise in a wide variety of system models, but we consider in this paper only a propositional setting.

Let ,…, , and be functions. We will write if and only if for all . Note that for any given as above the decomposition into its components is uniquely determined by the projections of onto the corresponding codomain.

Each agent A has a set of actions . The sets of actions of two agents are disjoint. To formally define a (possible) world model of an agent A, let be a set of states and be a set of propositional variables. represents the set of belief propositions. A state of a (possible) world is labelled with the subset of belief propositions that is (assumed to be) true; the remainder is (assumed to be) false.

A (possible) world model for an agent A is a transition system over with a designated initial state and a designated current state; all states are labelled with the belief propositions that hold at that state, and transitions are labelled with actions consisting of an action of agent A, an action of agent B and an action of the environment.

The set of actions of an agent includes send and receive actions via which information can be exchanged, the environment guarantees to transmit a send message to the respective receiver. Formally a possible world is with

  • ,

  • ,

  • ,

  • , s.t.

    • : ( is the initial state)

    • : ,
      (the current state is reachable from the initial state)
      ( is linear between and )

The part of between and represents the history of the current state. A finite run in is a sequence of states with .

There is one “special” world model that represents the ground truth, i.e., it reflects how reality evolves. An agent A considers several worlds possible at a time. That is, at each state of the real world, A has a set of possible worlds . The real world changes states according to the actions of A, B and Env. The set of possible worlds changes to due to the passing of time and due to belief updates triggered by, e.g., observations. For the scope of this paper, though, we do not consider the actual passing of time, but study the conflict analysis at a single state of the real world from the point of view of an agent. Since at each state an agent A may consider several worlds possible, it may also consider several histories possible. A strategy is hence a function that determines an action for A based on the set of possible histories. represents a set of histories, where a history is given via the sequence of valuations of along the path from to . The set of possible histories at state is the union of the histories of the possible worlds , denoted as .

Let be a run in and be a sequence of actions of agent B and Env along . follows strategy , i.e., , if . We also write to denote the set of runs of that follow .

We use linear-time temporal logic (LTL) to specify goals (cf. Def. 9). For a run r and a goal (or a conjunction of goals) , we write if r satisfies , i.e., if the valuation of the propositions along the state sequence satisfies . (We assume here that runs are infinite. In the case of finite runs, we make them infinite by repeating the last state infinitely often.) We say is a (believed) winning strategy for in if all runs of that follow also satisfy , . We say that is a (believed) winning strategy of A for at the real world state if is a winning strategy for in all possible worlds .
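
The convention of the parenthetical remark above (finite runs are made infinite by repeating the last state) can be made concrete with a small evaluator for a core LTL fragment. The sketch below is ours, uses nested tuples as formulas, and is not the authors' tooling.

    # LTL evaluation over a finite run whose last state is repeated forever.
    # A formula is a nested tuple, e.g. ("G", ("not", ("atom", "collision"))).
    def holds(phi, run, i=0):
        i = min(i, len(run) - 1)      # positions beyond the end see the repeated last state
        op = phi[0]
        if op == "atom":
            return phi[1] in run[i]
        if op == "not":
            return not holds(phi[1], run, i)
        if op == "and":
            return holds(phi[1], run, i) and holds(phi[2], run, i)
        if op == "X":                 # next
            return holds(phi[1], run, i + 1)
        if op == "F":                 # finally
            return any(holds(phi[1], run, j) for j in range(i, len(run)))
        if op == "G":                 # globally
            return all(holds(phi[1], run, j) for j in range(i, len(run)))
        if op == "U":                 # until
            return any(holds(phi[2], run, j)
                       and all(holds(phi[1], run, k) for k in range(i, j))
                       for j in range(i, len(run)))
        raise ValueError(op)

    run = [{"lane1_A"}, {"lane1_A"}, {"lane2_A"}]             # A changes lane at the last step
    print(holds(("F", ("atom", "lane2_A")), run))             # True
    print(holds(("G", ("not", ("atom", "collision"))), run))  # True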

An agent A has a set of goals and a weight assignment function that assigns values to a given goal combination. We write as shorthand for . We say is a believed achievable goal at real world state if there is a strategy that is winning for the conjunction of all goals in all possible worlds . We say is a believed maximal goal at real world state if it is a believed achievable goal and for all believed achievable goals it holds that . The empty subgoal is defined to be true (). For each possible world , agent A also has

  1. beliefs on the goals of B, , and

  2. beliefs on the importance of subgoals of to B, , and

  3. a set of justifications for , and .

So at state of the real world an agent A has belief . The justifications support decision making by keeping track of (source or more generally meta) information. They hence can influence decisions on how to update an agent’s knowledge, how to negotiate and what resolutions are acceptable.

Our notion of conflict captures the following concept: Let be the set of maximal goals that A believes it can achieve with the help of B. But since B might choose a strategy to accomplish some of its own maximal goals, A believes that it is in a conflict with B if it cannot find one winning strategy that fits all possible strategy choices of B.

Definition 1 (Believed Possible Conflict).

Let be the set of maximal subgoals of A at state for which a believed winning strategy in exists.

Agent A believes at state it is in a possible conflict with B, if for each of its winning strategies for a maximal subgoal ,

  • there is a strategy and a possible world such that is a winning strategy in for , a believed maximal subgoal of the believed goals of B in .

  • but is not a winning strategy for in .

The above notion of conflict captures that A analyses the situation within its possible worlds . It assumes that B will follow some winning strategy to accomplish its own goals, while Env is assumed to behave fully adversarially. A believes that B believes that one of A's possible worlds represents the reality. It is an interesting future extension to also allow A to have more complicated beliefs about the beliefs of B, as is already well supported by the logical framework introduced in Sect. 3. For instance, we can capture situations like: A considers it possible that there is an obstacle on the road, while it believes that B believes there is no obstacle. This extension does not change the baseline of our contribution, but it makes the following presentation more complex. So we refrain from considering beliefs about beliefs for the sake of comprehensibility.

For an example of the conflict notion, consider a situation where A drives on a highway side by side with B and A just wants to stay collision-free. A does not believe it is in a conflict situation when it believes that B also prioritises collision-freedom, since B will not suddenly choose to crash into A, which would violate B's own goal. But in case B has no strategy to accomplish collision-freedom within (assume a broken car in front of B), then A assumes that B behaves arbitrarily (achieving its remaining goal ) and A believes it is in conflict with B.
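
Once the possible worlds and the relevant strategy sets are finitely enumerated, Def. 1 suggests a simple, if expensive, check. The sketch below is our abstraction of it; wins_A and wins_B stand for the (believed) winning-strategy tests of the definition and are assumed to be given.

    # Brute-force sketch of the believed-possible-conflict check (Def. 1).
    # strategies_A: A's believed winning strategies for a maximal subgoal;
    # strategies_B: candidate strategies of B; possible_worlds: A's possible worlds.
    def believes_conflict(strategies_A, strategies_B, possible_worlds, wins_A, wins_B):
        for s_A in strategies_A:
            broken = False
            for world in possible_worlds:
                for s_B in strategies_B:
                    # B may rationally pick s_B (winning for a believed maximal subgoal
                    # of B in this world), and then s_A fails for A's goals.
                    if wins_B(s_B, world) and not wins_A(s_A, s_B, world):
                        broken = True
            if not broken:
                return False          # s_A copes with every rational choice of B
        return True                   # every winning strategy of A can be broken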

2.2 Applying the Formal Notion

In this subsection we consider the formal notions introduced in the previous subsection and illustrate them – focusing on the examples given at the start of this section.

Propositional Characterisation of the World

For the sake of a small example, let us consider the following propositional characterisation of a world: for each agent there is a pair of variables storing its position on the road. Further, each agent drives at a certain speed, abstracted to three different levels encoding slow, medium and fast. We consider only time-bounded properties. The evolution along the observed time window is captured via copies of , where encodes the observed time points. Each agent can change lane, encoded by increasing or decreasing , and choose between three speed changes, that is, (i) decelerate, inducing a change from fast to medium or from medium to slow, respectively, (ii) accelerate, from slow to medium or from medium to fast, respectively, or (iii) keep its current speed.

A Real World Model

In this setting each state of the real world model is labelled with propositions, and there are transitions from a state s to a state s’ labelled (s,s’)=( ), where and . encodes whether agent X chooses to perform a lane change and sc encodes how X chooses to change its speed, i.e., to accelerate, to decelerate or to keep its speed. The target state is labelled according to the effect of the chosen actions.

The initial state encodes the start situation (of the tour) and the subgraph from the initial state to the current state captures the observed past. A world model has a branching structure from the current state towards the future into the possible different options of lane changing and choices of speed change. Such a world model describes the past, the current state of the world and possible future evolutions. For each point in time there is hence such a world model. See Fig.  2 for a sketch of an example of a world model at a time .

Figure 2: The transition labelling is sketched within the figure itself; the state labelling is omitted there. Let us assume that the initial state is labelled with propositions describing the situation of Fig. 1. Currently we are at the time . A, B and the environment (determining the moves of the obstacle) have done three moves: (1) all three stayed in their respective lane and kept their speed, (2) the same, but B accelerated, and (3) the same as (1). The state labelling reflects the changes induced by the chosen moves. So the propositions that are true at, e.g., differ from those of only in terms of the respective positions.
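
The encoding of lanes, positions and the three speed levels, together with the effect of a joint lane-change/speed-change action, can be sketched as follows. The concrete step sizes and coordinates are illustrative assumptions, not the paper's model, and the environment's move for the obstacle is omitted.

    SPEEDS = ["slow", "medium", "fast"]

    def successor(state, actions):
        """state:   {'A': (x, y, speed), 'B': (x, y, speed), 'obstacle': (x, y)}
        actions: {'A': (lane_change, speed_change), 'B': (...)} with
        lane_change in {-1, 0, +1} and speed_change in {'dec', 'keep', 'acc'}."""
        nxt = dict(state)
        for agent, (dl, sc) in actions.items():
            x, y, speed = state[agent]
            level = SPEEDS.index(speed)
            if sc == "acc":
                level = min(level + 1, 2)
            elif sc == "dec":
                level = max(level - 1, 0)
            x = x + level + 1                  # a higher speed level advances further per step
            nxt[agent] = (x, y + dl, SPEEDS[level])
        return nxt

    state = {"A": (0, 1, "medium"), "B": (-3, 2, "fast"), "obstacle": (5, 1)}
    print(successor(state, {"A": (0, "keep"), "B": (0, "acc")}))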

Possible Worlds

In addition to the labelling of states and transitions, the real world is also labelled with the beliefs of the agents at each time. The gist is:

  1. the real world model captures the past, the present and the possible futures at a time .

  2. At time an agent within world model considers a set of worlds possible. This belief is justified by e.g. evidences from its sensors.

An example is sketched in Fig.  3, where only A’s beliefs are sketched. Note that the state labelling, i.e. the set of true propositions, is not specified in Fig.  3 in order to declutter the figure. Some state labelling is given in Fig.  4.

Figure 3: The real world to the right is associated with beliefs of agent A. A considers at time two worlds as possible, one, M1, bisimilar to the real world and a second one, M2, where B accelerates as its first move.

For Fig. 3, let the initial states of the real world and of the possible worlds be identically labelled, i.e., the agent believes in the “real” past.

Let us now consider Ex. 1. Agent A has evidence for B being fast and it also has evidence for B being slow. Fig. 4 illustrates that agent A considers two (sets of) worlds possible that differ in the valuation of the respective state propositions. Agent A believes that a world is possible where B is fast – this is justified by its radar data – and A considers a world possible where B is slow – justified by its lidar.

We assume that an agent considers any world possible that can be justified by some non-empty set of consistent evidences. So a possible world satisfies, e.g., a set of constraints that is derived from the agent's observations, i.e., the sensory evidences, and it also has to be compatible with the agent's laws/rules about the world, e.g., physical laws.

The evidences provided by radar and lidar in Ex. 1 imply constraints and . These constraints are contradictory and hence there is no possible world that satisfies both constraints. So there cannot be an arrow in Fig. 4 from the real world to a possible world that is labelled with a justification set containing both justifications, i.e., {radar, lidar}. Nevertheless, the radar and lidar evidences justify that agent A believes in alternative worlds (e.g., B is fast, so it is possible that (a) B was driving at medium speed and accelerated, or (b) B was fast and kept its speed).

Figure 4: A currently considers two sets of worlds possible: one set contains all possible worlds where B is slow, in accordance with the lidar, and all worlds in the other set satisfy that B is fast, in accordance with the radar.

Strategy and Possible Worlds

Let us formalise Ex. 3. A considers worlds possible where it has the evidence radar:car B is fast and hence considers worlds possible where B is fast, and also worlds where B is not fast, due to its evidence lidar:car B is slow. We already sketched the possible worlds of A above. In order to specify the goals of A and the goals A believes B has, we use the usual LTL operators for finally, globally and until; in addition, we use bounded variants to express that a property has to become true within a given number of steps and, likewise, that a property has to hold at all times up to a given step.

A wants to drive slowly and comfortably, and to avoid collisions . A also assumes that B wants to avoid collisions, . The weight assignment to subsets of goals for A is , and for all other subsets . This expresses that collision-freedom is indispensable. Further, A believes collision-freedom is also indispensable for B. Additionally, A derives from a constraint that expresses that B will not jeopardise collision-freedom and hence will not drive irrationally into A. This constraint further restricts the set of worlds that A considers possible (cf. Fig. 5).

Figure 5: A derives additional constraints from B's goals that constrain its set of possible worlds.

In this situation, A decides on its next move. It is not aware of the state of the real world and decides based only on its current beliefs regarding the possible worlds, the associated goals of B and the goal weights. A determines that staying in its current lane and not changing its speed now is a good move, since it can stop and wait in any case, i.e., this move is the prefix of a winning strategy in M1 and all other possible worlds in which B is slow, and also in M2 and all other possible worlds that satisfy that B is fast (cf. Fig. 4).

Conflicts

In Ex. 4 A has the goals

  • avoid collisions,

  • change lane (change_lane),

  • change lane before B has passed, if B is slow (change_lane), and

  • do not change lane before B has passed, if B is fast.

We assume here that A has only short-term goals and that global goals are determined at a higher level. (Note that even collision-freedom might be sacrificed in so-called dilemma situations.) A also assumes that B wants to avoid collisions. The weight assignment to subsets of goals for A is specified as follows (note that does not imply ): , and for all other subsets.

Obviously A has a winning strategy if it could also determine B's future moves. In this case it can achieve . If A assumes that B follows a strategy achieving B's own goals under the assumption that A will cooperate (i.e., B can rule out that A changes lane and thereby forces B to decelerate), then B can, e.g., adopt a winning strategy where A stays in lane 1, B stays in lane 2 and B chooses its speed arbitrarily without endangering collision-freedom. A does not have a single winning strategy for all these strategies of B, since A cannot follow the same strategy if (i) B is fast in its next three steps and if (ii) B is not fast in at least one of the next three steps.

If B tells A how fast it will go in its next three steps, the additional information provided by B makes A dismiss all possible worlds that do not satisfy the evidence on B's future behaviour. A can then determine an appropriate strategy for all (remaining) possible worlds, and the conflict situation is hence resolved.

3 Epistemic Logic, Justifications and Justification Graph

Conflict analysis requires knowing who believes to be in conflict with whom and which pieces of information made that agent believe it is in conflict. To this end we introduce the logic of justification graphs, which allows us to keep track of external information and extends purely propositional formulae by so-called belief atoms (cf. Sect. 3.1), which are used to label the sources of information. In Sect. 2 we already used such formulae, e.g., “radar:car B is fast”. Our logic provides several atomic accessibility relations representing justified beliefs of various sources, as required for our examples of Sect. 2. It provides justification graphs as a means to identify belief entities which compose different justifications into consistent information even when the information base contains contradicting information from different sources, as required for analysing conflict situations.

First, this section provides a short overview of epistemic modal logics and multi-modal extensions thereof. Such logics use modal operators to express knowledge and belief stemming from different sources. We will often refer to this knowledge and belief as information, especially when focusing on the sources of the information. Thereafter, the basic principles of justification logics are briefly reviewed. Justification logics are widely seen as interesting variants of epistemic logics, as they allow one to trace back intra-logical and external justifications of derived information. In the following discussion it turns out that tracing back external justifications follows the same principles as the distribution of information over different sources.

Consequently, the concepts of information source and external justification are then unified in our variant of an epistemic modal logic. This logic of justification graphs extends the modal logic by a justification graph. The nodes of a justification graph are called belief entities and represent groups of consistent information. The leaf nodes of a justification graph are called belief atoms, which are information sources and external justifications at the same time, as they are the least constituents of external information. We provide a complete axiomatisation with respect to the semantics of the logic of justification graphs.

3.1 Justification Graphs

Modal Logic and Epistemic Logic.

Modal logic extends classical logic with modal operators expressing necessity and possibility. The formula □φ is read as “φ is necessary” and ◇φ is read as “φ is possible”. The notions of possibility and necessity are dual to each other: ◇φ can be defined as ¬□¬φ. The weakest modal logic extends propositional logic by the axiom (K) and the necessitation rule (Nec) as follows:

(K) □(φ → ψ) → (□φ → □ψ)
(Nec) from ⊢ φ infer ⊢ □φ

The axiom (K) ensures that whenever φ → ψ and φ necessarily hold, then ψ also necessarily has to hold. The necessitation rule allows us to infer the necessity of φ from any proof of φ and hence pushes any derivable logical truth into the range of the modal operator □. This principle is also known as logical awareness. Various modalities like belief or knowledge can be described by adding further axioms encoding the characteristic properties of the respective modal operator. The following two axioms are useful to model knowledge and belief:
(T) □φ → φ
(D) □φ → ◇φ

The axioms (T) and (D) relate necessity to the factual world. While the truth axiom (T) characterises knowledge, as it postulates that everything which is necessary is also factual, (D) characterises belief, as it postulates the weaker property that everything which is necessary is also possible. Under both axioms □⊥ → ⊥ holds, i.e. a necessary contradiction also yields a factual contradiction.
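
The last claim has a two-line derivation from the axioms named above; the following reconstruction is our addition (the rendered formulas of the paper were lost in extraction), given as a LaTeX snippet.

    % Why a necessary contradiction is also a factual contradiction.
    \begin{align*}
      \text{under (T):}\quad & \Box\bot \to \bot \ \text{is an instance of}\ \Box\varphi \to \varphi;\\
      \text{under (D):}\quad & \Box\bot \to \Diamond\bot \ \text{with}\
        \Diamond\bot \equiv \neg\Box\neg\bot \equiv \neg\Box\top,\\
      & \text{and } \vdash \Box\top \text{ by (Nec), so } \Diamond\bot \to \bot
        \text{ and hence } \Box\bot \to \bot.
    \end{align*}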

Multi-modal logics are easily obtained by adding several modal operators with possibly different properties and can be used to express the information of more than one agent. E.g., the formula expresses that the piece of information belongs to the modality . Modal operators can also be used to represent modalities referring to time. E.g., in the formula the temporal modality expresses that will hold in the next time step. An important representative of a temporal extension is linear temporal logic (LTL).

In multi-agent logics the notions of common information and distributed information play an important role. While common knowledge captures the information which is known to every agent , we are mainly interested in information that is distributed within a group of agents . The distributed information within a group contains any piece of information that at least one of the agents , …, has. Consequently, we introduce a set-like notion for groups, where an agent is identified with the singleton group and the expression is used to denote that is distributed information within the group . The distribution of information is axiomatised by

(Dist)

Note that groups may not be empty. The modal logic for distributed information contains for every group at least the axiom , the necessitation rule , and the axiom for any group with .

Justification Logics.

Justification logics [3] are variants of epistemic modal logics where the modal operators of knowledge and belief are unfolded into justification terms. Hence, justification logics allow a complete realisation of Plato's characterisation of knowledge as justified true belief. A typical formula of justification logic has the form t:φ, where t is a justification term built from justification constants, and it is read as “φ is justified by t”. The basic justification logic results from extending propositional logic by the application axiom and the sum axioms

(Appl) t:(φ → ψ) → (s:φ → (t·s):ψ)
(Sum) s:φ → (s+t):φ and s:φ → (t+s):φ

where s and t are justification terms which are assembled from justification constants using the operators · and + according to the axioms. Justification logics tie the epistemic tradition together with proof theory. Justification terms are reasonable abstractions of constructions of proofs. If t is a proof of φ → ψ and s is a proof of φ, then the application axiom postulates that there is a common proof, namely t·s, for ψ. Moreover, if we have a proof s for φ and some proof t, then the concatenations of both proofs, s+t and t+s, are still proofs for φ. In our framework we were not able to derive any meaningful example using the sum axiom of justification logic. Therefore this axiom is omitted in the following discussion.

Discussion.

All instances of classical logical tautologies, like and , are provable in justification logics. But in contrast to modal logics, justification logics do not have a necessitation rule. The lack of the necessitation rule allows justification logics to break the principle of logical awareness, as t:φ is not necessarily provable for an arbitrary justification term t. Certainly, restricting the principle of logical awareness is attractive for providing a realistic model of restricted logical resources. Since we are mainly interested in revealing and resolving conflicts, the principle of logical awareness is indispensable in our approach.

Nevertheless, justification logic can simulate unrestricted logical awareness by adding proper axiom internalisation rules for all axioms and justification constants. In such systems a weak variant of the necessitation rule of modal logic holds: for any derivation ⊢ φ there exists a justification term t such that ⊢ t:φ holds. Since φ was derived using axioms and rules only, the justification term t is also exclusively built from justification constants dedicated to the involved axioms. Beyond that, t is hardly informative, as it does not help to reveal external causes of a conflict. Hence, we omit the axiom internalisation rule and add the modal axiom and the modal necessitation rule for any justification term to obtain a justification logic where each justification term is closed under unrestricted logical awareness.

An important consequence of the proposed system is that becomes virtually idempotent and commutative. (For any instance of there is an instance of in the proposed system; moreover, it is an easy exercise to show that any instance of is derivable in the proposed system.) These insights allow us to argue merely about justification groups instead of justification terms. It turns out that a proper reformulation of with regard to justification groups is equivalent to , finally yielding the same axiomatisation for distributed information and compound justifications.

Belief Atoms, Belief Groups, and Belief Entities.

So far, we argued that assembling distributed information and compound justifications follow the same principle. In the following we even provide a unified concept for the building blocks of both notions. A belief atom is the least constituent of external information in our logic. To each we assign the modal operator . Hence, for any formula also is a formula saying has information . Belief atoms play different roles in our setting. A belief atom may represent a sensor collecting information about the state of the world, or it may represent certain operational rules as well as a certain goal of the system. The characteristic property of a belief atom is that the information of a belief atom has to be accepted or rejected as a whole. Due to its external and indivisible nature, is the only source of evidence for its information. The only justification for information of is itself. Consequently, can also be read as is the justification for . This is what belief atoms and justifications have in common: either we trust a justification or not.

The information of a system is distributed among its belief atoms. The modal logic for distributed information allows us to consider the information which is distributed over a belief group. While belief groups can be built arbitrarily from belief atoms, we also introduce the concept of belief entities. A belief entity is either a belief atom, or a distinguished group of belief entities. Belief entities are dynamically distinguished by a justification graph. In contrast to belief groups, belief entities and belief atoms are not allowed to have inconsistent information. Hence a justification graph allows us to restrict the awareness of extra-logical evidences – so we can distinctively integrate logical resources that have to be consistent.

Justification Graphs.

Let be a set of propositional variables and let be the set of belief entities. The designated subset of denotes the set of belief atoms.

Definition 2 (Language of Justification Graphs).

A formula is in the language of justification graphs if and only if is built according to the following BNF, where and :

Using the descending sequence of operator precedences (, , , , , ), we can define the well-known logical connectives , , and from and . Often, we omit brackets if the formula is still uniquely readable. We define to be right associative. For singleton sets we also write instead of . The language allows the usage of temporal operators for next time (), previous time (), until (), and since (). Operators like always in the future () or always in the past () can be defined from the given ones.

Definition 3 (Justification Graph).

A justification graph is a directed acyclic graph whose nodes are belief entities of . An edge denotes that the belief entity has the component . The set of all direct components of an entity is defined as .

The leaf nodes of a justification graph are populated by belief atoms, i.e. for any belief entity it holds if and only if .
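
A justification graph in the sense of Def. 3 can be represented by a plain component map. The Python sketch below is an illustrative data structure of ours, not the paper's implementation; it omits the acyclicity check.

    # Sketch of a justification graph (Def. 3): nodes are belief entities,
    # leaves are belief atoms, edges point from an entity to its components.
    class JustificationGraph:
        def __init__(self):
            self.components = {}                     # entity -> set of direct components

        def add_entity(self, entity, components=()):
            self.components[entity] = set(components)

        def is_atom(self, entity):
            return not self.components.get(entity)   # leaf nodes have no components

        def atoms_below(self, entity):
            """All belief atoms an entity is ultimately built from."""
            if self.is_atom(entity):
                return {entity}
            return set().union(*(self.atoms_below(c) for c in self.components[entity]))

    g = JustificationGraph()
    g.add_entity("radar")                            # belief atoms
    g.add_entity("lidar")
    g.add_entity("V2V:B")
    g.add_entity("perception_A", {"radar", "V2V:B"})  # a consistent belief entity
    print(g.atoms_below("perception_A"))             # -> {'radar', 'V2V:B'}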

Definition 4 (Axioms of a Justification Graph).

Let be a justification graph. The logic of a justification graph has the following axioms and rules.

  1. As an extension of propositional logic, the rule of modus ponens has to hold: from φ and φ → ψ conclude ψ. Any substitution instance of a propositional tautology is an axiom.

  2. Belief groups are closed under logical consequence and follow the principle of logical awareness. Information is freely distributed along the subgroup-relation. For any belief group the axiom and the necessitation rule hold. For groups and with the axiom holds.

  3. Belief entities are not allowed to have inconsistent information. Non-atomic belief entities inherit all information of their components. For any belief entity the axiom holds. If is a subgroup of the components of , then the axiom holds.

  4. In order to express temporal relations, the logic for the justification graph includes the axioms of Past-LTL (LTL with past operators). A comprehensive list of axioms can be found in [4].

  5. Information of a belief entity and time are related. The axiom ensures that every belief entity correctly remembers its prior beliefs and establishes a principle which is also known as perfect recall (e.g., see [5]).

Definition 5 (Proof).

Let be a justification graph. A proof (derivation) of in is a sequence of formulae with such that each is either an axiom of the justification graph or is obtained by applying a rule to previous members with . We will write if and only if such a sequence exists.

Definition 6 (Proof from a set of formulae).

Let be a justification graph and be a set of formulae. The relation holds if and only if for some finite subset with .

Definition 7 (Consistency with respect to a justification graph).

Let be a justification graph.

  1. A set of formulae is -inconsistent if and only if . Otherwise, is -consistent. A formula is -inconsistent if and only if is -inconsistent. Otherwise, is -consistent.

  2. A set of formulae is maximally -consistent if and only if is -consistent and for all the set is -inconsistent.

Semantics.

Let be the state space, that is the set of all possible states of the world. An interpretation over is a mapping that maps each state to a truth assignment over , i.e.  is the subset of all propositional variables which are true in the state . A run over is a function from the natural numbers (the time domain) to . The set of all runs is denoted by .

Definition 8.

Let be a justification graph. A Kripke structure for is a tuple where

  1. is a state space,

  2. is the set of all runs over ,

  3. is an interpretation over ,

  4. each in is an individual accessibility relation for a belief entity in .

Definition 9 (Model for a Justification Graph).

Let be a Kripke structure for the justification graph , where

  1. is a serial relation for any belief entity ,

  2. is defined as for any belief group ,

  3. holds for all non-atomic belief entities and any subgroup .

We recursively define the model relation as follows:

When holds, we call a pointed model of for . If is a pointed model of for , then we write and say that the run satisfies . Finally, we say that is satisfiable for , denoted by if and only if there exists a model and a run such that holds.

Proposition 1 (Soundness and Completeness).

The logic of a justification graph is a sound and complete axiomatisation with respect to the model relation . That is, a formula is -consistent if and only if is satisfiable for .

While the soundness proof is straightforward, a self-contained completeness proof involves lengthy sequences of various model constructions and is far beyond the page limit. However, it is well known (e.g., [6]) that , the -agent extension of with distributed information, is a sound and complete axiomatisation with respect to the class of Kripke structures having arbitrary accessibility relations, where the additional accessibility relations for groups are given as the intersections of those of the participating agents, analogously to Def. 9.(ii). Also the additional extension with for any belief group is sound and complete with respect to Kripke structures having serial accessibility relations, analogously to Def. 9.(i). The axioms of the justification graph lie between these two systems. Def. 9.(iii) explicitly allows belief entities to have more information than their components. Various completeness proofs for combining LTL and epistemic logics are given, e.g., in [5].

Extracting Justifications.

Let be a finite set of formulae logically describing the situation which is the object of our investigation. Each formula encodes information of belief atoms ( with ), facts ( where does not contain any epistemic modal operator), or is an arbitrary Boolean combination thereof. Further, let be a justification graph such that is -consistent and let be a non-atomic belief entity of . For any formula we may now ask whether is part of the information of . If there is a proof , then is included in 's information. To extract a justification for , we use that is -inconsistent and accordingly unsatisfiable for . If we succeed in extracting a minimal unsatisfiable core, a minimal inconsistency proof can be recovered, from which finally the used justifications are extracted.
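
In the restricted propositional setting used in the case study (cf. Prop. 2 below), this core extraction can be delegated to an off-the-shelf solver. The paper's prototype employs the Yices SMT solver; the sketch below instead assumes the Z3 Python bindings (package z3-solver) purely for illustration, and the returned core is not guaranteed to be minimal.

    # Sketch: track each belief atom's information under a label and ask the
    # solver for an unsatisfiable core of the labels (assumes Z3, not Yices).
    from z3 import And, Bool, Not, Solver, unsat

    b_fast, b_slow = Bool("B_is_fast"), Bool("B_is_slow")

    labelled = {
        "radar":   b_fast,
        "lidar":   b_slow,
        "physics": Not(And(b_fast, b_slow)),   # B cannot be fast and slow at once
    }

    s = Solver()
    s.set(unsat_core=True)
    for label, formula in labelled.items():
        s.assert_and_track(formula, label)

    if s.check() == unsat:
        core = s.unsat_core()                  # a (not necessarily minimal) core
        print("jointly inconsistent justifications:", [str(c) for c in core])
        # e.g. ['radar', 'lidar', 'physics']: these atoms cannot form one belief entity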

The following proposition allows the use of SAT/SMT solvers in a restricted setting and has been used in our case study.

Proposition 2 (SAT Reduction).

Let be a set of formulae such that each element is of the form with and does not contain any epistemic modal operators. Further, let be an arbitrary belief entity that does not occur in . Then is a justification graph for if and only if is satisfiable over the non-epistemic fragment of the logic of justification graphs.

Proof.

The satisfiability relation for the non-epistemic fragment is independent of the accessibility relations , and, consequently, also independent of . In particular, is satisfiable if and only if there exists a model and a run such that .

Let be a graph.

Let us first assume that is a justification graph for . Then according to Def. 9 there exists a Kripke structure and a run such that for all . Hence, we have for all with . Furthermore, from item (i) and (iii) of Def. 9 we observe that is not empty and . Hence, there exists at least one run that satisfies all formulae in . Since does not contain epistemic operators, we found a model and a run such that .

For the other direction, let us assume that there exists some model and a run such that . We extend to a Kripke structure by setting for all if and only if for all