Dynamic Conflict Resolution Using Justification Based Reasoning

10/31/2019
by   Werner Damm, et al.
University of Oldenburg

We study conflict situations that dynamically arise in traffic scenarios, where different agents try to achieve their sets of goals and have to decide what to do based on their local perception. We distinguish several types of conflicts for this setting. In order to enable the modelling of conflict situations and of the reasons for conflicts, we present a logical framework that adopts concepts from epistemic logic, modal logic, justification logic and temporal logic. Using this framework, we illustrate how conflicts can be identified and how a chain of justifications leading to the conflict can be derived. We discuss how conflict resolution can be done when a vehicle has local, incomplete information, vehicle-to-vehicle (V2V) communication and partially ordered goals.


1 Introduction

As humans are replaced by autonomous systems, such systems must be able to interact with each other and resolve dynamically arising conflicts. Examples of such conflicts arise when a car wants to enter the highway in dense traffic or simply when a car wants to drive faster than the preceding one. Such “conflicts” are pervasive in road traffic, and although traffic rules define a jurisdictional frame, the decision, e.g., to give way is not uniquely determined but influenced by a list of prioritised goals of each system and the personal preferences of its user. If it is impossible to achieve all goals simultaneously, autonomous driving systems (ADSs) have to decide “who” will “sacrifice” which goal in order to decide on their manoeuvres. Matters get even more complicated when we take into account that an ADS has only partial information. It perceives the world via sensors of limited reach and precision. Moreover, measurements can be contradictory. An ADS might use V2V communication to retrieve more information about the world, but it inevitably has only confined insight into other traffic participants and its environment. Nevertheless, for the acceptance of ADSs, it is imperative to implement conflict resolution mechanisms that take into account the high dimensionality of decision making. These decisions have to be explainable and, in case of an incident, the system has to be accountable for its decisions.

In this paper we study conflict situations as they dynamically occur in road traffic and develop a formal notion of conflict between two agents. We distinguish several types of conflicts and propose a conflict resolution process where the different kinds of conflicts are resolved in an incremental fashion. This process successively increases the required cooperation and decreases the privacy of the agents, finally negotiating which goals of the two agents have to be sacrificed. We present a logical framework enabling the analysis of conflicts. This framework borrows from epistemic and modal logic in order to accommodate the bookkeeping of evidences used during a decision process. The framework in particular provides a means to summarise consistent evidences and keep them apart from inconsistent evidences. We hence can, e.g., fuse one set of compatible perceptions into a belief about the world, fuse another set of compatible perceptions into a second belief, and model decisions that take into account that the two beliefs might contradict each other. Using the framework we illustrate how conflicts can be explained and algorithmically analysed as required for our conflict resolution process. Finally we report on a small case study using a prototype implementation (employing the Yices SMT solver [15]) of the conflict resolution algorithm. We discuss related work in Sect. 5. In particular we discuss work regarding the notion of traffic conflict and relate our work to work in the perimeter of game theory [11] and on strategy synthesis for levels of cooperation like [12, 8].

Outline

In Sect. 2 we introduce the types of conflict on a running example and develop a formal notion of conflict between two agents. We elaborate on the logical foundations for modelling and analysing conflicts and on the logical framework itself in Sect. 3. We sketch our case study on conflict analysis in Sect. 4 and outline in Sect. 4.2 an algorithm for analysing conflict situations as required by our resolution protocol and for deriving explanations of the conflict for the resolution. Before drawing conclusions in Sect. 6, we discuss related work in Sect. 5.

2 Conflict

Already in 1969, in the paper “Violence, Peace and Peace Research” [19], J. Galtung presented his theory of the Conflict Triangle, a framework used in the study of peace and conflict. Following this theory, a conflict comprises three aspects: opposing actions, incompatible goals, and inconsistent beliefs (regarding the reasons for the conflict, the knowledge of the conflict parties, …).

We focus on conflicts that arise dynamically between two agents in road traffic. We develop a characterisation of conflict as a situation where one agent can accomplish its goals with the help of the other, but both agents cannot accomplish all their goals simultaneously and the agents have to decide what to do based on their local beliefs. In Sect. 2.1 we formalise our notion of conflict. For two agents with complete information, we may characterise a conflict as follows: agents A and B are in conflict if A would accomplish its set of goals if B did what A requests, B would accomplish its set of goals if A did what B requests, and it is impossible to accomplish both sets of goals together. A situation where A and B both compete to consume the same resource is thus an example of a conflict situation. Since we study conflicts from the viewpoint of an agent's beliefs, we also consider believed conflicts, which can be resolved by sharing information regarding the other's observations, strategies or goals. To resolve a conflict we propose a sequence of steps that require an increasing level of cooperation and a decreasing level of privacy – the steps require to reveal information or to constrain acting options. Our resolution process defines the following steps:

(C1) Shared situational awareness
(C2) Sharing strategies
(C3) Sharing goals
(C4) Agreeing on which goals to sacrifice and which strategy to follow

Corresponding to (C1) to (C4), we introduce different kinds of conflicts on a running example – a two-lane highway, where one car, A, is heading towards an obstacle on its lane and on the lane to its left a fast car, B, is approaching from behind (cf. Fig. 1). An agent has a prioritised list of goals (like 1. “collision-freedom”, 2. “changing lane” and 3. “driving fast”). We assume that an agent's goals are achievable.

Figure 1: Car A wants to circumvent the obstacle (grey box). Car B is approaching from behind.

An agent A has a set of actions and exists within a world. At a time the world has a certain state. The world “evolves” (changes state) as determined by the chosen actions of the agents within the world and by events determined by the environment within the world. The agent perceives the world only via a set of observation predicates, that is, predicates whose valuation is determined by an observation of the agent. Without an observation the agent has no (direct) evidence for the valuation of the respective observation predicate.

Example 1.
Let car A want to change lane. It perceives that it is on a two-lane highway, the way ahead is free for the next 500 m and B is approaching. Let A perceive B's speed via radar. That is, A makes the observation “car B is fast” justified by the evidence radar. We annotate this briefly as radar : car B is fast. Further let A derive from lidar data that B is slow – lidar : car B is slow.
In this situation we say agent A has contradicting evidences. Certain evidences can be combined without contradiction and others cannot. We assume that an agent organises its evidences in maximal consistent sets (i.e., the justification graphs of Sect. 3), where each such set represents a set of possible worlds:
Example 2.
There are possible worlds of A where it is on a two-lane highway, the way ahead is free for the next 500 m and B is slowly approaching. Analogously, A considers possible worlds where B is fast. The state of the world outside of its sensors' reach is unconstrained.
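The following Python sketch is our own illustration (not part of the paper's prototype) of how contradicting evidences such as those of Examples 1 and 2 can be organised into maximal consistent sets; all names used here are hypothetical.

```python
# Illustrative sketch: grouping agent A's evidences into maximal consistent sets.
from itertools import combinations

# Each evidence: (justification, proposition, truth value).
evidences = [
    ("radar", "car_B_is_fast", True),
    ("lidar", "car_B_is_fast", False),   # lidar: car B is slow
    ("camera", "way_ahead_free_500m", True),
    ("map", "two_lane_highway", True),
]

def consistent(subset):
    """A set of evidences is consistent if no proposition gets both values."""
    values = {}
    for _, prop, val in subset:
        if values.setdefault(prop, val) != val:
            return False
    return True

def maximal_consistent_sets(evs):
    """All consistent subsets that cannot be extended by another evidence."""
    result = []
    for k in range(len(evs), 0, -1):
        for subset in combinations(evs, k):
            if consistent(subset) and not any(set(subset) < set(r) for r in result):
                result.append(set(subset))
    return result

for mcs in maximal_consistent_sets(evidences):
    print(sorted(j for j, _, _ in mcs))
# One set is justified by {camera, map, radar} ("B is fast"), the other by
# {camera, lidar, map} ("B is slow"); each represents a family of possible
# worlds, unconstrained outside the sensors' reach.
```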
Observing the world (for some time), an agent A assesses what it can do to achieve its goals in all possible worlds. That is, A tries to find a strategy that guarantees to achieve its goals in all its possible worlds. A strategy determines at each state the action of the agent – the agent decides for an action based on its believed past. If there is one such strategy for A to accomplish its goals, then A has a (believed) winning strategy. This strategy might not be winning in the “real” world though, e.g., due to misperceptions.
Example 3.
Let A want to drive slowly and comfortably. A wants to avoid collisions and it assumes that B also wants to avoid collisions. Although A has contradicting evidences on the speed of B and hence believes that it is possible that “B is fast” and also that “B is slow”, it can follow the strategy to stay in its lane and wait until B has passed. This strategy is winning in all of A's possible worlds.

Even when A has no believed winning strategy, it can have a winning strategy for a subset of its possible worlds. Additional information on the state of the world might then resolve the conflict by eliminating possible worlds. We call such conflicts observation-resolvable conflicts.

Example 4.

Let A want to change lane to circumvent the obstacle. It is happy to change directly after B, but only if B is fast. If B is slow, it prefers to change before B has passed. Further let A have contradicting evidences on the speed of B. A considers a conflict with B possible in some world and hence has no believed winning strategy. Now it has to resolve its inconsistent beliefs. Let B tell A that it is fast, and let A trust B more than its own sensors; then A might update its beliefs by dismissing all worlds where B is slow. Then “changing after B has passed” becomes a believed winning strategy.

In case of inconsistent evidences, as above, A has to decide how to update its beliefs. This decision will be based on the analysis of justifications (cf. Sect. 3) of the (contradicting) evidences. The lidar contradicts the radar, and B reports on its own speed. Facing the contradiction of the evidences justified by lidar and radar, A trusts the evidence justified by B.
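As a minimal illustration of such a trust-based belief update (our own sketch; `possible_worlds` and `trusted_facts` are hypothetical names), dismissing worlds that contradict a trusted report can be as simple as filtering:

```python
# Sketch of the belief update in Example 4: when B's own report is trusted
# over A's sensors, A dismisses every possible world contradicting it.
possible_worlds = [
    {"car_B_is_fast": True,  "way_ahead_free": True},   # justified by radar
    {"car_B_is_fast": False, "way_ahead_free": True},   # justified by lidar
]

trusted_facts = {"car_B_is_fast": True}   # V2V message from B, trusted more than lidar

updated_worlds = [
    w for w in possible_worlds
    if all(w.get(p) == v for p, v in trusted_facts.items())
]
print(updated_worlds)   # only the "B is fast" world survives
```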

Let the agents have already exchanged observations and let A still have no believed winning strategy. Such a conflict might be resolved by communicating part of the other agent's (future) strategy:

Example 5.

Let A want to change lane. It prefers to change directly after B if B passes A fast. Otherwise, A wants to change in front of B. Let B be so far away that B might decelerate, in which case it might slow down so heavily that A would like to change in front of B even if B currently is fast.

Let A believe “B is fast”. Now A has no believed winning strategy, as B might decelerate. According to (C2), information about parts of the agents' strategies is now communicated. A asks B whether it plans to decelerate. Let B be cooperative and tell A that it will not decelerate. Then A can dismiss all worlds where B slows down, and “changing after B has passed” becomes a believed winning strategy for A.

Let the two agents have performed steps (C1) and (C2), i.e., they have exchanged missing observations and strategy parts, and still A has no winning strategy for all possible worlds.

Example 6.

Let now, in contrast to Ex. 5, B not tell A whether it will decelerate. Then step (C3) is performed. So A asks B to respect A's goals. Since A prefers B to be fast and B agrees to adopt A's goal as its own, A can again dismiss all worlds where B slows down.

Here the conflict is resolved by communicating goals and by agreeing to adopt the other's goals. So an agent's strategy might change in order to support the other agent. We call such conflicts goal-disclosure-resolvable conflicts.

The conflicts considered above can be resolved by some kind of information exchange between the two agents, so that the set of an agent's possible worlds is adapted and in the end all goals of A and of B are achievable in all remaining possible worlds. The price to pay for conflict resolution is that the agents have to reveal information. Still, there are cases where simply not all goals are (believed to be) achievable. In this case A and B have to negotiate which goals shall be accomplished. While some goals may be compatible, other goals are conflicting. We hence consider goal subsets for which a combined winning strategy for A and B exists. We assume that there is a weight assignment function that assigns a value to a given goal combination, based on which the decision for a certain goal combination is taken. This weighting of goals reflects the relative value of the goals for the individual agents. Such a function will have to reflect, e.g., morals, ethics and jurisdiction.
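A minimal sketch of this selection step, assuming an additive weight function and an abstract check for the existence of a combined winning strategy (both our own simplifications, not the paper's algorithm):

```python
# Step (C4) sketch: among all goal combinations for which a combined winning
# strategy exists, pick the one with the highest weight.
from itertools import combinations

goals = {"A_no_collision": 10, "B_no_collision": 10,
         "A_change_lane_now": 2, "B_go_fast": 3}

def weight(combo):
    # A simple additive weight assignment; the paper only requires *some*
    # function valuing goal combinations (reflecting priorities, norms, ...).
    return sum(goals[g] for g in combo)

def achievable(combo):
    # Placeholder for "a combined winning strategy for A and B exists":
    # here, A cannot change lane immediately while B keeps going fast.
    return not {"A_change_lane_now", "B_go_fast"} <= set(combo)

candidates = [set(c) for k in range(len(goals), 0, -1)
              for c in combinations(goals, k) if achievable(set(c))]
best = max(candidates, key=weight)
print(best)  # both collision-freedom goals plus B_go_fast
```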

Example 7.

Let A's and B's highest-priority goal be collision-freedom, reflected in one goal for each of them. Further let A want to go fast and change lane immediately. Let also B want to go fast, so that A cannot change immediately. Now in step (C4) A and B negotiate which goals shall be accomplished. In our scenario collision-freedom is valued most, and B's goals get priority over A's, since B is on the fast lane. Hence our resolution is to agree on a strategy accomplishing the set of goals having the highest value among all those for which a combined winning strategy exists.

Note that additional agents are captured as part of the environment here. At each step an agent can also decide to negotiate with some agent other than B in order to resolve its conflict.

2.1 Formal Notions

In the following we introduce basic notions to define a conflict. Conflicts, as introduced above, arise in a wide variety of system models, but we consider in this paper only a propositional setting.

Let ,…, , and be functions. We will write if and only if for all . Note that for any given as above the decomposition into its components is uniquely determined by the projections of onto the corresponding codomain.

Each agent A has a set of actions; the sets of actions of two agents are disjoint. To formally define a (possible) world model of an agent A, let a set of states and a set of propositional variables be given. At a state, A believes that a subset of the propositional variables is true and the rest is false. A (possible) world model for an agent A is a transition system over these states with a designated initial state and a designated current state; all states are labelled with the belief propositions that hold at that state, and transitions are labelled with action triples consisting of an action of A, an action of B and an action of the environment. The set of actions of an agent includes send and receive actions via which information can be exchanged; the environment guarantees to transmit a sent message to the respective receiver. Formally, a possible world is such a transition system together with its initial state and its current state. It encodes the current beliefs on the present, the past and the possible futures. If an agent, say A w.l.o.g., follows a strategy, it decides for an action based on its believed past, i.e., a strategy for A is a function from believed pasts to actions of A. Given a strategy for A and a strategy for B, their combination is a common strategy of A and B, which chooses the actions of A according to A's strategy and the actions of B according to B's strategy.

A (believed) run is an infinite sequence of states starting with the initial state and for all . A run results from a strategy in A’s world , denoted as , if and only if for all . Given a set of possible worlds for A, we use to denote the set of runs that result from in a , .

We use linear-time temporal logic (LTL) to specify goals. For a run r and a goal (or a conjunction of goals) we write that r satisfies the goal if the valuation of the propositions along r satisfies it (cf. Def. 9). We say a strategy is a (believed) winning strategy for a goal in a world if the goal is satisfied on all runs that result from the strategy in that world. An agent A has a set of goals and a weight assignment function that assigns values to a given goal combination. We say a subgoal is maximal for r if it holds on r and no strictly larger subgoal does; true is the empty subgoal. We say a subgoal is maximal for a set R of runs if it holds on all runs in R and no strictly larger subgoal holds on all runs in R.
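The following toy sketch (ours; the transition relation, state encoding and names are grossly simplified assumptions, not the paper's model) illustrates these notions: a possible-world model as a transition function, a strategy for A as a function of the believed past, and a bounded check that the resulting runs satisfy an "eventually" goal.

```python
def step(state, action_a, action_b):
    """Transition relation of one possible world (grossly simplified)."""
    pos_a, pos_b, lane_a = state
    pos_a += 1
    pos_b += 2 if action_b == "keep_speed" else 1
    if action_a == "change_lane" and pos_b > pos_a:   # only change after B passed
        lane_a = "left"
    return (pos_a, pos_b, lane_a)

def strategy_a(history):
    """A's strategy: wait until B has passed, then change lane."""
    pos_a, pos_b, _ = history[-1]
    return "change_lane" if pos_b > pos_a else "keep_lane"

def run(initial, strategy, action_b, horizon=10):
    states = [initial]
    for _ in range(horizon):
        states.append(step(states[-1], strategy(states), action_b))
    return states

goal = lambda states: any(lane == "left" for _, _, lane in states)  # "eventually on left lane"

# The strategy is (boundedly) winning if the goal holds on the runs of every
# believed behaviour of B. This prints False: if B might decelerate, "change
# after B passed" is not winning in all possible worlds (cf. Example 5).
print(all(goal(run((0, -3, "right"), strategy_a, b)) for b in ["keep_speed", "decelerate"]))
```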

There is one “special” world model that represents the ground truth, i.e., it reflects how reality evolves. We refer the interested reader to [13] for a more elaborate presentation of our concept of reality and associated beliefs. Agent A considers several worlds possible at a time. At each state of the real world, A has a set of possible worlds and, for each world, a believed current state as well as beliefs on the goals of B and on the goal weight assignment function of B. A possible world is labelled with the set of evidences that justifies that the world is regarded as possible. The real world changes states according to the actions of A, B and the environment. The set of possible worlds changes due to the believed passing of time and due to belief updates triggered by, e.g., observations. For the scope of this paper, though, we do not consider the actual passing of time, but study the conflict analysis at a single state of the real world.

We say that A has a (believed) winning strategy at the real world state if the strategy is winning in all possible worlds A considers at that state.

Definition 1 (Believed Possible Conflict).

Let be the set of maximal subgoals of A at state for which a believed winning strategy in exists.

Agent A believes at state it is in a possible conflict with B, if for each of its winning strategies for a maximal subgoal ,

  • there is a strategy and a possible world such that is a winning strategy in for , a believed maximal subgoal of the believed goals of B in .

  • but is not a winning strategy for in .

In Def. 1 we consider the set of maximal subgoals that A can achieve in all possible worlds with the help of B. A believes that B might decide for a strategy to accomplish some of its maximal subgoals, and B takes this decision w.r.t. A's possible worlds. Note that A assuming the goals of B to be true means that A has to deal with arbitrary behaviour of B. Also note that B always has a winning strategy in every possible world, since its believed subgoal is maximal w.r.t. that world. If A cannot find one winning strategy that fits all possible choices of B, then A believes that it is in a conflict with B. Note that A analyses the conflict within its possible worlds and in particular it believes that B takes its decision within one of A's possible worlds. That A has beliefs about (deviations of) the beliefs of B is an interesting future extension.
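Read operationally, Def. 1 can be sketched roughly as follows (a schematic abstraction of ours; `wins(sa, sb, world, goals)` stands for the model-checking step of Sect. 4, and the strategy and goal sets are assumed to be given):

```python
def believed_possible_conflict(strategies_a, strategies_b, worlds, goals_a, goals_b, wins):
    """A believes in a possible conflict if every winning strategy it has
    (assuming B helps) is defeated by some strategy B might rationally follow
    in some possible world."""
    for sa in strategies_a:                  # A's winning strategies for a maximal subgoal
        defeated = False
        for world in worlds:
            for sb in strategies_b:
                # B's strategy is winning for B's believed maximal goals in this world ...
                if wins(sa, sb, world, goals_b) and not wins(sa, sb, world, goals_a):
                    defeated = True          # ... yet it prevents A's goals there
        if not defeated:
            return False                     # sa works against every rational choice of B
    return True                              # every strategy of A can be defeated
```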

3 Epistemic Logic, Justifications and Justification Graph

Conflict analysis demands to know who believes to be in conflict with whom and which pieces of information made him believe that he is in conflict. To this end we introduce the logic of justification graphs, which allows us to keep track of external information and extends purely propositional formulae by so-called belief atoms (cf. Sect. 3.1), which are used to label the sources of information. In Sect. 2 we already used such formulae, e.g., “radar : car B is fast”. Our logic provides several atomic accessibility relations representing justified beliefs of various sources, as required for our examples of Sect. 2. It provides justification graphs as a means to identify belief entities which compose different justifications into consistent information, even when the information base contains contradicting information from different sources, as required for analysing conflict situations.

First, this section provides a short overview of epistemic modal logics and multi-modal extensions thereof. Such logics use modal operators to express knowledge and belief stemming from different sources. Often we will refer to this knowledge and belief as information, especially when focusing on the sources of the information. Thereupon the basic principles of justification logics are briefly reviewed. Justification logics are widely seen as interesting variants of epistemic logics, as they allow one to trace back intra-logical and external justifications of derived information. In the following discussion it turns out that tracing back external justifications follows the same principles as the distribution of information over different sources.

Consequently, the concepts of information source and external justification are then unified in our variant of an epistemic modal logic. This logic of justification graphs extends the modal logic by a justification graph. The nodes of a justification graph are called belief entities and represent groups of consistent information. The leaf nodes of a justification graph are called belief atoms, which are information sources and external justifications at the same time, as they are the least constituents of external information. We provide a complete axiomatisation with respect to the semantics of the logic of justification graphs.

3.1 Justification Graphs

Modal Logic and Epistemic Logic

Modal logic extends classical logic by modal operators expressing necessity and possibility. The formula $\Box\varphi$ is read as “$\varphi$ is necessary” and $\Diamond\varphi$ is read as “$\varphi$ is possible”. The notions of possibility and necessity are dual to each other: $\Diamond\varphi$ can be defined as $\neg\Box\neg\varphi$. The weakest modal logic extends propositional logic by the axiom (K) and the necessitation rule (Nec) as follows:

(K) $\Box(\varphi\rightarrow\psi)\rightarrow(\Box\varphi\rightarrow\Box\psi)$
(Nec) from $\vdash\varphi$ infer $\vdash\Box\varphi$

The axiom (K) ensures that whenever $\varphi\rightarrow\psi$ and $\varphi$ necessarily hold, then $\psi$ also necessarily has to hold. The necessitation rule allows us to infer the necessity of $\varphi$ from any proof of $\varphi$ and hence pushes any derivable logical truth into the range of the modal operator $\Box$. This principle is also known as logical awareness. Various modalities like belief or knowledge can be described by adding further axioms encoding the characteristic properties of the respective modal operator. The following two axioms are useful to model knowledge and belief:

(T) $\Box\varphi\rightarrow\varphi$
(D) $\Box\varphi\rightarrow\Diamond\varphi$

The axioms (T) and (D) relate necessity to the factual world. While the truth axiom (T) characterises knowledge, as it postulates that everything which is necessary is also factual, (D) characterises belief, as it postulates the weaker property that everything which is necessary is also possible. Under both axioms $\Box\bot\rightarrow\bot$ holds, i.e., a necessary contradiction also yields a factual contradiction.

Multi-modal logics are easily obtained by adding several modal operators with possibly different properties and can be used to express the information of more than one agent: the formula $a:\varphi$ expresses that the piece of information $\varphi$ belongs to the modality $a$. Modal operators can also be used to represent modalities referring to time, e.g., a temporal modality expressing that a formula will hold in the next time step. An important representative of a temporal extension is linear temporal logic (LTL).

In multi-agent logics the notions of common information and distributed information play an important role. While common knowledge captures the information which is known to every agent of a group, we are mainly interested in information that is distributed within a group of agents $G = \{a_1,\dots,a_n\}$: the distributed information within $G$ contains any piece of information that at least one of the agents $a_1,\dots,a_n$ has. Consequently, we introduce a set-like notion for groups, where an agent $a$ is identified with the singleton group $\{a\}$ and the expression $G:\varphi$ is used to denote that $\varphi$ is distributed information within the group $G$. The distribution of information is axiomatised by

(Dist) $G_1:\varphi \rightarrow G_2:\varphi$ for groups $G_1 \subseteq G_2$

Note that groups may not be empty. The modal logic for distributed information contains for every group at least the axiom (K), the necessitation rule (Nec), and the axiom (Dist) for any group $G_2$ with $G_1 \subseteq G_2$.

Justification Logics

Justification logics [6] are variants of epistemic modal logics where the modal operators of knowledge and belief are unfolded into justification terms. Hence, justification logics allow a complete realisation of Plato's characterisation of knowledge as justified true belief. A typical formula of justification logic has the form $t:\varphi$, where $t$ is a justification term built from justification constants, and it is read as “$\varphi$ is justified by $t$”. The basic justification logic results from extending propositional logic by the application axiom and the sum axioms

(Appl) $t:(\varphi\rightarrow\psi) \rightarrow (s:\varphi \rightarrow (t\cdot s):\psi)$
(Sum) $t:\varphi \rightarrow (t+s):\varphi$ and $t:\varphi \rightarrow (s+t):\varphi$

where $s$ and $t$ are justification terms which are assembled from justification constants using the operators $\cdot$ and $+$ according to the axioms. Justification logics tie the epistemic tradition together with proof theory. Justification terms are reasonable abstractions for constructions of proofs. If $t$ is a proof of $\varphi\rightarrow\psi$ and $s$ is a proof of $\varphi$, then the application axiom postulates that there is a common proof, namely $t\cdot s$, for $\psi$. Moreover, if we have a proof $t$ for $\varphi$ and some proof $s$, then the concatenations of both proofs, $t+s$ and $s+t$, are still proofs for $\varphi$. In our framework we were not able to derive any meaningful example using the sum axiom of justification logic. Therefore this axiom is omitted in the following discussion.
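To make the application axiom concrete, a tiny sketch of ours (hypothetical class and example formulae) showing how justification terms are combined when modus ponens is applied:

```python
# Application axiom: from t:(phi -> psi) and s:phi we may form (t.s):psi.
from dataclasses import dataclass

@dataclass(frozen=True)
class Justified:
    term: str        # justification term, e.g. "radar" or "(traffic_rules.camera)"
    formula: str     # the justified formula

def apply(imp: Justified, prem: Justified) -> Justified:
    """t:(phi -> psi), s:phi  |-  (t.s):psi."""
    antecedent, consequent = imp.formula.split(" -> ", 1)
    assert antecedent == prem.formula, "premise does not match implication"
    return Justified(f"({imp.term}.{prem.term})", consequent)

t = Justified("traffic_rules", "obstacle_ahead -> must_change_lane")
s = Justified("camera", "obstacle_ahead")
print(apply(t, s))   # Justified(term='(traffic_rules.camera)', formula='must_change_lane')
```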

Discussion

All instances of classical logical tautologies are provable in justification logics. But in contrast to modal logics, justification logics do not have a necessitation rule. The lack of the necessitation rule allows justification logics to break the principle of logical awareness, as $t:\varphi$ is not necessarily provable for an arbitrary justification term $t$, even if $\varphi$ is. Certainly, restricting the principle of logical awareness is attractive to provide a realistic model of restricted logical resources. Since we are mainly interested in revealing and resolving conflicts, however, the principle of logical awareness is indispensable in our approach.

Nevertheless, justification logic can simulate unrestricted logical awareness by adding proper axiom internalisation rules for all axioms and justification constants. In such systems a weak variant of the necessitation rule of modal logic holds: for any derivation $\vdash\varphi$ there exists a justification term $t$ such that $\vdash t:\varphi$ holds. Since $\varphi$ was derived using axioms and rules only, the justification term $t$ is also exclusively built from justification constants dedicated to the involved axioms. Beyond that, $t$ is hardly informative, as it does not help to reveal external causes of a conflict. Hence, we omit the axiom internalisation rule and add the modal axiom (K) and the modal necessitation rule (Nec) for any justification term in order to obtain a justification logic where each justification term is closed under unrestricted logical awareness.

An important consequence of the proposed system is that the application operator becomes virtually idempotent and commutative (for any instance of the one there is an instance of the other, and each is derivable in the proposed system). These insights allow us to argue merely about justification groups instead of justification terms. It turns out that a proper reformulation of (Appl) with regard to justification groups is equivalent to (Dist), finally yielding the same axiomatisation for distributed information and compound justifications.

Belief Atoms, Belief Groups, and Belief Entities

So far, we argued that assembling distributed information and compound justifications follow the same principle. In the following we even provide a unified concept for the building blocks of both notions. A belief atom $a$ is the least constituent of external information in our logic. To each belief atom $a$ we assign a modal operator; hence, for any formula $\varphi$ also $a:\varphi$ is a formula, saying $a$ has information $\varphi$. Belief atoms play different roles in our setting. A belief atom may represent a sensor collecting information about the state of the world, or it may represent certain operational rules as well as a certain goal of the system. The characteristic property of a belief atom is that its information has to be accepted or rejected as a whole. Due to its external and indivisible nature, $a$ is the only source of evidence for its information. The only justification for information of $a$ is $a$ itself. Consequently, $a:\varphi$ can also be read as “$a$ is the justification for $\varphi$”. This is what belief atoms and justifications have in common: either we trust a justification or not.

The information of a system is distributed among its belief atoms. The modal logic for distributed information allows us to consider the information which is distributed over a belief group. While belief groups can be built arbitrarily from belief atoms, we also introduce the concept of belief entities. A belief entity is either a belief atom or a distinguished group of belief entities. Belief entities are dynamically distinguished by a justification graph. In contrast to belief groups, belief entities and belief atoms are not allowed to have inconsistent information. Hence a justification graph allows us to restrict the awareness of extra-logical evidences – so we can selectively integrate logical resources that have to be consistent.

Justification Graphs

Let be a set of propositional variables and let be the set of belief entities. The designated subset of denotes the set of belief atoms.

Definition 2 (Language of Justification Graphs).

A formula is in the language of justification graphs if and only if is built according to the following BNF, where and :

Using a descending sequence of operator precedences, we can define the further well-known logical connectives such as $\wedge$, $\vee$, and $\leftrightarrow$ from the given ones. Often, we omit brackets if the formula is still uniquely readable. We define $\rightarrow$ to be right associative. For singleton sets $\{a\}$ we also write $a:\varphi$ instead of $\{a\}:\varphi$. The language allows the usage of temporal operators for next time, previous time, until, and since. Operators like always in the future or always in the past can be defined from the given ones.

Definition 3 (Justification Graph).

A justification graph is a directed acyclic graph whose nodes are belief entities of . An edge denotes that the belief entity has the component . The set of all direct components of an entity is defined as .

The leaf nodes of a justification graph are populated by belief atoms, i.e., a belief entity is a leaf of the graph if and only if it is a belief atom.
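A justification graph can be represented directly as a DAG; the following sketch (ours, with hypothetical entity names) shows a graph whose leaves are belief atoms and a helper computing the atoms an entity's information ultimately rests on:

```python
justification_graph = {
    # entity        -> direct components
    "vehicle_A":      {"perception", "traffic_rules"},
    "perception":     {"radar", "camera"},     # lidar deliberately excluded:
    "traffic_rules":  set(),                    # it contradicts the radar here
    "radar":          set(),
    "camera":         set(),
}

belief_atoms = {e for e, comps in justification_graph.items() if not comps}

def atoms_below(entity):
    """All belief atoms an entity's information ultimately rests on."""
    comps = justification_graph[entity]
    if not comps:
        return {entity}
    return set().union(*(atoms_below(c) for c in comps))

print(atoms_below("vehicle_A"))   # {'radar', 'camera', 'traffic_rules'}
```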

Definition 4 (Axioms of a Justification Graph).

Let be a justification graph. The logic of a justification graph has the following axioms and rules.

  1. As an extension of propositional logic the rule of modus ponens has to hold: from $\varphi$ and $\varphi\rightarrow\psi$ conclude $\psi$. Any substitution instance of a propositional tautology is an axiom.

  2. Belief groups are closed under logical consequence and follow the principle of logical awareness. Information is freely distributed along the subgroup relation. For any belief group the axiom (K) and the necessitation rule (Nec) hold. For groups $G_1$ and $G_2$ with $G_1 \subseteq G_2$ the axiom (Dist) holds.

  3. Belief entities are not allowed to have inconsistent information. Non-atomic belief entities inherit all information of their components. For any belief entity the axiom (D) holds. If a group $G$ is a subgroup of the components of a belief entity $e$, then the axiom $G:\varphi \rightarrow e:\varphi$ holds.

  4. In order to express temporal relations the logic of the justification graph includes the axioms of Past-LTL (LTL with past operators). A comprehensive list of axioms can be found in [25].

  5. Information of a belief entity and time are related. The axiom ensures that every belief entity correctly remembers its prior beliefs and establishes a principle which is also known as perfect recall (e.g., see [18]).

Definition 5 (Proof).

Let be a justification graph. A proof (derivation) of in is a sequence of formulae with such that each is either an axiom of the justification graph or is obtained by applying a rule to previous members with . We will write if and only if such a sequence exists.

Definition 6 (Proof from a set of formulae).

Let be a justification graph and be a set of formulae. The relation holds if and only if for some finite subset with .

Definition 7 (Consistency with respect to a justification graph).

Let be a justification graph.

  1. A set of formulae is -inconsistent if and only if . Otherwise, is -consistent. A formula is -inconsistent if and only if is -inconsistent. Otherwise, is -consistent.

  2. A set of formulae is maximally -consistent if and only if is -consistent and for all the set is -inconsistent.

Semantics

Let the state space be the set of all possible states of the world. An interpretation over the state space is a mapping that maps each state to a truth assignment over the propositional variables, i.e., to the subset of all propositional variables which are true in that state. In Sect. 2 we introduced world models and runs on world models. There, a world model captured the evolution of states in time.

In this section we focus on the epistemic notions of knowledge and belief and therefore our main concern is the accessibility relation of information. We hence presume that the set of runs of a possible world of Sect. 2 is given, which then defines the evolution in time. Formally, a run over the state space is a function from the natural numbers (the time domain) to the state space. The set of all runs is denoted by .

Definition 8.

Let be a justification graph. A Kripke structure for is a tuple where

  1. is a state space,

  2. is the set of all runs over ,

  3. is an interpretation over ,

  4. each in is an individual accessibility relation for a belief entity in .

Definition 9 (Model for a Justification Graph).

Let be a Kripke structure for the justification graph , where

  1. is a serial relation for any belief entity ,

  2. is defined as for any belief group ,

  3. holds for all non-atomic belief entities and any subgroup .

We recursively define the model relation as follows:

When holds, we call a pointed model of for . If is a pointed model of for , then we write and say that the run satisfies . Finally, we say that is satisfiable for , denoted by if and only if there exists a model and a run such that holds.

Proposition 1 (Soundness and Completeness).

The logic of a justification graph is a sound and complete axiomatisation with respect to the model relation . That is, a formula is -consistent if and only if is satisfiable for .

While the soundness proof is straightforward, a self-contained completeness proof involves lengthy sequences of various model constructions and is far beyond the page limit. However, it is well known (e.g., [20]) that the n-agent extension of K with distributed information is a sound and complete axiomatisation with respect to the class of Kripke structures having arbitrary accessibility relations, where the additional accessibility relations for groups are given as the intersection of those of the participating agents, analogously to Def. 9.(ii). Also the additional extension with (D) for any belief group is sound and complete with respect to Kripke structures having serial accessibility relations, analogously to Def. 9.(i). The axioms of a justification graph lie between these two systems. Def. 9.(iii) explicitly allows belief entities to have more information than their components. Various completeness proofs for combining LTL and epistemic logics are given, e.g., in [18].

Extracting Justifications

Let a finite set of formulae logically describe the situation under investigation. Each formula encodes information of belief atoms (formulae of the form $a:\varphi$ for a belief atom $a$), facts (formulae that do not contain any epistemic modal operator), or arbitrary Boolean combinations thereof. Further, let a justification graph be given for which this set is consistent, and consider a non-atomic belief entity of the graph. For any formula $\varphi$ we may now ask whether $\varphi$ is part of the information of that entity. If there is a proof, then $\varphi$ is included in the entity's information. To extract a justification for $\varphi$ we use that the set extended with the negation of the entity's information on $\varphi$ is inconsistent and accordingly unsatisfiable for the justification graph. If we succeed in extracting a minimal unsatisfiable core, a minimal inconsistency proof can be recovered, from which finally the used justifications are extracted.

The following proposition allows us to use SAT/SMT solvers in a restricted setting and has been used in our case study.

Proposition 2 (SAT Reduction).

Let a set of formulae be given such that each element is of the form $a:\varphi$, where $a$ is a belief atom and $\varphi$ does not contain any epistemic modal operators, and let $e$ be an arbitrary belief entity that does not occur in this set. Then there is a justification graph with $e$ as a non-atomic entity for which the set is consistent if and only if the conjunction of the formulae $\varphi$ is satisfiable over the non-epistemic fragment of the logic of justification graphs.

In order to prove the proposition, one shows that any model of the non-epistemic formulae can be extended to a model of the full set for the given justification graph by adding trivial accessibility relations. On the other hand, for any model and run satisfying the set there exists an accessible run on which the non-epistemic formulae hold. Since these formulae do not contain any epistemic modal operators, dropping the accessibility relations still yields a model of them. A more detailed version of this proof can be found in [13].
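The paper's prototype employs the Yices SMT solver [15]; as an illustration of the extraction step we sketch the same idea with the Z3 Python bindings as a stand-in, where each belief atom's information becomes a tracked assertion and the unsatisfiable core names the justifications of a contradiction:

```python
from z3 import Bool, Solver, Not, unsat

b_is_fast = Bool("car_B_is_fast")

s = Solver()
s.assert_and_track(b_is_fast, "radar")        # radar : car B is fast
s.assert_and_track(Not(b_is_fast), "lidar")   # lidar : car B is slow
s.assert_and_track(Bool("way_ahead_free"), "camera")

if s.check() == unsat:
    # The core lists exactly the belief atoms whose joint information is
    # inconsistent -- here {radar, lidar} -- so they must not end up as
    # components of the same belief entity in the justification graph.
    print([str(c) for c in s.unsat_core()])
```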

4 Identifying and Analysing Conflicts

In this section we first present an abstract algorithm for the conflict resolution of Sect. 2 that starts at level (C1) and proceeds with the resolution stepwise up to level (C4). We then sketch our small case study, in which we applied an implementation of the abstract algorithm.

4.1 Analysing Conflicts

For the analysis of conflicts we employ SMT solvers. Prop. 2 reduces the satisfiability check for a justification graph to a SAT problem. To employ SMT solving for conflict analysis, we encode the (real and possible) worlds of Sect. 2 via logic formulae as introduced in Sect. 3. Each state is represented as a conjunction of literals. Introducing a dedicated propositional variable for each proposition and time step allows us to obtain a formula describing a finite run. A predicate over consecutive time steps encodes the transition relation T, and the effect of performing an action at a state is captured by a formula relating the action to the successor state. Using this we can encode a strategy in a formula such that its valuations represent exactly the runs according to that strategy. All runs according to a strategy achieve a set of goals if and only if the conjunction of the strategy encoding with the negated goals is unsatisfiable. These logical encodings are the main ingredients for using a SAT solver for our conflict analysis. Since there are only finitely many possible strategies, we examine for each strategy which goals can be (maximally) achieved in a world or in a set of worlds. Likewise we check whether A has a winning strategy that is compatible with the strategies A believes B might choose.
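A toy instance of this encoding (ours, again using Z3 as a stand-in for Yices; the propositions and the transition relation are invented for illustration): propositions are copied per time step, the transition relation and a fixed strategy constrain consecutive steps, and the strategy is winning iff the run constraints together with the negated goal are unsatisfiable.

```python
from z3 import Bool, Solver, Or, Not, Implies, unsat

HORIZON = 3
on_left = [Bool(f"on_left_{t}") for t in range(HORIZON + 1)]
b_passed = [Bool(f"b_passed_{t}") for t in range(HORIZON + 1)]
change = [Bool(f"act_change_{t}") for t in range(HORIZON)]       # A's action per step

s = Solver()
s.add(Not(on_left[0]), Not(b_passed[0]))                         # initial state
s.add(b_passed[2])                                               # in this world B has passed by t=2
for t in range(HORIZON):
    s.add(Implies(change[t], on_left[t + 1]))                    # effect of the action
    s.add(Implies(Not(change[t]), on_left[t + 1] == on_left[t]))
    s.add(Implies(b_passed[t], b_passed[t + 1]))                 # B stays passed
    s.add(change[t] == b_passed[t])                              # A's strategy: change once B passed

goal = Or(*on_left)                                              # "eventually on the left lane"
s.add(Not(goal))
print("winning" if s.check() == unsat else "not winning")        # prints "winning"
```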

Since we iterate over all possible worlds for our conflict analysis, we are interested in summarising possible worlds. We are usually not interested in all propositions – e.g., the speed of B may at times be irrelevant. We are hence free to ignore differences of irrelevant propositions in different possible worlds and are even free to consider all their valuations, even if A does not consider them possible. This insight leads us to a symbolic representation of the possible worlds, collecting only the relevant constraints. The justification graph then groups the constraints that are relevant; in other words, two belief atoms will not be components of the same justification graph if they disagree on the valuation of a relevant proposition. In the following we hence consider maximal consistent sets of possible worlds, meaning encodings of possible worlds that are uncontradictory w.r.t. the relevant propositions, which are specified via the justification graph.

4.2 Algorithmic approach

In this section, we sketch an abstract algorithm for the conflict resolution at levels (C1) to (C4) of Sect. 2. Note that with Alg. 1 we do not aim for efficiency or optimal solutions but to illustrate how satisfiability checks can be employed to analyse our conflicts.

1:function FindStrategy()
2:      construct set of possible worlds
3:      construct
4:      set of conflict causes
5:     for all  with  do
6:          cf. Alg.  2
7:         if  then is not winning for all , i.e. 
8:               memorize justifications
9:                             
10:     if  then is in conflict with
11:         for  do traverse resolution levels
12:               cf. Alg.  3
13:              if  then new information generated
14:                   new attempt with new information
15:                  if  then new attempt was successful, stop and return
16:                       break                                               
17:     return select to reach some goal in
Algorithm 1 Determining winning strategy based on observations, goals, and possible actions.

The following algorithms describe how we deal with logic formulae encoding sets of possible worlds, sets of runs on them, etc., to analyse conflicts (cf. Def. 1) via SMT solving. We use a formula to refer to an encoding of a maximal consistent set of possible worlds (cf. Sect. 4.1), i.e., one that corresponds to a justification graph, and a set of such formulae to refer to the set of possible worlds structured into maximal consistent sets via justification graphs. We often do not distinguish between a possible world and its encoding – neglecting that the encoding represents a set of worlds that agree on the relevant constraints.

Figure 2: Abstract resolution process with information base, possible worlds and strategies.

Fig. 2 provides an overview of the relation between the initial information base of agent A, its set of possible worlds, winning strategies, resolution, and the stepwise update of the information during our conflict resolution process. The initial information base defines the set of possible worlds, which is organised into sets of maximal consistent worlds. Based on these, A's set of strategies is checked for whether it comprises a winning strategy in the presence of an agent B that tries to achieve its own goals. If no such winning strategy exists, A believes to be in conflict with B. At each level the resolution procedure tries to determine information of that level to resolve the conflict. If the possible worlds are enriched by this information, the considered conflict vanishes. The new information is added to the existing information base and the overall process is restarted until either winning strategies are found or no further information can be obtained.
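Schematically, this loop can be summarised as follows (our paraphrase of the process in Fig. 2, not the paper's code; `find_winning_strategy` and `request_information` abstract the SMT-based checks and the communication at levels (C1) to (C4)):

```python
def resolve(information_base, levels=("C1", "C2", "C3", "C4"),
            find_winning_strategy=None, request_information=None):
    """Information base is assumed to be a set of formulae."""
    strategy = find_winning_strategy(information_base)
    if strategy is not None:
        return strategy                      # no believed conflict
    for level in levels:                     # escalate cooperation step by step
        new_info = request_information(level, information_base)
        if not new_info:
            continue                         # this level yielded nothing new
        information_base = information_base | new_info
        strategy = find_winning_strategy(information_base)
        if strategy is not None:
            return strategy                  # conflict resolved at this level
    return None                              # negotiation failed; no strategy found
```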

How to find a believed winning strategy

Alg. 1 finds a winning strategy of agent A for a goal, tolerating that B follows an arbitrary winning strategy for its own goals, i.e., it finds a strategy that satisfies A's goal in all possible worlds where that goal is maximal, while in each possible world agent B may also follow a winning strategy for one of its maximal goals. If such a strategy cannot be found, A believes to be in conflict with B (cf. Def. 1).

Input for the algorithm is (i) a set of formulae describing the current belief of A, e.g., its current observations and its history of beliefs – we call it the information base in the sequel –, (ii) a set of goals of A that is maximal in its possible worlds, (iii) a set of believed goals of B that is maximal for a possible world, (iv) a set of possible actions for A and (v) a set of believed possible actions for B.

The first step (L. 2 of Alg. 1) is to construct the sets of maximal consistent sets of possible worlds that together represent the information base. In L. 3 the set of strategies accomplishing a maximal goal combination for A is determined, assuming B agrees to help, i.e., all winning strategies that satisfy the goal combination in all possible worlds where it is maximal.

1:function TestIfNotWinning()
2:     for all  do
3:          construct
4:         for all  do
5:              for all  do
6:                  if  then is not winning for all and all
7:                       return
8:                  else
9:                       return                                               
Algorithm 2 Test if a strategy is winning in all possible worlds.

In lines 5 ff. we examine whether one of A's strategies (for which B is willing to help) works even when B follows its own strategy to achieve one of its maximal goals.

To this end TestIfNotWinning is called for all of A's winning strategies (L. 6). The function TestIfNotWinning performs this test iteratively for one maximal consistent set of worlds at a time (Alg. 2, L. 2). Consider the set of joint strategies achieving a goal of B that is maximal in this set of worlds. We check the compatibility of A's strategy with every such strategy of B (Alg. 2, L. 3). A strategy of A is compatible with all of B's strategies if all joint strategies achieve the maximal goals of A and B (Alg. 2, L. 6). (Note that according to Sect. 2.1, B's maximal goal is the empty subgoal true if B cannot achieve any goal; this reflects that A cannot make any assumption about B's behaviour in such a situation.) If the joint strategy is not a winning strategy for the joint goal (Alg. 2, L. 6), the function GetJustifications extracts the set of justifications for this conflict situation (Alg. 2, L. 7). The set of justifications is added to the set of conflict causes (Alg. 1, L. 8). Since the strategy is then not compatible with all of B's strategies, it is not further considered as a possible conflict-free strategy for A (Alg. 1, L. 9).

1:function FixConflict(