Subjective Knowledge and Reasoning about Agents in Multi-Agent Systems

01/22/2020 · by Shikha Singh, et al.

Though a lot of work in multi-agent systems focuses on reasoning about the knowledge and beliefs of artificial agents, the MAS community has mostly overlooked an explicit representation of, and reasoning about, the presence or absence of agents, especially in scenarios where agents may be unaware of others joining in or going offline, which leads to partial or asymmetric knowledge among the agents. Such scenarios lay the foundations of cases where an agent can influence other agents' mental states by (mis)informing them about the presence or absence of collaborators or adversaries. In this paper, we investigate how Kripke structure-based epistemic models can be extended to express this notion based on an agent's subjective knowledge, and we discuss the challenges that come along.


1 Introduction

The research community has successfully used Kripke models to model epistemic situations in multi-agent systems. But it seems necessary to have a broader notion of epistemic models that may not always be defined within the interpretation of the structural properties of the S5 or KD45 Kripke models used to represent the uncertainty (knowledge) of agents in a multi-agent system. We envisage an underlying global epistemic structure in a multi-agent system, wherein the agents may have access to (non-)overlapping local regions. Hence the agents can be said to possess subjective knowledge about the system. Such a global epistemic structure allows us to talk about the possibility of agents being ignorant of the presence of other agents. While we talk of ignorance, we point out that there are two levels of it as we perceive it: one where an agent is uncertain whether something is true or false but is certain that it exists to be known, and the other at the level of existence itself. We restrict ourselves to modeling the first level of ignorance only. The community uses KD45 Kripke structures to model beliefs instead of knowledge, but these beliefs are typically about the world and not about the agents. The agents are assumed to be aware of all other agents.

In this paper we explore ways to relax this assumption in epistemic models based on KD45 Kripke structures. First we motivate the work with an example. Consider a scenario involving multi-vehicle search and rescue. Three (unmanned) vehicles V1, V2, V3 have been assigned to survey a sequence of points looking for survivors. Each vehicle knows whether there are survivors at its respective surveyed points and sends updates to the others. This is how they are pre-scripted to collaborate. If V1 fails meanwhile (assuming a failure signal is sent out to the others; we proceed with this assumption for the rest of the paper), V2 and V3 have to complete the task on their own. The belief bases of V2 and V3 should get updated so that the strategy to accomplish the remaining task can be recomputed. If, on the other hand, more vehicles (which were offline or disconnected earlier) join the team midway, or say V1 is up and running after a certain interval of time, the existing ones should be able to 'understand and update' their presence in their belief bases to recompute their strategies.
Consider another setting where the system modeler may want the agents to work independently, as if they were in a single-agent setting, in the initial phases of the task and to collaborate with each other at a later stage. Such scenarios can arise in privacy-preserving use cases. We feel that a formalism that enables an artificial agent to explicitly model and reason with its (un)certainty about other agents will be useful in the settings discussed above. We explore other avenues as well, where the capability of reasoning about agency can be used by an agent to influence other agents' beliefs about agency. We discuss this in detail in a separate section dedicated to ontological lies (the term 'ontological lies' refers to lies about the presence/absence of an agent).

2 Background

Before discussing the scope of extending epistemic logic to support our formalism, we briefly look at the basics. Epistemic logic, the logic of knowledge and belief [hintikka1962knowledge], is used to reason with an agent's knowledge about the world as well as its own (and others') beliefs about the world, beliefs about beliefs, and so on.
Language: Let $P$ and $Ag$ be a finite set of propositions and a finite set of agents respectively. The language can be constructed using the formulae given below:

$\varphi ::= p \mid \neg\varphi \mid \varphi \wedge \varphi \mid K_i\varphi$

where $p \in P$ and $i \in Ag$. The intended interpretation of $K_i\varphi$ is 'agent $i$ knows $\varphi$'. This is the basic system of knowledge, known as system $S5$, whose axiomatization will be discussed after we have looked at its possible-worlds semantics using Kripke models.

Semantics: Kripke structures enable the agents to think in terms of the possible worlds accessible to them; an agent is said to know or believe some state of affairs to be true if and only if it is true in all the possible worlds accessible to it.

Definition 1 (Kripke Model).

Given the set of propositions $P$ and the set of agents $Ag$, a Kripke model is a triple $M = \langle S, R, V \rangle$, where: $S$ is a set of states; $R : Ag \rightarrow 2^{S \times S}$ is a function such that for all $i \in Ag$ there is an accessibility relation $R_i \subseteq S \times S$; and $V : P \rightarrow 2^S$ is a valuation function such that for all $p \in P$, the set $V(p)$ is the set of states in which $p$ is true.

$M = \langle S, R, V \rangle$ is called an epistemic model, and all the relations in $R$ are equivalence relations (which explains the truth and introspective properties, i.e. the $S5$ properties, of knowledge). Epistemic formulas are interpreted on pairs $(M, s)$, also called pointed models, and a formula $\varphi$ being true in $(M, s)$ is written as $M, s \models \varphi$. Thus, the satisfaction of a formula can be expressed as: (i) $M, s \models p$ iff $s \in V(p)$, (ii) $M, s \models \neg\varphi$ iff $M, s \not\models \varphi$, (iii) $M, s \models \varphi \wedge \psi$ iff $M, s \models \varphi$ and $M, s \models \psi$, and (iv) $M, s \models K_i\varphi$ iff $M, t \models \varphi$ for all $t$ such that $R_i(s, t)$. Here, $R_i(s, t)$ stands for the existence of an accessibility relation of agent $i$ between the two epistemic states $s$ and $t$.
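The following is a minimal sketch, in Python, of a Kripke model together with the satisfaction relation defined above. The class name, the tuple encoding of formulas (e.g. ('K', 'm', ('not', 'p')) for $K_m\neg p$) and all identifiers are our own illustrative assumptions, not part of the formalism.

    class KripkeModel:
        """M = <S, R, V>: states, per-agent accessibility relations, valuation."""
        def __init__(self, states, relations, valuation):
            self.S = set(states)                                         # S
            self.R = {i: set(edges) for i, edges in relations.items()}   # R_i subset of S x S
            self.V = {p: set(ws) for p, ws in valuation.items()}         # V(p) subset of S

        def sat(self, s, phi):
            """M, s |= phi for the language  p | not phi | phi and phi | K_i phi."""
            if isinstance(phi, str):                        # atomic proposition
                return s in self.V.get(phi, set())
            op = phi[0]
            if op == 'not':
                return not self.sat(s, phi[1])
            if op == 'and':
                return self.sat(s, phi[1]) and self.sat(s, phi[2])
            if op == 'K':                                   # ('K', i, psi): agent i knows psi
                _, i, psi = phi
                return all(self.sat(t, psi) for (u, t) in self.R[i] if u == s)
            raise ValueError(f"unknown operator {op!r}")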

Axiomatisation: The axioms of the $S5$ system of knowledge include all instantiations of propositional tautologies, along with the following axioms:

  • Distribution of $K_i$ over $\rightarrow$: $K_i(\varphi \rightarrow \psi) \rightarrow (K_i\varphi \rightarrow K_i\psi)$

  • Modus Ponens: From $\varphi$ and $\varphi \rightarrow \psi$, infer $\psi$

  • Necessitation of $K_i$: From $\varphi$, infer $K_i\varphi$

  • Truth: $K_i\varphi \rightarrow \varphi$

  • Positive Introspection: $K_i\varphi \rightarrow K_iK_i\varphi$

  • Negative Introspection: $\neg K_i\varphi \rightarrow K_i\neg K_i\varphi$

The first three axioms present a minimal modal logic which captures the valid formulas of all Kripke models. This axiomatisation is called modal system $K$. The Truth axiom, also referred to as the $T$ axiom, states that whatever is known must be true. The last two axioms, also denoted as the $4$ and $5$ axioms respectively, express the introspective properties of an agent towards what it knows and what it doesn't know. The class of Kripke models with equivalence (accessibility) relations is denoted by $S5$. We will be working with beliefs, and not knowledge, where believed statements need not be true but must be internally consistent. Constraining the Kripke structures to only serial, transitive, and Euclidean relations in $R$ allows us to talk about the beliefs of agents [fagin2004reasoning]. In such a system, the $T$ axiom is replaced by the $D$ axiom, $B_i\varphi \rightarrow \neg B_i\neg\varphi$ (note the replacement of the knowledge modality $K_i$ with the belief modality $B_i$), and the rest of the axioms remain the same with $B_i$ replacing $K_i$; this system is therefore called $KD45$. In the rest of the paper, we may use the terms knowledge and belief interchangeably, but we confine ourselves to belief as expressed in the $KD45$ system.
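As a small illustration of the frame conditions just discussed, the sketch below (reusing the KripkeModel sketch above; the helper names are our own) tests whether every accessibility relation is serial, transitive and Euclidean, i.e. whether the structure qualifies as a KD45 model; requiring equivalence relations instead yields S5.

    def is_serial(R, S):
        # every state has at least one outgoing edge
        return all(any((s, t) in R for t in S) for s in S)

    def is_transitive(R):
        return all((s, u) in R for (s, t) in R for (t2, u) in R if t == t2)

    def is_euclidean(R):
        return all((t, u) in R for (s, t) in R for (s2, u) in R if s == s2)

    def is_kd45(model):
        return all(is_serial(model.R[i], model.S) and
                   is_transitive(model.R[i]) and
                   is_euclidean(model.R[i])
                   for i in model.R)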

3 Proposed Approach

Informal Discussion: When we model knowledge and beliefs in a multi-agent system using a Kripke model, the accessibility relations are defined for each agent for each state in the model, implicitly representing that all agents know the other agents and that this is common knowledge. The accessibility relations corresponding to an agent enable us to represent the respective agent's uncertainty about the real world but not about agency. For example, Figure 1 illustrates an S5 Kripke structure which captures the agents' beliefs (or uncertainty) about the truth of proposition $p$.

Figure 1: An S5 Kripke model

It can be considered a closed-world representation (with respect to agents as well as propositions), given the finite set of propositions $P$ and the finite set of agents $Ag$. Let us assume that the shaded circle is the true world. From an external perspective we see that the model expresses that in the true state $m$ knows that $p$ is false, $f$ is uncertain whether $p$ is true or false, but $f$ is certain that $m$ knows whether $p$. Both agents are implicitly aware of each other.
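For instance, a plausible encoding of the two-world model of Figure 1 (our own reconstruction: a true world s0 where p is false, a second world s1 where p is true, agents m and f with equivalence relations) can be checked against this description using the satisfaction sketch from Section 2:

    fig1 = KripkeModel(
        states={'s0', 's1'},
        relations={'m': {('s0', 's0'), ('s1', 's1')},    # m can tell the worlds apart
                   'f': {('s0', 's0'), ('s1', 's1'),
                         ('s0', 's1'), ('s1', 's0')}},   # f cannot
        valuation={'p': {'s1'}})

    assert fig1.sat('s0', ('K', 'm', ('not', 'p')))      # in the true state, m knows not-p
    assert not fig1.sat('s0', ('K', 'f', 'p'))           # f does not know p ...
    assert not fig1.sat('s0', ('K', 'f', ('not', 'p')))  # ... nor not-p: f is uncertain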
We don’t advocate an open world setting by making either P or Ag a non-finite set but we propose that agents should be allowed to expand their epistemic model, as and when updates about new agents joining in or existing ones leaving out, by applying the relations transformation functions we discuss in the following section. In this paper, we restrict ourselves to introducing new agents (and not new facts) in the system. The facts that agent can know (or believe) remain same.

3.1 Formalization

Along the lines of the above definition of an epistemic model, to represent an external as well as a perspectival view of agents in a multi-agent system, we define some additional terms and notation as discussed below:
Local states corresponding to each agent: $I : Ag \rightarrow 2^S$ is a function such that for all $i \in Ag$, the set $I(i)$ is the set of states that agent $i$ cannot distinguish from the real state in the initial model. It closely resembles the concept of designated states discussed in [bolander2011epistemic]. We assume that the system is initiated with a set of local states for each agent, and we investigate how the set $I(i)$ corresponding to each agent $i$ evolves as new updates present themselves.
i-reachable states: These are the states which can be reached by an $i$-edge from any state. We observe that $I(i)$ and its 1-hop neighborhood in the Kripke structure define what $i$ believes from a subjective perspective.

Then there are $j$-edges emanating from states in this region that define $i$'s perception of $j$'s beliefs.
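A small sketch of these two notions, reusing the KripkeModel sketch from Section 2; the map I from agents to their sets of local states and the function names are our own illustrative choices.

    def i_reachable(model, I, i):
        """I(i) together with its 1-hop i-neighbourhood: i's subjective region."""
        local = set(I[i])
        neighbours = {t for (s, t) in model.R.get(i, set()) if s in local}
        return local | neighbours

    def perceived_region(model, I, i, j):
        """States that i attributes to j: targets of j-edges leaving i's region."""
        region = i_reachable(model, I, i)
        return {t for (s, t) in model.R.get(j, set()) if s in region}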

To express an agent’s uncertainty about the presence of another agent we consider the local states corresponding to the respective agent and their neighborhood and define two modal operators, read as Possibly_an_agent_for_i and read as Certainly_an_agent_for_i. The full set of formulas can be given by the following BNF:

where and

The interpretation of these modalities is defined as follows:

  • $M \models \Diamond_i j$ iff there exists at least one state $s$ in $i$'s subjective region (i.e. $I(i)$ and its $i$-neighborhood) such that $R_j(s, t)$ holds for some state $t$.

  • $M \models \Box_i j$ iff for every state $s$ in $i$'s subjective region, $R_j(s, t)$ holds for some state $t$.

We treat $\Diamond_i j$ as the dual of $\Box_i j$ such that they mimic the 'diamond' and the 'box' operator respectively.
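Under this reading (our reconstruction: some vs. every state of i's subjective region has an outgoing j-edge), the two agency modalities can be evaluated roughly as follows, reusing i_reachable from the sketch above.

    def possibly_agent(model, I, i, j):
        """Possibly_an_agent_for_i(j): some state in i's region has an outgoing j-edge."""
        region = i_reachable(model, I, i)
        return any(s in region for (s, _t) in model.R.get(j, set()))

    def certainly_agent(model, I, i, j):
        """Certainly_an_agent_for_i(j): every state in i's region has an outgoing j-edge."""
        region = i_reachable(model, I, i)
        return all(any((s, t) in model.R.get(j, set()) for t in model.S)
                   for s in region)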

We proceed with the possible-worlds semantics of epistemic logic for the language defined above. As the possible worlds in epistemic logic represent an agent's uncertainty in terms of possible epistemic alternatives, we extend them further to reason with an agent's uncertainty about the presence of other agents in terms of the agents' accessibility to the shared epistemic alternatives. We expand on this concept with the help of the following example:

Figure 2: A KD45 Kripke model

Consider the KD45 Kripke model shown in Figure 2, defined over some finite set of propositions $P$, a finite set of agents $Ag = \{m, f, g\}$ and a Kripke model $M = \langle S, R, V \rangle$ with $S = \{1, 2, 3\}$ and the relations and valuation as drawn in the figure. Let the local states of the agents be given as $I(m) = \{1, 2\}$, with $I(f)$ and $I(g)$ as marked in the figure.
Alongside the usual epistemic formulas $B_i\varphi$ over $P$, the ontological formulas (as we have called them) expressed on this model include $\Box_m f$, $\neg\Diamond_m g$, $\Box_f m$, $\Box_f g$, etc.

The interpretation of the above model is that agent $m$, which cannot distinguish between the two epistemic states 1 and 2 (represented by its local states), is certain of the presence of only one other agent, $f$, and is itself oblivious of the presence of another agent $g$. In both of those epistemic states it believes that agent $f$ considers a third and only epistemic state possible, where $f$ is not only certain of the presence of $m$ but also certain of the presence of another agent $g$. Similarly, agent $f$ is certain of the presence of agents $m$ as well as $g$ and believes that all three (including itself) are aware of each other. Clearly, if the true state had been 3, we as an external observer can very well look at the model and tell that $m$ is mistaken in its belief about the presence of only two agents in the system. Contrary to that, had one of 1 or 2 been the true state, we see that there are only two agents in the system, which $m$ is aware of, and it is also aware of the fact that $f$ is not only imagining an epistemic situation where another agent $g$ is present but also believes that everybody else shares the same belief.
One may argue that the seriality of the relations, say of $R_g$ in the above example, vanishes. We emphasize again that the KD45 properties are to be maintained in a localised manner. Consider $R_m$, which stands for the accessibility relation of $m$. We don't use the whole of $R_m$ to define $m$'s subjective knowledge. Instead we use its subset whose domain is restricted to $I(m)$ and its neighborhood. Similarly, the subjective knowledge of $f$ and $g$ is defined using the correspondingly restricted subsets of $R_f$ and $R_g$ respectively. Now that we can explicitly express the awareness of agents about other agents, in the following section we introduce operators that can influence the same.
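A hypothetical encoding consistent with this reading of Figure 2 (the exact edges and valuation are not recoverable from the text, so the relations and local states below are an assumed reconstruction) reproduces the description when fed to the sketches above:

    fig2 = KripkeModel(
        states={1, 2, 3},
        relations={'m': {(1, 1), (1, 2), (2, 1), (2, 2)},  # m hesitates between 1 and 2
                   'f': {(1, 3), (2, 3), (3, 3)},          # from m's region, f points to 3
                   'g': {(3, 3)}},                         # g only figures inside state 3
        valuation={'p': {1}})                              # assumed valuation
    I = {'m': {1, 2}, 'f': {3}, 'g': {3}}

    assert certainly_agent(fig2, I, 'm', 'f')       # m is certain of f's presence
    assert not possibly_agent(fig2, I, 'm', 'g')    # m is oblivious of g
    assert certainly_agent(fig2, I, 'f', 'g')       # f is certain of g's presence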

3.2 Ontological updates

In the literature, update models from Dynamic Epistemic Logic [van2007dynamic] have been the preferred tool to formalize reasoning about information change in Kripke models. Another popular approach uses the agent language mA+ [baral2015action]. While we agree that the former can be used for modeling epistemic and ontic actions in our proposed setting too, it cannot be used to model the dynamics in agency that we discussed. The latter approach, the agent language mA+, is known to be used with Finitary-S5 theories [son2014finitary] and therefore cannot be used with arbitrary Kripke structures. To suit our purpose we describe two model-transformation operators. We give their semantics using relation-changing transition functions which transform one epistemic model into another.

  • Update_offline(j): The updated model reflects that agent $j$ has (permanently) gone offline and the other agents have updated their awareness about its absence and their beliefs about its beliefs. The resultant model, $M' = \langle S', R', V' \rangle$, can be constructed from the initial model, $M = \langle S, R, V \rangle$, in the following manner:

    • Initialize the resultant model by creating a replica of the initial model: $M' := M$.

    • The set of agents with respect to the resultant model, $Ag'$, is set to $Ag \setminus \{j\}$.

    • The local states of all agents remain the same.

    • Remove $R_j$ such that for all $s, t \in S'$, $R'_j(s, t)$ no longer holds.

Note that this step uses a relation-deletion operator to delete the specified relation from the model. Removal of edges may leave $M'$ disconnected. The regions that are not reachable from the local states of the agents are discarded. These steps are sketched procedurally below.
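A procedural sketch of Update_offline(j), following the steps above and reusing the KripkeModel sketch from Section 2; the reachability pass that discards disconnected regions is our own straightforward implementation, and the names are illustrative.

    import copy

    def update_offline(model, I, j):
        m2 = copy.deepcopy(model)
        I2 = {k: set(ws) for k, ws in I.items() if k != j}   # Ag' = Ag \ {j}; local states kept
        m2.R.pop(j, None)                                    # delete R_j from the model

        # keep only states reachable from some remaining agent's local states
        reachable = set().union(*I2.values()) if I2 else set()
        frontier = set(reachable)
        while frontier:
            frontier = {t for k in m2.R for (s, t) in m2.R[k] if s in frontier} - reachable
            reachable |= frontier
        m2.S &= reachable
        m2.R = {k: {(s, t) for (s, t) in m2.R[k] if s in m2.S and t in m2.S} for k in m2.R}
        m2.V = {p: ws & m2.S for p, ws in m2.V.items()}
        return m2, I2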

Figure 3: An example scenario to demonstrate update_offline(v1)

Example: Consider the scenario in Figure 3a. There are three unmanned vehicles v1, v2 and v3. The Kripke model illustrated in the figure (note that this structure resembles the 3-muddy-children puzzle, but we do not model that problem here; consider it an epistemic situation that shares a similar neighborhood) shows the eight possible worlds (each labeled node having some assignment over a finite set of propositions by the valuation function). The shaded node corresponds to the true world and the edges labeled with the agents represent the uncertainty of the respective agents. The dashed rectangles represent their local states. In this scenario, v1 leaves, and the updated model is shown in Figure 3b. We observe that as v1 leaves, the epistemic possibilities that were present due to the uncertainty of v1 get disconnected and are of no relevance now. This component can be discarded.

  • Update_online(j, I(j)): The updated model reflects that agent $j$ has joined the system. The other agents become aware of its presence. Besides that, the earlier beliefs of the rest of the agents should remain intact. The model is updated with the local states of $j$ (as specified by the external modeler). The resultant model, $M' = \langle S', R', V' \rangle$, can be constructed from the initial model, $M = \langle S, R, V \rangle$, in the following manner:

    • Initialize the resultant model by creating a replica of the initial model: $M' := M$.

    • The local states of all agents remain the same.

    • The set of agents with respect to the resultant model, $Ag'$, is set to $Ag \cup \{j\}$.

    • Mark the local states (as specified in the update) for $j$: $I'(j) := I(j)$ in $M'$.

    • Add an equivalence relation for $j$ on $S'$, the set of all possible states: $R'_j := S' \times S'$. Now, for all $i \in Ag'$, $\Box_i j$ holds true with respect to their respective local states.

    Note that this step uses a relation-addition operator to add the specified relation to the model. A sketch of this procedure is given below.
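A corresponding sketch of Update_online(j, I(j)), again reusing the KripkeModel sketch; the new agent receives the universal (equivalence) relation over all states, reflecting the assumption that it cannot distinguish any of them, and its local states are supplied by the external modeller.

    import copy

    def update_online(model, I, j, I_j):
        m2 = copy.deepcopy(model)
        I2 = {k: set(ws) for k, ws in I.items()}
        I2[j] = set(I_j)                                  # local states of j, as specified
        m2.R[j] = {(s, t) for s in m2.S for t in m2.S}    # j considers every state possible
        return m2, I2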

As long as the agent updates are truthful and commonly known among all the agents, the model size (in terms of the number of possible worlds) remains the same and only the number of edges increases (or decreases), because we assume that the new agent cannot distinguish one epistemic state from another. The other agents too are aware of its ignorance. In planning-based scenarios, the beliefs of the new agent (and hence those of the others too) can be further refined using information updates/requests in a goal-driven manner. If we try to lift this assumption, say $j$ joins the group with some beliefs of its own, then the updated model grows to accommodate the different beliefs of the different agents about $j$'s beliefs. For instance, each existing agent may falsely believe that the new agent shares the same view of the world that it itself has. If all the agents are biased to believe so, then the model expands at least $|Ag|$-fold.

The updates that we discussed above bring common information to all the existing agents, and the resultant model aligns with the true state of the world, which the agents may still be uncertain about or yet to discover. But just as untruthful epistemic updates can be exploited to synthesize lies and deceptive plans, ontological updates too can be exploited to synthesize ontological lies. We give a brief account of ontological lies in the following section.

4 Epistemic lies versus Ontological lies

If we take the view of subjective epistemic reasoning, we can define lying as follows: lying is the communication of something that one does not believe in, usually done with the intention of misleading someone. We observe that there are two very different kinds of lies, requiring different kinds of cognitive machinery. The simpler kind of lie is epistemological. Here the agent merely makes a statement that could have been true but is not in fact; for example, a bridge player advertising a card she does not hold, or a person telling a habitual borrower that he has no money at hand. The second kind of lie is ontological, in which a new category is created: imagined and invented. For example, children are often told that a tooth fairy will come and take away a broken tooth. Though epistemological lies have been investigated by the epistemic planning community, we are, to the best of our knowledge, not aware of such pursuits vis-à-vis ontological lies by artificial agents.

Our pursuit of studying ontological lies derives motivation from The Gruffalo [donaldson2016gruffalo], a children's book written by Julia Donaldson featuring the deceit carried out by a clever mouse, the leading character of the story, to safeguard itself from dangerous predators (a fox, an owl, a snake and finally a gruffalo, a creature that the mouse thought it was imagining) in a forest. The interesting course of events in the story is given below (see https://en.wikipedia.org/wiki/The_Gruffalo):
The mouse, while taking a walk in a forest, runs into, one by one in sequence, a fox, an owl, and a snake. Each of these animals, clearly intending to eat the mouse, invites him back to its home for a meal. The cunning mouse declines each offer. To dissuade further advances, he tells each animal that he has plans to dine with his friend, a gruffalo, a monster-like hybrid that is half grizzly bear and half buffalo, whose favorite food happens to be the animal in question, and describes to each the relevant dangerous features of the gruffalo's monstrous anatomy. Frightened that the gruffalo might eat it, each animal flees. Knowing the gruffalo to be fictional, the mouse gloats each time: silly old fox/owl/snake, doesn't he know? There's no such thing as a gruffalo! After getting rid of the last animal, the mouse is shocked to encounter a real gruffalo, with all the frightening features the mouse thought he was inventing. The gruffalo threatens to eat the mouse, but again the mouse resorts to imaginative deception. He tells the gruffalo that he, the mouse, is the scariest animal in the forest. Laughing, the gruffalo agrees to follow the mouse as he demonstrates how feared he is. The two walk through the forest, encountering in turn the animals that had earlier menaced the mouse. Each is terrified by the sight of the pair and runs off, and each time the gruffalo becomes more convinced of the fear that the mouse apparently evokes in everyone. Observing the success of his deception, the mouse then threatens to make a meal of the gruffalo, which flees in haste, leaving the mouse to enjoy its vegetarian diet of a nut in peace.
As studied before [singh2019planning], the first lie the mouse tells is of the latter type, while the second one is of the former type. Ontological lies perhaps require more sophisticated cognitive machinery, and certainly a more imaginative mind. We analyse ontological lies using the setting discussed above.

4.1 Ontological lies: untruthful agent updates

Let $M$ and $M'$ denote the initial and the resultant system respectively. Note that the agent-update operators either add or remove accessibility relations, and as a result new epistemic states may also be added to or removed from the initial model $M$ to get the resultant model $M'$. For ease of further discussion, we use $S_M$ to denote the set of possible worlds in model $M$, $R_M$ to denote all the relations in the model, $V_M$ to refer to the valuation function used, and $I_M(ag)$ to refer to the set of local states of agent $ag$ in $M$.

4.1.1 Untruthful Update_offline(j) by agent i:

The updated model reflects that agent $j$ has (permanently) gone offline for the misinformed agents but not for $i$ and $j$; that is, for all $k \in Ag \setminus \{i, j\}$, $\neg\Diamond_k j$ holds, whereas for the informed agent $i$, $\Box_i j$ still holds. The resultant model is constructed in the following manner:

  • Given the initial model $M$, the resultant model $M'$ is initialized by creating two replicas of $M$, say $M^1$ and $M^2$. $M^1$ corresponds to the true scenario whereas $M^2$ corresponds to the epistemic state of the misinformed agents. We use $s^1$ and $s^2$ to denote the states in $M^1$ and $M^2$ corresponding to the state $s$ in the initial model $M$. Update the domain of the resultant model: $S' := S_{M^1} \cup S_{M^2}$.

  • Remove the accessibility relations of agent $j$, which is falsely announced to be offline, from the region that reflects the epistemic state of the misinformed agents: remove every edge $(s^2, t^2)$ from $R'_j$ where $R_j(s, t)$ holds in $M$.

  • Remove the accessibility of the misinformed agents from the epistemic region that reflects the true state of affairs: remove every edge $(s^1, t^1)$ from $R'_k$ where $R_k(s, t)$ holds in $M$, for all $k \in Ag \setminus \{i, j\}$.

  • Update the accessibility relations of all the agents in the resultant model: firstly, for all $ag \in Ag$, $R'_{ag}$ is the union of its (remaining) edges in $M^1$ and $M^2$. Then add edges to the accessibility relations of the misinformed agents such that their reachability goes from the epistemic region that reflects the true state of affairs to the region that reflects the epistemic state of the misinformed agents. It is done in the following manner: for all $k \in Ag \setminus \{i, j\}$, add $(s^1, t^2)$ to $R'_k$ where $R_k(s, t)$ holds in $M$.

  • Set the local states of the informed agents ($i$ and $j$), as they were in the previous model $M$, in the epistemic region represented by $M^1$. But the local states of the misinformed agents (i.e. those other than $i$ and $j$) cannot lie in this region; they are shifted to the epistemic region represented by $M^2$, which reflects the absence of $j$. Formally, in the resultant model $M'$, for all $ag \in \{i, j\}$ update $I'(ag) := \{s^1 \mid s \in I_M(ag)\}$, and for all $k \in Ag \setminus \{i, j\}$, $I'(k) := \{s^2 \mid s \in I_M(k)\}$.

  • The above steps may leave the updated structure disconnected. Discard the components that are not reachable from the local states of the informed agents. The remaining structure is denoted as $M'$, and the sets $S'$ and $R'$ are updated with its states and relations respectively. A procedural sketch of these steps follows below.
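A procedural sketch of this construction, reusing the KripkeModel sketch from Section 2. Tagging the two replicas' states as ('T', s) for the true scenario and ('F', s) for the misinformed agents' picture is an implementation convenience of ours, not part of the formalisation.

    def untruthful_update_offline(model, I, i, j):
        T = lambda s: ('T', s)     # replica reflecting the true state of affairs
        F = lambda s: ('F', s)     # replica reflecting the misinformed agents' beliefs
        misinformed = [k for k in model.R if k not in (i, j)]

        S2 = {T(s) for s in model.S} | {F(s) for s in model.S}
        R2, I2 = {}, {}
        for k, edges in model.R.items():
            R2[k] = {(T(s), T(t)) for (s, t) in edges}             # everyone keeps the T-copy
            if k != j:
                R2[k] |= {(F(s), F(t)) for (s, t) in edges}        # j is "offline" in the F-copy
        for k in misinformed:
            R2[k] = {(s, t) for (s, t) in R2[k] if s[0] != 'T'}    # cut their access to the T-copy
            R2[k] |= {(T(s), F(t)) for (s, t) in model.R[k]}       # ... and redirect it to the F-copy
        for k in model.R:                                          # informed agents stay in the T-copy
            I2[k] = {T(s) for s in I[k]} if k in (i, j) else {F(s) for s in I[k]}
        V2 = {p: {T(s) for s in ws} | {F(s) for s in ws} for p, ws in model.V.items()}

        m2 = KripkeModel(S2, R2, V2)
        # discarding components unreachable from the informed agents' local states
        # would follow here, e.g. via the reachability pass used in update_offline
        return m2, I2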

4.1.2 Untruthful Update_online(j, I(j)) by agent i:

We define similar relation-transformation functions to update the model such that all the agents except $i$ believe that agent $j$ has joined the group; that is, for all $k \in Ag \setminus \{i\}$, $\Box_k j$ holds, while for $i$, $\neg\Diamond_i j$ holds. The resultant model can be constructed in the following manner:

  • Given the initial model $M$, the resultant model $M'$ is initialized by creating two replicas of $M$, say $M^1$ and $M^2$. $M^1$ corresponds to the true scenario whereas $M^2$ corresponds to the epistemic state of the misinformed agents. We use $s^1$ and $s^2$ to denote the states in $M^1$ and $M^2$ corresponding to the state $s$ in the initial model $M$. Update the domain of the resultant model: $S' := S_{M^1} \cup S_{M^2}$.

  • Add the accessibility relations of agent $j$, which is falsely announced to have come online, to the region that reflects the epistemic state of the misinformed agents, by applying Update_online(j, I(j)) on $M^2$.

  • Remove the accessibility of the misinformed agents from the epistemic region that reflects the true state of affairs: remove every edge $(s^1, t^1)$ from $R'_k$ where $R_k(s, t)$ holds in $M$, for all $k \in Ag \setminus \{i\}$.

  • Update the accessibility relations of all the agents in the resultant model: firstly, for all $ag \in Ag$, $R'_{ag}$ is the union of its (remaining) edges in $M^1$ and $M^2$. Then add edges to the accessibility relations of the misinformed agents such that their reachability goes from the epistemic region that reflects the true state of affairs to the region that reflects the epistemic state of the misinformed agents: for all $k \in Ag \setminus \{i\}$, add $(s^1, t^2)$ to $R'_k$ where $R_k(s, t)$ holds in $M$.

  • Set the local states of the informed agent ($i$), as they were in the previous model $M$, in the epistemic region represented by $M^1$. But the local states of the misinformed agents (i.e. those other than $i$) cannot lie in this region; they are shifted to the epistemic region represented by $M^2$, which reflects the presence of $j$. Formally, in the resultant model $M'$, update $I'(i) := \{s^1 \mid s \in I_M(i)\}$, and for all $k \in Ag \setminus \{i\}$, $I'(k) := \{s^2 \mid s \in I_M(k)\}$.

  • The above steps may leave the updated structure disconnected. Discard the components that are not reachable from the local states of the informed agents. The remaining structure is denoted as $M'$, and the sets $S'$ and $R'$ are updated with its states and relations respectively. A minimal sketch of this variant follows below.
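The untruthful online update can reuse the two-replica scaffold from the previous sketch; only the treatment of $j$ differs, since $j$ (which does not exist in the initial model) is added with the universal relation inside the misinformed agents' replica rather than removed from it. A minimal sketch of ours, with I_j standing for the announced local states of j:

    def untruthful_update_online(model, I, i, j, I_j):
        # j has no edges in the initial model, so reusing the offline scaffold
        # simply builds the two replicas and redirects the misinformed agents
        m2, I2 = untruthful_update_offline(model, I, i, j)
        F_states = {s for s in m2.S if s[0] == 'F'}
        m2.R[j] = {(s, t) for s in F_states for t in F_states}   # j "joins" only in the F-copy
        I2[j] = {('F', s) for s in I_j}                          # announced local states of j
        return m2, I2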

Figure 4: An example scenario to demonstrate untruthful update_online(g)

Example: In Figure 4 we demonstrate the model transformation from 4a to 4b owing to the false ontological update by the mouse (m) about an imaginary gruffalo (g) to the fox (f), as discussed in the story earlier. The dashed rectangular boxes show their initial and shifted local states in 4a and 4b respectively.

5 Conclusion

In this paper we explore an epistemic modeling technique based on Kripke structures wherein agents may be able to influence other agents' mental states by (mis)informing them about the presence/absence of other agents. We define two modal operators to express an agent's certainty about agency. We then define model-transformation operators which we call ontological updates, and observe that the model grows faster in the case of untruthful updates. We also discuss some examples to demonstrate the working of our approach. We feel that these ontological updates can be used in planning-based settings in artificial intelligence. The idea of imagining fictional characters seems to be an interesting pursuit if studied in an epistemic-planning setting augmented with ontological updates; we hope to explore this avenue in the future.

In this formalism, we restrict ourselves to propositions that stand for world knowledge, thereby excluding agent-specific knowledge. For instance, in the muddy children puzzle each proposition stands for some child being muddy. It is not clear how to maintain and handle such propositions, and hence beliefs about them, in a setting where agents may join or leave the system. These are a few issues that the discussed approach does not address but that need to be addressed for the approach to be useful for solving a wide class of problems.

References