1 Introduction
In multiagent systems, agents have goals to satisfy. Typically, agents cannot reach all their goals by themselves. Instead, they need to cooperate with other agents, for example because they need a specific resource to satisfy a goal, or lack the capability required to perform a task. Moreover, in most real-world settings, we cannot always be certain that agents will carry out their assigned tasks successfully.
The questions, then, are: Which agent should one cooperate with? Which group of agents should one join? The problem of assembling a group of cooperating agents so that all agents reach their goals, shared or not, is referred to as coalition formation, and has been the focus of many recent studies in the area of multiagent systems (e.g., Sichman (1998); Sichman and Conte (2002); Shehory and Kraus (1998); Klusch and Gerber (2002); Boella et al. (2009); Grossi and Turrini (2010); Caire et al. (2008)). This paper introduces a novel contextual reasoning approach to this problem based on Multi-Context Systems (MCS).
Multi-Context Systems (MCS) Giunchiglia and Serafini (1994); Ghidini and Giunchiglia (2001); Brewka and Eiter (2007) are logical formalizations of distributed context theories connected through a set of bridge rules, which enable information flow between different contexts. A context can be thought of as a logical theory (a set of axioms and inference rules) that models the knowledge of an agent. Intuitively, MCS can be used to represent any information system that consists of heterogeneous knowledge-based agents, including peer-to-peer systems, distributed ontologies and Ambient Intelligence systems. Several applications have already been developed on top of MCS or other similar formal models of context, including (a) the CYC common sense knowledge base Lenat and Guha (1989), (b) contextualized ontology languages, such as Distributed Description Logics Borgida and Serafini (2003) and C-OWL Bouquet et al. (2003), (c) context-based agent architectures Parsons et al. (1998); Sabater et al. (2002), and (d) distributed reasoning algorithms for Mobile Social Networks Antoniou et al. (2010) and Ambient Intelligence systems Bikakis et al. (2011).
In this article, we address the question of how to find and evaluate coalitions among agents while taking advantage of the MCS model and algorithms. Specifically, our approach uses two variants of MCS. The first variant, nonmonotonic MCS Brewka and Eiter (2007), allows us to handle incomplete information and potential conflicts that may arise when integrating information from different sources. The second variant, possibilistic MCS Jin et al. (2012), enables modeling uncertainty in context theories and bridge rules. The main advantages of our approach are: (a) MCS can represent heterogeneous multiagent systems, i.e. systems containing agents with different knowledge representation models; (b) bridge rules can represent different kinds of inter-agent relationships, such as dependencies, constraints and conflicting goals; (c) the possibilistic extension of MCS enables modeling uncertainty in the agents' actions; and (d) there are both centralized and distributed algorithms that can be used for computing the potential coalitions.
We formulate our main research question as:

How to find and evaluate coalitions among agents in multiagent systems using MCS tools while taking into consideration the uncertainty around the agents’ actions?
This breaks down into the following three subquestions:

How to formally compute the solution space for coalition formation using the MCS model and algorithms?

How to select the best solution given a set of requirements?

How to compute and evaluate coalitions taking also into account the uncertainty in the agents’ actions?
Our methodology is the following. We start by modeling dependencies among agents using dependence relations, as described in Sichman and Conte (2002). We then model the system as a nonmonotonic MCS: each agent is modeled as a context with a knowledge base and an underlying logic, and dependence relations are modeled as bridge rules. Third, we use appropriate algorithms to compute the MCS equilibria; each equilibrium corresponds to a different coalition. Finally, given a set of requirements, we show how to select the best solutions. The requirements we consider may be of two kinds. They may be domain-related; for example, in robotics, power consumption is a key concern that must be carefully dealt with. They may also be system-related; for example, in multiagent systems, the efficiency and conviviality of the system may be considered. We then extend our approach with features of possibilistic reasoning: we extend the definition of dependence relations with a certainty degree, and we use the model and algorithms of possibilistic MCS to compute the potential coalitions under uncertainty. In this case, we evaluate the different coalitions based on the certainty degree with which each coalition achieves the different goals, using multi-criteria decision-making methods.
This article is an extended version of Bikakis and Caire (2014), where we presented our methodology for formalizing multiagent systems and computing coalitions of agents under the perfect-world assumption, i.e., actions are always carried out successfully by the agents to which they have been assigned. Here, we provide more details about the computation of coalitions and the selection of the best coalition in the perfect-world case. We also present new results for the cases in which the perfect-world assumption does not hold due to uncertainty around the agents' actions.
The rest of the paper is structured as follows. Section 2 presents background information on dependence networks, coalition formation, nonmonotonic MCS and possibilistic MCS, using an example from social networks. Section 3 introduces our main example, drawn from robotics. Section 4 describes our approach in a setting without uncertainty: how we use MCS to represent agents and their dependencies; how we systematically compute the coalitions; and how we then select the best coalitions with respect to given requirements. Section 5 presents the possibilistic reasoning approach, which takes into account the uncertainty in the agents' actions. Section 6 presents related research, and Section 7 concludes with a summary and a perspective on future work.
2 Background
2.1 Dependence Networks and Coalition Formation
Our model of dependencies among agents in multiagent systems is based on dependence networks. According to Conte and Sichman Sichman and Demazeau (2001), dependence networks can be used to represent the pattern of relationships that exist between agents, and more specifically, the interdependencies among agents' goals and actions. They can be used to study emerging social structures such as aggregates of heterogeneous agents. They are based on a social reasoning mechanism, on social dependence and on power Sichman and Conte (2002). Power, in this context, means the ability to fulfill a goal. Multiagent dependence allows one to express a wide range of interdependent situations between agents.
A dependence network consists of a finite set or sets of actors and the relation or relations between them Sichman et al. (1994). Actors can be people or organizations. They are linked together by goals, behaviors and exchanges, such as hard currency or information. A dependence network can naturally be represented as a directed graph. Informally, the nodes in the graph represent both the agents themselves and the actions they have to perform to reach a goal, while the directed edges are labelled with goals and link agents with actions.
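This graph reading can be sketched in a few lines of Python; the class and method names below are illustrative, not part of any dependence-network library:

```python
from collections import defaultdict

class DependenceNetwork:
    def __init__(self):
        # edges[depender] -> list of (dependee, goal, action) triples
        self.edges = defaultdict(list)

    def add_dependence(self, depender, dependee, goal, action):
        """Record that `depender` depends on `dependee` for `goal` via `action`."""
        self.edges[depender].append((dependee, goal, action))

    def dependees(self, agent, goal):
        """Return the agents that `agent` depends on to achieve `goal`."""
        return {d for (d, g, _) in self.edges[agent] if g == goal}

net = DependenceNetwork()
net.add_dependence("r2", "r1", "g1", "src(pen)")
net.add_dependence("r2", "r4", "g1", "dest(pen)")
print(sorted(net.dependees("r2", "g1")))  # ['r1', 'r4']
```

Edges are stored per depender, so querying which agents one agent relies on for a given goal is a simple filter over its outgoing edges.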
When agents cooperate to achieve some of their goals, they form groups, or coalitions. Coalitions are topological aspects of a dependence network: they are indicative of some kind of organization, for example cooperation between the agents in the network. A coalition is supposed to ensure individual agents a sufficient payoff to motivate them to collaborate. In a coalition, agents coordinate their behaviors to reach their shared or reciprocal goals, as described for example in Sauro (2006); Sichman and Demazeau (2001). All the agents in the coalition somehow benefit from the goals being reached. A coalition can achieve its purpose if its members are cooperative, i.e., if they adopt the goals of the coalition in addition to their own goals.
2.2 Multi-Context Systems
Multi-Context Systems (MCS) Giunchiglia and Serafini (1994); Ghidini and Giunchiglia (2001); Brewka and Eiter (2007) have been the main effort to formalize context and contextual reasoning in Artificial Intelligence. We use here the definition of heterogeneous nonmonotonic MCS given in Brewka and Eiter (2007). The main idea is to allow different logics to be used in different contexts, and to model information flow among contexts via bridge rules.
2.2.1 Formalization
According to Brewka and Eiter (2007), a MCS is a set of contexts, each composed of a knowledge base with an underlying logic, and a set of bridge rules. A logic L = (KB_L, BS_L, ACC_L) consists of the following components:

KB_L is the set of well-formed knowledge bases of L. Each element of KB_L is a set of formulae.

BS_L is the set of possible belief sets, where each belief set is a set of formulae.

ACC_L: KB_L → 2^(BS_L) is a function describing the semantics of the logic by assigning to each knowledge base a set of acceptable belief sets.
As shown in Brewka and Eiter (2007), this definition captures the semantics of many different logics, both monotonic, e.g. propositional logic, description logics and modal logics, and nonmonotonic, e.g. default logic, circumscription, defeasible logic and logic programs under the answer set semantics.
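As an illustration of the (KB, BS, ACC) interface, the following hedged Python sketch instantiates ACC for one of the simplest logics it captures, definite Horn programs, where each knowledge base has exactly one acceptable belief set (its least model). The encoding of rules as (head, body) pairs is our own choice:

```python
def acc_horn(kb):
    """ACC for definite Horn programs: returns the set of acceptable belief
    sets, which here is a one-element list holding the least model."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in kb:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return [model]

# kb: rules as (head, body) pairs; facts have an empty body.
kb = {("rain", frozenset()), ("wet", frozenset({"rain"}))}
print(sorted(acc_horn(kb)[0]))  # ['rain', 'wet']
```

For a nonmonotonic logic such as a normal logic program, ACC would instead return one belief set per answer set, possibly none or several.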
A bridge rule refers in its body to other contexts and can thus add information to a context based on what is believed or disbelieved in other contexts. Bridge rules are added to those contexts to which they potentially add new information. Let L = (L_1, ..., L_n) be a sequence of logics. An L_k-bridge rule over L, 1 ≤ k ≤ n, is of the form
(1) s ← (c_1 : p_1), ..., (c_j : p_j), not (c_{j+1} : p_{j+1}), ..., not (c_m : p_m)
where 1 ≤ c_i ≤ n refers to a context, p_i is an element of some belief set of L_{c_i}, and k refers to the context receiving information s. We denote by head(r) the belief formula s in the head of a bridge rule r.
A MCS M = (C_1, ..., C_n) is a set of contexts C_i = (L_i, kb_i, br_i), 1 ≤ i ≤ n, where L_i = (KB_i, BS_i, ACC_i) is a logic, kb_i ∈ KB_i a knowledge base, and br_i a set of L_i-bridge rules over (L_1, ..., L_n). For each H ⊆ { head(r) | r ∈ br_i } it holds that kb_i ∪ H ∈ KB_i, meaning that bridge rule heads are compatible with knowledge bases.
A belief state of a MCS is the collection of the belief sets of its contexts. Formally, a belief state of M = (C_1, ..., C_n) is a sequence S = (S_1, ..., S_n) such that S_i ∈ BS_i. Intuitively, S is derived from the knowledge of each context and the information conveyed through applicable bridge rules. A bridge rule r of form (1) is applicable in a belief state S iff for 1 ≤ i ≤ j: p_i ∈ S_{c_i} and for j + 1 ≤ i ≤ m: p_i ∉ S_{c_i}.
Equilibrium semantics selects certain belief states of a MCS M as acceptable. Intuitively, an equilibrium is a belief state S where each context C_i respects all bridge rules applicable in S and accepts S_i. Formally, S = (S_1, ..., S_n) is an equilibrium of M iff, for 1 ≤ i ≤ n:
S_i ∈ ACC_i(kb_i ∪ { head(r) | r ∈ br_i applicable in S }).
S is a grounded equilibrium of M iff, for 1 ≤ i ≤ n, S_i is an answer set of the logic program kb_i ∪ { head(r) | r ∈ br_i applicable in S }. For a definite MCS (a MCS without default negation in the bridge rules), its unique grounded equilibrium is the collection consisting of the least (with respect to set inclusion) Herbrand model of each context.
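For a definite MCS, the unique grounded equilibrium can be computed by a joint fixpoint iteration over all contexts and bridge rules. The following Python sketch assumes contexts encoded as definite Horn programs; the data layout and names are ours, not the paper's:

```python
def grounded_equilibrium(kbs, bridge_rules):
    """Joint fixpoint for a definite MCS: kbs[i] is context i's Horn program
    (rules as (head, body) pairs); a bridge rule is (target, head, body)
    with body a list of (context, atom) pairs."""
    belief = [set() for _ in kbs]
    changed = True
    while changed:
        changed = False
        # apply local rules in every context
        for i, kb in enumerate(kbs):
            for head, body in kb:
                if body <= belief[i] and head not in belief[i]:
                    belief[i].add(head)
                    changed = True
        # apply bridge rules across contexts
        for tgt, head, body in bridge_rules:
            if all(atom in belief[c] for c, atom in body) and head not in belief[tgt]:
                belief[tgt].add(head)
                changed = True
    return belief

kbs = [{("a", frozenset())}, set()]
br = [(1, "b", [(0, "a")])]  # context 1 imports b once context 0 proves a
print(grounded_equilibrium(kbs, br))  # [{'a'}, {'b'}]
```

Iteration stops when no local rule or bridge rule adds anything new, mirroring the least-model construction per context.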
In a MCS, even if contexts are locally consistent, their bridge rules may render the whole system inconsistent. This is formally described in Brewka and Eiter (2007) as a lack of an equilibrium. Most methods for inconsistency resolution in MCS are based on the following intuition: a subset of the bridge rules that cause inconsistency must be invalidated and another subset must be unconditionally applied, so that the entire system becomes consistent again (e.g. Eiter et al. (2010a, b)). In Caire et al. (2013) we proposed a different method based on conviviality, a property of multiagent systems.
Example 1
Consider a scholar social network through which software agents, acting on behalf of students or researchers, share information about research articles they find online. Consider three such agents, each one with its own knowledge base and logic exchanging information about a certain article. The three agents can be represented as contexts in a MCS . The knowledge bases of the three contexts are respectively:
is a logic program stating that the article is about sensors and corba, and that articles about corba
that are not classified in
centralizedComputing can be classified in distributedComputing. states in propositional logic that the article is written by profA. is an ontology about computing written in a basic description logic, according to which ubiquitousComputing is a type of ambientComputing. The three agents use bridge rules  to exchange information about articles. With and , the first agent classifies articles about middleware (as described in ) in the category of centralizedComputing, and articles about ambientComputing (as described in ) in distributedComputing. With , the second agent classifies articles about corba in middleware. Finally, with , the third agent classifies articles about sensors, which have been written by profB, in ubiquitousComputing. has one equilibrium:
according to which, the first agent classifies the paper in centralizedComputing, and the second agent classifies it in middleware.
Consider now the case that profB is identified by as a second author of the paper:
Rules and would then become applicable, and as a result would not have an equilibrium; it would therefore be inconsistent. To resolve the conflict, one of the four bridge rules  would have to be invalidated. For example, by invalidating rule , the system would have one equilibrium:
2.2.2 Computational Complexity
Brewka and Eiter (2007) present an analysis of computational complexity, focusing on MCS with logics that have poly-size kernels. A logic has poly-size kernels if there is a mapping which assigns to every knowledge base kb and every belief set S ∈ ACC(kb) a set (written as a string) of size polynomial in the size of kb, called the kernel of S, such that there is a one-to-one correspondence between the belief sets in ACC(kb) and their kernels. Examples of logics with poly-size kernels include propositional logic, default logic, autoepistemic logic and nonmonotonic logic programs. If, furthermore, given any knowledge base kb, an element x, and a set of elements K, deciding whether (i) K is the kernel of some S ∈ ACC(kb) and (ii) x is in S is in a given complexity class, then we say that the logic has kernel reasoning in that class. For example, default logic and autoepistemic logic have kernel reasoning in .
According to the analysis in Brewka and Eiter (2007), for a finite MCS (where all knowledge bases and sets of bridge rules are finite, and the logics are from an arbitrary but fixed set) in which all logics have poly-size kernels and kernel reasoning in , deciding whether a literal is in a belief set for some (or each) equilibrium of the MCS is in (resp. ).
2.3 Possibilistic reasoning in MCS
Recently, Jin et al. proposed a framework for possibilistic reasoning in Multi-Context Systems, which they called possibilistic MCS Jin et al. (2012). This has so far been the only attempt to model uncertainty in MCS. It is based on possibilistic logic Dubois et al. (1994) and possibilistic logic programs Nicolas et al. (2006), which are logic-based frameworks for representing states of partial ignorance using a dual pair of possibility and necessity measures. These frameworks are in turn based on ideas from Zadeh's possibility theory Zadeh (1978). Below, we first provide some preliminary information on possibilistic logic programs, which will then help us present possibilistic MCS.
2.3.1 Possibilistic Logic Programs Nicolas et al. (2006)
Possibilistic logic and logic programs use the notion of a possibilistic concept, which is denoted by , where denotes its classical counterpart. For example, in possibilistic logic programs, this notion is used in the definitions of possibilistic atoms and poss-programs:
Definition 1
Let A be a finite set of atoms. A possibilistic atom is p = (x, α), where x ∈ A and α ∈ (0, 1].
The classical projection of p is the atom x; α is called the necessity degree of p.
Definition 2
A possibilistic normal logic program (or poss-program) is a set of possibilistic rules of the form:
(2) r : c ← a_1, ..., a_m, not b_1, ..., not b_n. (α)
where c, a_i, b_j ∈ A and α ∈ (0, 1].
In (2), α represents the certainty level of the information described by rule r. The head of r is defined as head(r) = c and its body as body(r) = body+(r) ∪ not body-(r), where body+(r) = {a_1, ..., a_m} and body-(r) = {b_1, ..., b_n}. The positive projection of r is
(3) r+ : c ← a_1, ..., a_m. (α)
The classical projection of r is the classical rule:
(4) c ← a_1, ..., a_m, not b_1, ..., not b_n.
If a poss-program P does not contain any default negation, then P is called a definite poss-program. The reduct of a poss-program P w.r.t. a set of atoms X is the definite poss-program P^X defined as:
(5) P^X = { r+ | r ∈ P, body-(r) ∩ X = ∅ }
For a set of atoms X and a rule r, we say that r is applicable in X if body+(r) ⊆ X and body-(r) ∩ X = ∅. App(P, X) denotes the set of rules in P that are applicable in X.
P is said to be grounded if it can be ordered as a sequence ⟨r_1, ..., r_n⟩ such that
(6) for every i ∈ {1, ..., n}: r_i is applicable in { head(r_1), ..., head(r_{i−1}) }
Given a poss-program P over a set of atoms A, the semantics of P is defined through possibility distributions on 2^A.¹
¹ For more details about the semantics of poss-programs, see Nicolas et al. (2006).
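A minimal sketch of how necessity degrees propagate in a definite poss-program: each derived atom gets the maximum, over the rules deriving it, of the minimum of the rule's certainty and its body atoms' necessities. This follows the standard possibilistic fixpoint intuition; the encoding is illustrative:

```python
def poss_fixpoint(rules):
    """Necessity fixpoint for a definite poss-program.
    rules: list of (head, body_atoms, certainty) with certainty in (0, 1]."""
    nec = {}
    changed = True
    while changed:
        changed = False
        for head, body, alpha in rules:
            if all(b in nec for b in body):
                # a derivation is only as certain as its weakest link
                val = min([alpha] + [nec[b] for b in body])
                if nec.get(head, 0.0) < val:
                    nec[head] = val
                    changed = True
    return nec

rules = [("rain", [], 0.8), ("wet", ["rain"], 0.6)]
print(poss_fixpoint(rules))  # {'rain': 0.8, 'wet': 0.6}
```

Atoms that are not derivable at all simply receive no necessity degree.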
2.3.2 Possibilistic MCS Jin et al. (2012)
A possibilistic MCS (or poss-MCS) is a collection of possibilistic contexts. A possibilistic context is a triple C_i = (A_i, P_i, B_i), where A_i is a set of atoms, P_i is a poss-program, and B_i is a set of possibilistic bridge rules. A possibilistic bridge rule is defined as follows:
Definition 3
Let C_1, ..., C_n be possibilistic contexts. A possibilistic bridge rule for context C_k is of the form
(7) r : ( s ← (c_1 : p_1), ..., (c_j : p_j), not (c_{j+1} : p_{j+1}), ..., not (c_m : p_m) ) (α)
where s is an atom in A_k, each p_i is an atom in context c_i, and α ∈ (0, 1].
Intuitively, a rule of form (7) states that information s is added to context k with necessity degree α if, for 1 ≤ i ≤ j, p_i is provable in context c_i and, for j + 1 ≤ i ≤ m, p_i is not provable in c_i.
By r+ (see equation (3)) we denote the positive projection of r. The necessity degree of r is denoted by n(r).
Definition 4
A possibilistic Multi-Context System, or just poss-MCS, is a collection of possibilistic contexts C_i = (A_i, P_i, B_i), 1 ≤ i ≤ n, where each A_i is the set of atoms used in context i, P_i is a poss-program on A_i, and B_i is a set of possibilistic bridge rules over the atom sets A_1, ..., A_n.
A poss-MCS is definite if the poss-program and possibilistic bridge rules of each context are definite.
Definition 5
A possibilistic belief state S = (S_1, ..., S_n) is a collection of possibilistic atom sets, where each S_i is a collection of possibilistic atoms (x, α) with x ∈ A_i and α ∈ (0, 1].
We will now describe the semantics of poss-MCS, starting with definite poss-MCS. The following definition specifies the possibility distribution over belief states for a given definite poss-MCS. It uses the notion of satisfiability of a rule r, which is based on its applicability w.r.t. a belief state S:
(8) 
Definition 6
Let M be a definite poss-MCS and S a belief state. The possibility distribution π_M for M is defined as:
(9) 
The possibility distribution π_M specifies the degree of compatibility of each belief state with the poss-MCS M. Based on Definition 6 we can now define the possibility and necessity of an atom in a belief state.
Definition 7
Let M be a definite poss-MCS and π_M be the possibility distribution for M. The possibility and necessity of an atom x are respectively defined as:
(10) Π_M(x) = max{ π_M(S) | S is a belief state with x ∈ S }
(11) N_M(x) = 1 − max{ π_M(S) | S is a belief state with x ∉ S }
Π_M(x) represents the level of consistency of x w.r.t. the poss-MCS M, while N_M(x) represents the level at which x can be inferred from M. For example, whenever an atom belongs to the grounded equilibrium of the classical projection of M, its possibility is equal to 1.
The semantics of a definite poss-MCS is determined by its unique possibilistic grounded equilibrium.
Definition 8
Let M be a definite poss-MCS. Then the following set of possibilistic atoms is referred to as the possibilistic grounded equilibrium of M:
(12) 
where for
As proved in Jin et al. (2012) (Proposition 5), the classical projection of the possibilistic grounded equilibrium of M is the grounded equilibrium of M′, where M′ is the classical projection of M.
The definition of the semantics of normal poss-MCS is based on the notion of reduct for normal poss-MCS, which is in turn based on the definition of the rule reduct (see equation (5)):
Definition 9
Let M be a normal poss-MCS and S a belief state. The possibilistic reduct of M w.r.t. S is the poss-MCS
(13) 
where .
Note that the reduct of each local poss-program relies only on the local belief set, while the reduct of the bridge rules depends on the whole belief state S.
Given the notion of reduct for normal poss-MCS, the equilibrium semantics of normal poss-MCS is defined as follows:
Definition 10
Let M be a normal poss-MCS and S a possibilistic belief state. S is a possibilistic equilibrium of M if S is the possibilistic grounded equilibrium of the reduct of M w.r.t. S.
Jin et al. (2012) also present a fixpoint theory for definite poss-MCS, which provides a way of computing the equilibria of both definite and normal poss-MCS.
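One plausible reading of such a fixpoint construction for a definite poss-MCS can be sketched in Python: local rules and possibilistic bridge rules propagate necessity degrees by the same min/max policy used within a single poss-program. This is a simplified illustration, not the exact operator of Jin et al. (2012):

```python
def poss_mcs_fixpoint(kbs, bridge_rules):
    """Necessity fixpoint for a definite poss-MCS.
    kbs[i]: local rules (head, body_atoms, certainty) of context i;
    bridge rules: (target, head, [(context, atom), ...], certainty)."""
    nec = [dict() for _ in kbs]
    changed = True
    while changed:
        changed = False
        for i, kb in enumerate(kbs):
            for head, body, alpha in kb:
                if all(b in nec[i] for b in body):
                    val = min([alpha] + [nec[i][b] for b in body])
                    if nec[i].get(head, 0.0) < val:
                        nec[i][head] = val
                        changed = True
        for tgt, head, body, alpha in bridge_rules:
            if all(atom in nec[c] for c, atom in body):
                val = min([alpha] + [nec[c][atom] for c, atom in body])
                if nec[tgt].get(head, 0.0) < val:
                    nec[tgt][head] = val
                    changed = True
    return nec

kbs = [[("a", [], 0.9)], []]
br = [(1, "b", [(0, "a")], 0.7)]
print(poss_mcs_fixpoint(kbs, br))  # [{'a': 0.9}, {'b': 0.7}]
```

The certainty of an imported atom is capped both by the bridge rule's own degree and by the necessity of the atoms it depends on in the source contexts.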
Example 2
In a different version of Example 1, all agents use possibilistic logic programs to encode their knowledge and bridge rules, forming a possibilistic MCS . The three agents are modelled as contexts , and , respectively, with knowledge bases:
and bridge rules:
Rules or facts with degree 1 indicate that the agent is certain about them, while rules with degree less than 1 indicate uncertainty about whether the rule holds.
is a normal poss-MCS. In order to compute its possibilistic equilibrium, we first have to compute its reduct with respect to , where is the grounded equilibrium of (the classical projection of ):
The reduct of with respect to , , is derived from by replacing with :
The next step is to compute the necessity of each atom in . Following Definition 6, , as is the grounded equilibrium of . For
it holds that , while for
it holds that . Using Definition 7, the necessities of the atoms in are: , , , and . The possibilistic equilibrium of is therefore:
3 Main Example
We now present a scenario to illustrate how our approach works. Consider an office building where robots assist human workers. As is typical, there are not enough office supplies, such as cutters and glue, for everyone, so they have to be shared among the workers. Furthermore, since it is considered inefficient and unproductive for workers to contact colleagues and fetch supplies themselves, workers can submit requests to the robots to get and/or deliver the needed supplies for them while they keep working at their desks. We refer to a request submitted to the robots as a task.
Workers and robots communicate via a simple web-based application, which transmits the workers' requests to the robots and keeps track of their status. The robots have limited computational resources: they only keep track of their recent past. Furthermore, not all robots know the exact locations of the supplies. Therefore, robots rely on each other for information about the locations of the supplies: the last robot having dealt with a supply is the one that knows where it is. We assume the availability of such an application, and a stable and reliable communication network. A depiction of the scenario is presented in Figure 1.
We consider a set of four robots and four tasks: , where is to deliver a pen to desk , is to deliver a piece of paper to desk , is to deliver a tube of glue to desk , and is to deliver a cutter to desk . We assume that a robot can perform a task if it can carry the relevant material and knows its source and destination. Due to their functionalities, the robots can carry the following material: the pen or the glue, the paper, the glue or the cutter, and the pen or the cutter, respectively. Each robot knows who has the information about the source and the destination of each material, but the actual coordinates are only revealed after the robots have agreed on a coalition. This creates interdependencies among the robots.
To start, the robots get information about the locations of the supplies and the distances between the material and their destinations. Tables 1 and 2 present the robots' knowledge about the tasks and the current distances among the robots, the material and the destinations, respectively. Table 1 should be read as follows. Robot , regarding task , knows nothing about the source of the pen, i.e., where it currently is, but does know the destination of the pen, i.e., where it must be delivered. Regarding task , robot knows where the paper is, but knows nothing about its destination.
Robot  
Task  
Source  x  x  
Destination  x  x  x  
Robot  
Task  
Source  x  x  
Destination  x 
Distances among locations

Robot        Pen   Paper   Glue   Cutter
             10    15      9      12
             14    8       11     13
             12    14      10     7
             9     12      15     11

Destination  Pen   Paper   Glue   Cutter
             11    16      9      8
             14    7       12     9
Upon receiving information about the tasks, the robots generate plans to carry out the tasks based on their knowledge and capabilities. For example, there are two different plans for delivering the pen to desk (): can deliver it after receiving information about its location from robot ; alternatively, can deliver it after receiving information about its location from and about its destination from . Given the plans, the robots then need to decide how to form coalitions to execute the tasks. We refer to a coalition as a group of robots executing a task. For example, to accomplish all tasks, the following two coalitions may be formed:
After forming coalitions, each robot has to generate its own plan to carry out the assigned tasks, e.g. plan the optimal route to get the material and carry it to its destination.
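The step from plans to candidate coalitions can be illustrated with a small Python sketch: if each task comes with a set of alternative plans (each plan here being just the set of robots it involves), a candidate coalition structure is one plan choice per task. The plan sets below are hypothetical placeholders, not the scenario's actual plans:

```python
from itertools import product

# Hypothetical plan sets: each plan is the set of robots it involves.
plans = {
    "t1": [frozenset({"r1", "r2"}), frozenset({"r1", "r3", "r4"})],
    "t2": [frozenset({"r2", "r3"})],
}

# One coalition structure = one plan choice per task.
coalitions = [dict(zip(plans, choice)) for choice in product(*plans.values())]
print(len(coalitions))  # 2

# The robots involved in a coalition structure are the union of its plans.
members = set().union(*coalitions[0].values())
print(sorted(members))  # ['r1', 'r2', 'r3']
```

In the MCS encoding described in Section 4, this enumeration is not done by hand: each equilibrium of the system corresponds to one such combination.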
Typically, route-planning programs split into two main parts: a representation of the environment, and a method for searching possible paths between the robot's current position and a new location while avoiding known obstacles. Hence, mobile robot navigation planning requires a sufficiently reliable estimate of the robot's current location, as well as a precise map of the navigation space. Path planning uses a model or map of the environment to determine the geometric path points that the mobile robots should track from a start position to the goal to be reached.
The most commonly used algorithms for these methods are the algorithm, a global search algorithm giving a complete and optimal global path in static environments, and its optimization, the algorithm. Other examples in the literature include distributed route-planning methods for multiple mobile robots, the Lagrangian decomposition technique, and neural networks Somhom et al. (1999); Cai and Peng (2002). One of the lessons learned in this research area is that the need for optimal planning is outweighed by the need for quickly finding an appropriate plan Koenig and Likhachev (2005). In this paper our focus is on finding and selecting among the possible coalitions with which a given set of goals will be reached, rather than on the individual plans of the agents to carry out their assigned tasks.
4 Computing and evaluating coalitions in a perfect world
One question that arises in scenarios such as the one that we present in Section 3 is how to compute the alternative coalitions that may be formed to achieve a set of given goals. Here we present a solution based on the use of heterogeneous nonmonotonic MCS Brewka and Eiter (2007), described in Section 2. The main reasons for choosing the MCS model are: (a) it enables representing systems consisting of agents with different knowledge representation models; (b) it can represent different kinds of relationships among agents such as goalbased dependencies, constraints and conflicting goals; and (c) it provides both centralized and distributed reasoning algorithms, which can be used for computing goalbased coalitions. Roughly, our solution consists in representing agent dependencies and interagent constraints using bridge rules and computing the potential coalitions using algorithms for MCS equilibria.
4.1 Modeling dependencies
We model each agent in a multiagent system as a context in a MCS. The knowledge base of the context describes the goals of the agent and the actions that it can perform. Goals and actions are represented as literals of the form , , respectively. Bridge rules represent the dependencies of the agent on other agents for achieving its goals. According to the definition given by Sichman and Demazeau (2001), a dependence relation
denotes that agent depends on agent to achieve goal , because can perform action needed in plan , which achieves the goal; for a goal of agent , which is achieved through plan , where represents action performed by agent , the following dependence relations hold:
We denote this set of dependencies as . One way to represent dependencies is with rules of the form: , where the head denotes the goal of agent that is to be achieved (), and the body describes the actions of plan that will lead to the achievement of the goal. Based on this intuition, we define bridge rules representing dependence relations among agents as follows:
Definition 11
For an agent with goal achieved through plan , the set of dependencies is represented by a bridge rule of the form:
(14) 
where , is the context representing agent .
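Definition 11 can be sketched programmatically: a plan for a goal of agent i becomes one bridge rule whose head is the goal at agent i's context and whose body lists the (context : action) pairs of the plan. The names and tuple encoding below are illustrative:

```python
def bridge_rule_from_plan(context, goal, plan):
    """Encode one plan as a bridge rule: head `goal(g)` at `context`,
    body listing (context, action) pairs, one per action in the plan."""
    body = [(ctx, action) for ctx, action in plan]
    return (context, "goal(" + goal + ")", body)

rule = bridge_rule_from_plan("c2", "g1", [("c1", "src(pen)"), ("c2", "carry(pen)")])
print(rule)
# ('c2', 'goal(g1)', [('c1', 'src(pen)'), ('c2', 'carry(pen)')])
```

Each alternative plan for the same goal yields its own bridge rule, so a goal achievable in several ways is represented by several rules with the same head.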
Based on the above representation of agents as contexts, and of goal-based dependencies among agents as bridge rules, we represent multiagent systems as MCS as follows.
Definition 12
A MCS corresponding to a multiagent system is a set of contexts , where L = (KB, BS, ACC) is the logic of agent , is a knowledge base that describes the actions that can perform and its goals, and is a set of bridge rules, a subset of which represents the dependencies of on other agents in for all goals of and all plans with which these goals can be achieved.
The main advantage of this model is that it enables agents using different logics to describe their actions and goals to form plans cooperatively by exchanging information through their bridge rules. Assuming a signature , the following are some example logics that are captured by Definition 12:

Default logic Reiter (1980): KB is the set of default theories based on ; BS is the set of deductively closed sets of formulas; and ACC() is the set of ’s extensions.

Normal logic programs under answer-set semantics Gelfond and Lifschitz (1991): KB is the set of normal logic programs over ; BS is the set of sets of atoms over ; and ACC() is the set of ’s answer sets.

Propositional logic under the closed-world assumption: KB is the set of sets of propositional formulas over ; BS is the set of deductively closed sets of propositional formulas; and ACC() is the set of ’s consequences under the closed-world assumption.
There are numerous other examples, both monotonic (e.g., description logics, modal logics, temporal logics), and nonmonotonic (e.g., circumscription, autoepistemic logic, defeasible logic).
This feature (the generality of the representation model) is particularly important in open environments, where agents are typically heterogeneous with respect to their representation and reasoning capabilities (e.g. Ambient Intelligence systems).
Example 3
In our main example, introduced in Section 3, we assume that all four robots use propositional logic. We model the four robots, , as contexts , respectively, with the following knowledge bases:
where represents the actions that a robot can perform. stands for the object to be delivered: stands for the pen, for the paper, for the glue and for the cutter. stands for the kind of action that the agent can perform: stands for carrying the object, stands for providing information about the current location (source) of the object, while stands for providing information about the destination of the object. For example, can

provide information about the source of the paper ()

provide information about the destination of the pen ()

provide information about the destination of the glue ()

carry the pen or the glue ()
We represent the four tasks that the robots are requested to perform, , as goals, . For example represents the task of delivering the pen to desk (). We also assume that a robot can fulfil goal , i.e. deliver object to its destination, if it can perform action , i.e. carry object . For example, can be fulfilled by robots and , because these robots can carry the pen ().
Given the knowledge and capabilities of robots, as described in Table 2, the robots can fulfil goals as follows. For , there are two alternative plans:
According to , robot must provide information about the source of the pen () and must carry the pen to its destination (). According to , robot must provide information about the source of the pen (), must provide information about its destination (), and must carry the pen to its destination ().
For there is only one plan, ; for there are two alternative plans: and ; and for there are two plans: and :
Each plan implies dependencies among robots. For example, from the following dependency is derived:
namely depends on to achieve goal , because can provide information about the source of the pen (). Figure 2 represents the dependencies derived from all plans, abstracting from plans, similarly to Sichman and Conte (2002). The figure should be read as follows: The pair of arrows going from node to the rectangle box labeled and then to node indicates that agent depends on agent to achieve goal , because the latter can perform action .
Bridge rules  represent the same dependencies. Each rule represents the dependencies derived by a different plan. For example corresponds to plan and represents dependency .
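The derivation of dependence relations from a plan, as used in this example, can be sketched as follows; one relation is produced per action that another agent must perform (all identifiers here are illustrative):

```python
def dependencies(agent, goal, plan_name, plan):
    """Derive dep(agent, other, goal, plan, action) relations from a plan:
    one relation per action performed by another agent."""
    return [(agent, other, goal, plan_name, action)
            for other, action in plan if other != agent]

deps = dependencies("r2", "g1", "p", [("r1", "src(pen)"), ("r2", "carry(pen)")])
print(deps)  # [('r2', 'r1', 'g1', 'p', 'src(pen)')]
```

Actions the agent performs itself produce no dependence relation, which matches the intuition that an agent does not depend on anyone for its own actions.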