An Argumentation-based Approach for Identifying and Dealing with Incompatibilities among Procedural Goals

09/11/2020
by   Mariela Morveli-Espinoza, et al.
Umeå universitet
CSIC
UTFPR

During the first step of practical reasoning, i.e. deliberation, an intelligent agent generates a set of pursuable goals and then selects which of them he commits to achieve. An intelligent agent may in general generate multiple pursuable goals, which may be incompatible with one another. In this paper, we focus on the definition, identification, and resolution of these incompatibilities. The suggested approach considers the three forms of incompatibility introduced by Castelfranchi and Paglieri, namely the terminal incompatibility, the instrumental or resource incompatibility, and the superfluity. We computationally characterise these forms of incompatibility by means of arguments that represent the plans that allow an agent to achieve his goals. Thus, the incompatibility among goals is defined based on the conflicts among their plans, which are represented by means of attacks in an argumentation framework. We also address the problem of goal selection: we propose to use abstract argumentation theory, i.e. argumentation semantics, to deal with it. We use a modified version of the "cleaner world" scenario in order to illustrate the performance of our proposal.


1 Introduction

Practical reasoning means reasoning directed towards actions, i.e. it is the process of figuring out what to do. According to [1], practical reasoning involves two phases: (i) deliberation, which is concerned with deciding what state of affairs an agent wants to achieve, so that the outputs of the deliberation phase are the goals the agent intends to pursue, and (ii) means-ends reasoning, which is concerned with deciding how to achieve these states of affairs, so that the outputs of means-ends reasoning are plans. The first phase is itself decomposed into two parts: (i) first, the agent generates a set of possible goals, which we call pursuable goals (pursuable goals are also known as desires, and pursued goals as intentions; in this work, we consider both to be goals at different stages of processing), and (ii) second, the agent chooses which goals he will be committed to bring about.

This paper (an extended version of the extended abstract originally presented at the 16th Conference on Autonomous Agents and MultiAgent Systems, AAMAS'17 [2]) focuses on the process of goal selection, i.e. on deciding which consistent set of goals the agent will pursue. We specifically deal with procedural goals (a goal is called procedural when there is a set of plans for achieving it; this differs from declarative goals, which are descriptions of the state sought [3], and other authors refer to this type of goal as an achievement goal [4][5]) and consider that the agents have limited resources; in other words, we work with resource-bounded agents.

Given that an intelligent agent may generate multiple pursuable goals, some incompatibilities among these goals could arise, in the sense that it is not possible to pursue them simultaneously. Thus, a rational agent should not simultaneously pursue a goal g and a goal g' if g prevents the achievement of g', in other words, if they are inconsistent [3]. Reasons for not pursuing some goals simultaneously are generally related to the fact that the plans for reaching such goals may block each other [6].

For a better illustration of the problem, let us present the following scenario. It is based on the well-known "cleaner world" scenario, where a set of robots has the task of cleaning the dirt of an environment. The environment is divided into numbered zones referenced by ordered pairs, which facilitates communication among the robots. The main goal of all the robots is to have the environment clean. There can be two kinds of dirt, liquid and solid: when the dirt is liquid, the agent mops it, and when it is solid, the agent picks it up. All of the robots are equipped with a trash can of limited capacity and have the ability to clean both kinds of dirt. However, depending on the distance between the agent and the dirt, a robot may or may not be able to recognize the kind of dirt. In the environment there is also a workshop, where robots can go to be fixed or to recharge their batteries, and a waste bin, to which robots carry the trash they have picked up. During the execution of this task the agents may generate several pursuable goals, which can give rise to conflicts or incompatibilities among them.

According to Castelfranchi and Paglieri [7], three forms of incompatibility could emerge: terminal, instrumental, and superfluity. Let us see an example of each kind of incompatibility based on our scenario:

  • Terminal incompatibility: Suppose that at a given moment one of the robots, let us call him BOB, detects dirt in slot (3,4); hence, the goal "cleaning slot (3,4)" becomes pursuable. On the other hand, BOB also detects a minor technical defect in his antenna; hence, the goal "going to the workshop to be fixed" also becomes pursuable. BOB cannot pursue both goals at the same time because the plans adopted for each goal lead to an inconsistency: he needs to be operative to clean slot (3,4) but non-operative to go to the workshop to be fixed.

  • Instrumental or resource incompatibility: It arises because the agents have limited resources. Suppose that BOB is in slot (1,4) and detects two dirty slots, slot (3,4) and slot (4,1). Therefore, the goals "cleaning slot (3,4)" and "cleaning slot (4,1)" become pursuable; however, he only has battery for executing the plan of one of the goals. Consequently, a conflict over the resource battery arises, and BOB has to choose which slot to clean.

  • Superfluity: It occurs when the agent pursues two goals that lead to the same end. Suppose that BOB is in slot (1,4). On one hand, he detects dirt in slot (5,5); since it is far from his location, he cannot identify the kind of dirt; hence, the goal "cleaning slot (5,5)" becomes pursuable. On the other hand, another cleaner robot also detects the same dirty slot and notes that it is liquid dirt; however, this robot's battery is low and he is not able to do the task, whereby he sends a message to BOB to mop slot (5,5); hence, the goal "mopping slot (5,5)" becomes pursuable. It is easy to notice that both goals have the same end, namely that slot (5,5) be clean, with the difference that "mopping slot (5,5)" is a more specific goal.

From these simple examples, one can observe that conflicts between goals emerge quite easily; hence, when conflicts arise, an agent should be able to choose which goals he will pursue, in other words, he should be able to deal with such conflicts or incompatibilities. Besides, BDI-based (BDI is the acronym for the Belief-Desire-Intention model [8][9]) agent programming languages should allow agent programmers to implement agents that do not pursue incompatible goals simultaneously and that can choose from possibly incompatible goals [10]. Therefore, the study of the possible forms of incompatibility among goals will benefit both theoretical research and practical applications.

The notion of conflict is not novel. Some researchers have focused on both the detection and the resolution of conflicts, others only on conflict detection, and others on the resolution of emerging conflicts. Thangarajah is one of the authors who has worked extensively on this problem. In [11], he and his colleagues propose a general framework for the detection and resolution of conflicts; in [12], they focus on detecting and dealing with a special kind of conflict; and finally, in [13], they focus on resolving resource conflicts. The deliberation strategy for choosing the goals the agent will pursue is the concern of other research (e.g., [14][15][10][16][17]). On the other hand, since this problem is directly related to agent programming languages, Zatelli et al. [17] present a summary of how some languages deal with it. In some of these platforms, the programmer is in charge of specifying which goals must be pursued atomically in order to avoid any kind of conflict with other goals (e.g., Jason [18], 2APL [19], and JIAC [20]). Other platforms give more flexibility and allow non-atomic goals to be pursued at the same time as atomic goals (e.g., N-2APL [21], AgentSpeak(RT) [22], and ALOO [23]).

Formal argumentation, or just argumentation, is an appropriate approach for reasoning with inconsistent (conflicting) information [24]. Although argumentation has usually been used for formal reasoning (formal reasoning has to do with reasoning about propositional attitudes such as knowledge and beliefs), some studies have applied it to practical reasoning for the generation of desires and plans (e.g., [25][26][27][28]). The process of argumentation is based on the construction and comparison of arguments (considering attacks or conflicts among them) in order to determine the most acceptable ones. The classical form of attack is usually due to logical inconsistency between the elements that make up two arguments. However, in resource incompatibility and superfluity the conflict arises for other reasons, as presented in the examples above. Thus, we have identified that, in the context of practical reasoning, the meaning we give to the arguments and to the attacks can define new forms of conflict between arguments, which can also be supported by argumentation inferences.

Against this background, the aim of this article is to study and formalize the three forms of incompatibility mentioned above, to show how to identify each of them taking into account the plans of the agent, and to show how to deal with them. Our proposal is based on argumentation-based reasoning, since it is a suitable approach for reasoning with inconsistent information [24]. Thus, the research questions addressed in this paper are: (i) can we identify when a kind of incompatibility arises by using arguments that represent the plans of the agent and, if so, how would it be done? and (ii) faced with conflicting plans, how can the set of consistent goals be chosen?

Figure 1: General view of our argumentation-based approach. We can group the necessary steps in two levels, the plans level and the goals level. The first three steps are exclusively done over a set of plans, which are represented by means of arguments. Once the attacks among arguments are identified, they are used to determine which goals are incompatible. Finally, argumentation semantics are then used in order to find the set of goals that can be pursued without conflicts (final step).

In addressing the first question, we start with a set of goals (possibly incompatible) such that each goal has a preference value and a set of plans that allow the agent to achieve it. We represent the agent's plans by means of arguments and define the kinds of attacks that determine the incompatibilities. Regarding the second question, we use argumentation semantics in order to obtain a set of consistent goals (Figure 1 shows the workflow of the proposed argumentation-based approach). In summary, the main theoretical contributions of this paper are:

  • An argument-based formalization of the three forms of incompatibility introduced by [7]. More specifically, our contribution lies in the formalization of the resource incompatibility and the superfluity, because these kinds of attacks have not been explored in the state of the art. It is especially in these kinds of attacks that the intended meaning of the arguments in the context of practical reasoning becomes clearer and leads to a novel characterization of attacks, because they are not based on the evaluation of the inconsistency between logical formulae but instead extend the notion of attack.

  • An argumentation-based approach for dealing with the incompatibilities that were identified. We provide two ways of selecting the goals the agent will commit to pursue: the first is based on the arguments generated by the agent, and the second is based on the set of pursuable goals. In order to identify the attacks between pursuable goals, we rely on the attacks identified between arguments. The proposed approach answers the need for a holistic approach that integrates and deals with more than one type of incompatibility. While it is true that the attacks for determining resource incompatibility and superfluity are new in argumentation theory, these types of conflicts have already been studied from other perspectives (see Related Work in Section 9); however, a different approach was employed to deal with each of them.

  • A theoretical study of the properties of this formalization; thus, we show that the results of this proposal satisfy the rationality postulates determined in [29], namely consistency and completeness (closure).

This work also has a practical contribution, since it can be applied to real engineering problems. As can be seen in the example, this kind of approach can be used in robotic applications (e.g., [30][31][32]) in order to endow a robot with a system that allows him to recognize and decide about the goals he should pursue. Another possible application is the spatial planning problem, which aims to rearrange the spatial environment in order to meet the needs of a society [33]. As space is a limited resource, the planner finds conflicts among the desires and expectations about the spatial environment. These desires and expectations can be modeled as a set of restrictions and conditions, which can be considered as goals (e.g., suitability, dependency, and compatibility) [34]. Thus, the planner can be seen as a software agent that has to decide among a set of conflicting goals. Although these conflicting goals are not goals the agent wants to achieve, as in the case of the robot, the agent may use this approach in order to resolve the problem and suggest a possible arrangement of the spatial environment. The approach could also be applied during a design process, in which inconsistencies among design objectives may arise, resulting in design conflicts [35]. In this case, agents may represent designers that share knowledge and have conflicting interests. This results in a distributed design system that can be simulated as a multi-agent system [36][37]. This last type of application involves more than one agent; however, since there is a conflict among goals, it can be resolved by applying the proposed approach.

The rest of the paper is organized as follows. The next section introduces basic concepts about abstract argumentation, on which this approach is based, and Section 3 presents the mental states of the agent and the representation of his plans as arguments. In Section 4, we define the kinds of attacks that may occur between arguments, which lead to the identification of each form of incompatibility between goals. In Section 5, we delineate a set of properties related to the attacks; this is important because it allows us to use the concepts of abstract argumentation in our approach. Section 6 is focused on the definition of argumentation frameworks, one for each kind of incompatibility, together with a general argumentation framework. In Section 7, we study how to determine the set of compatible goals by means of argumentation semantics applied to the argumentation frameworks. We present the evaluation of our proposal in terms of fulfilling the postulates of rationality in Section 8. Finally, related work is discussed in Section 9, and the conclusions and future work are presented in Section 10. All the proofs are given in an appendix at the end of the document.

2 Background

In this section, we recall basic concepts related to the abstract argumentation framework (AF) developed by Dung [24], including the notion of acceptability and some semantics. The reader with previous knowledge of abstract argumentation can skip this section. This section does not aim to be a tutorial on abstract argumentation.

Definition 1

(Argumentation Framework) An argumentation framework is a tuple AF = ⟨ARG, att⟩, where ARG is a finite set of arguments and att ⊆ ARG × ARG is a binary relation that represents the attack between two arguments of ARG, so that (A, B) ∈ att denotes that the argument A attacks the argument B.
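Throughout this section we illustrate these notions with a small running sketch in Python. The sketch is ours, not part of the original paper; an argumentation framework is simply a finite set of argument labels plus a set of attack pairs:

    # A plain-data reading of Definition 1 (identifiers are ours).
    arguments = {"A", "B", "C"}             # ARG: a finite set of arguments
    attacks = {("A", "B"), ("B", "C")}      # att: (X, Y) reads "X attacks Y"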

In argumentation theory, an acceptability semantics is a function in charge of returning sets of arguments, called extensions, which are internally consistent. Next, we introduce the concepts of conflict-freeness, defense, and admissibility, together with the main semantics that we will later analyse in Section 7 in order to determine which semantics (or family of semantics) is the most adequate for the goal selection process.

Definition 2

(Basic concepts) Let AF = ⟨ARG, att⟩ be an argumentation framework and E ⊆ ARG a set of arguments:

  • E is conflict-free if there are no A, B ∈ E such that (A, B) ∈ att.

  • E defends an argument A iff for each argument B ∈ ARG, if (B, A) ∈ att, then there exists an argument C ∈ E such that (C, B) ∈ att.

  • E is admissible iff it is conflict-free and defends all its elements (these three notions are illustrated in the sketch below).
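The sketch below (helper names are ours) checks these three notions over an attack relation given as a set of (attacker, attacked) pairs:

    def conflict_free(S, attacks):
        """No attack holds between two members of S."""
        return not any((a, b) in attacks for a in S for b in S)

    def defends(S, a, arguments, attacks):
        """Every attacker of a is counter-attacked by some member of S."""
        return all(any((c, b) in attacks for c in S)
                   for b in arguments if (b, a) in attacks)

    def admissible(S, arguments, attacks):
        """Conflict-free and defends all of its elements."""
        return conflict_free(S, attacks) and all(
            defends(S, a, arguments, attacks) for a in S)

    # With A attacking B and B attacking C, {A, C} is admissible.
    assert admissible({"A", "C"}, {"A", "B", "C"}, {("A", "B"), ("B", "C")})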

Next, we define the preferred semantics. This semantics is based on the notion of admissibility together with the idea of maximizing the set of accepted arguments. We have chosen it because it can be considered a representative semantics of the family of admissibility-based semantics.

Definition 3

(Preferred semantics) Given an argumentation framework AF = ⟨ARG, att⟩ and a set E ⊆ ARG, E is a preferred extension iff it is a maximal (with respect to set inclusion) admissible subset of ARG.
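Preferred extensions can then be enumerated naively, reusing the admissible check from the sketch above; this brute-force approach is only meant to make the definition concrete, and is adequate for frameworks as small as those in this paper's examples:

    from itertools import chain, combinations

    def powerset(xs):
        xs = list(xs)
        return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

    def preferred_extensions(arguments, attacks):
        adm = [frozenset(S) for S in powerset(arguments)
               if admissible(frozenset(S), arguments, attacks)]
        return [S for S in adm if not any(S < T for T in adm)]

    # For A -> B -> C the single preferred extension is {A, C}.
    assert preferred_extensions({"A", "B", "C"}, {("A", "B"), ("B", "C")}) \
           == [frozenset({"A", "C"})]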

Unlike the preferred semantics, the stage semantics is not based on admissibility. The concept of stage semantics was introduced in [38] and further developed in [39]; in essence, a stage extension is based on conflict-freeness (we call a semantics conflict-free when it is based only on the conflict-freeness concept). We have chosen this semantics in order to analyse semantics that are mainly based on conflict-freeness and the range concept. This semantics was also characterized in terms of 2-valued logical models in [40].

Definition 4

(Stage semantics) Given an argumentation framework AF = ⟨ARG, att⟩ and a set E ⊆ ARG, the range of E is defined as E ∪ E⁺, where E⁺ = {B | ∃A ∈ E such that (A, B) ∈ att}. E is a stage extension iff E is a conflict-free set with maximal (with respect to set inclusion) range (more details about this semantics can be found in [38]).
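In the same naive style, and reusing conflict_free and powerset from the sketches above, stage extensions maximize the range instead of admissibility:

    def stage_extensions(arguments, attacks):
        cf = [frozenset(S) for S in powerset(arguments)
              if conflict_free(frozenset(S), attacks)]
        def rng(S):                    # the range of S: S together with S+
            return S | {b for (a, b) in attacks if a in S}
        return [S for S in cf if not any(rng(S) < rng(T) for T in cf)]

    # For A -> B -> C the only stage extension is again {A, C}.
    assert stage_extensions({"A", "B", "C"}, {("A", "B"), ("B", "C")}) \
           == [frozenset({"A", "C"})]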

3 Theoretical Framework

In this section, we present the main mental states of the agent and the representation of his plans by means of arguments, which are called instrumental arguments.

We start by presenting the propositional logical language that will be used. Let L be a propositional language used to represent the mental states of the agent, let ⊢ stand for the inference of classical propositional logic, let ⊤ and ⊥ denote truth and falsum respectively, and let ≡ denote classical equivalence. We use lowercase Roman characters to denote atoms and uppercase Greek characters to denote formulae, such that an atomic proposition p is a formula. If Φ is a formula, then so is ¬Φ. If Φ and Ψ are formulae, then so are Φ ∧ Ψ, Φ ∨ Ψ, and Φ → Ψ. Finally, if Φ is a formula, then so is (Φ).

From L, we can distinguish the following finite sets:

- The set B, which denotes the beliefs of the agent.
- The set G, which denotes the goals of the agent.
- The set RES, which denotes the resources of the agent.
- The set A, which denotes the actions of the agent.

B, G, RES, and A are subsets of literals (a literal is either an atomic formula or the negation of an atomic formula; in the first case we say it is a positive literal, and in the second a negative literal) from the language L. It also holds that B, G, RES, and A are pairwise disjoint.

Example 1

Considering the cleaner world scenario and the example given in the introduction, we next present some possible beliefs, goals, resources, and actions of agent BOB:

  • Beliefs: there is solid dirt in slot (3,4), the trash can is full, and slot (1,2) is clean.

  • Goals: clean a given slot, recharge battery, and clean the whole environment.

  • Resources: battery, oil, and spare part.

  • Actions: go to the next slot, use the spin mop to clean a given slot, and empty the trash can.

The agent is also equipped with a set of plans that allow him to achieve his goals. In order to analyze the possible conflicts that may arise among goals, we express the agent's plans in terms of arguments, which are called instrumental arguments. The use of instrumental arguments for representing plans is not a novelty. Rahwan and Amgoud [28] define this kind of argument, which is structured like a tree whose nodes are planning rules (in this work, a plan rule is a building block used to construct a partial plan, which in turn is a building block used to construct an instrumental argument; it is important to distinguish the plan rules used in this work from the plan rules used in classical planning, since here plan rules make up already defined plans, whereas in classical planning they support the generation of new plans) whose components are a set of desires and a set of resources in the premise and a desire in the conclusion. Analogously, we use a set of goals and resources in the premise (goals in the premise can be seen as sub-goals) and a goal in the conclusion. Additionally, we also include a set of beliefs and a set of actions, because if the agent wants to achieve the goal in the conclusion of the rule he needs certain beliefs to be true and he needs to be able to perform certain actions. For example, for reaching the goal "be fixed" it is necessary that a certain spare part is available; thus, the agent needs the belief "available spare part" to hold true in order to achieve "be fixed". Regarding actions, in order to achieve a goal it is necessary that some actions be performed; for instance, for the agent to clean the whole environment it is necessary that he moves from one slot to the next. Thus, in this work, a plan rule consists of a finite set of beliefs, a finite set of goals, a finite set of actions, and a finite set of resources in its premise.

It is also important to highlight that some of the sets in the premise of a plan rule may be empty. For example, not every goal has sub-goals; if every goal did, there would be an infinite sequence of calls to sub-plans and a top goal would never be reached.

These plan rules can be the result of an automated planner or can be obtained by gathering information from experts of an application domain. For instance, in [41, 42], the authors propose an approach where fragments of human activities are built from a set of observations (beliefs) of the world and a finite set of actions. A fragment of an activity can be compared to an instrumental argument; hence, a fragment of an activity defines the context of a given goal in the same way as a plan rule.

Regarding the resources in the premise of a plan rule, the necessary resource and the necessary amount of it vary for each plan rule. Thus, let RESneed be an infinite set of ground atoms, each of which denotes a given resource along with a given quantity, which is expressed numerically. For example, assume that battery_have ∈ RES, where battery is the name that denotes the resource battery. We may then have in RESneed the ground atoms battery_need(10) and battery_need(50), which denote that 10 units of battery and 50 units of battery are necessary, respectively.

Notice that we use the suffix need for denoting resources in RESneed and the suffix have for denoting resources in RES.

Definition 5

(Plan rule) A plan rule is an expression of the form b₁, ..., bₙ, g₁, ..., gₘ, a₁, ..., aₖ, res₁, ..., resₗ ⇒ g, where g ∈ G, bᵢ ∈ B (for all i ≤ n), gⱼ ∈ G (for all j ≤ m), aᵢ ∈ A (for all i ≤ k), and resᵢ ∈ RESneed (for all i ≤ l).

It expresses that g can be achieved (achievement goals represent a desired state that an agent wants to reach [43]) if the beliefs b₁, ..., bₙ are true, the sub-goals g₁, ..., gₘ can be achieved, the actions a₁, ..., aₖ can be performed, and the resources res₁, ..., resₗ are available. It is important to state that the number of elements in the body of a plan rule is finite. Finally, let PR be the base containing the set of plan rules.
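To make the shape of a plan rule concrete, here is a hypothetical Python encoding; the class, field names, and atoms are ours and are not notation from the paper:

    from dataclasses import dataclass
    from typing import Dict, FrozenSet

    @dataclass
    class PlanRule:
        beliefs: FrozenSet[str]      # beliefs that must hold
        subgoals: FrozenSet[str]     # goals that must be achieved first
        actions: FrozenSet[str]      # actions that must be performed
        resources: Dict[str, int]    # resource name -> necessary amount
        goal: str                    # the goal in the conclusion

    # A hypothetical rule: cleaning slot (3,4) when the dirt there is solid.
    rule = PlanRule(beliefs=frozenset({"solid_dirt(3,4)"}),
                    subgoals=frozenset(),
                    actions=frozenset({"go_to(3,4)", "pick_up(3,4)"}),
                    resources={"battery": 30},
                    goal="clean(3,4)")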

Since there may be goals in the premise of a plan rule, top goals (i.e. goals in the conclusion) may be decomposed into sub-goals, which in turn can be decomposed into sub-sub-goals. Each of these sub-goals also has a plan rule associated with it, which means that there is (at least) one plan rule for each goal.

Example 2

Considering the scenario presented in the introduction section, let us introduce some examples of plan rules. Suppose that the environment is a 5 × 5 square. For the environment to be completely clean, all the slots have to be clean.

Let us present the beliefs, actions, goals, and resources that make up the premises of the plan rules. The agent holds beliefs about the state of the slots, performs actions such as moving between slots, vacuuming, and mopping, and achieves goals at two levels: we use one goal to refer to the environment as a whole and further goals to refer to each slot of the environment. Thus, to achieve the overall goal, all the dirty slots have to be cleaned. Lastly, the resources that have to be available are the battery, the oil, and the spare part. We assume that the agent needs 10 units of battery to go from a slot to the next, 20 units of battery to use the vacuum, and 10 units of battery to use the mop.

(Plan rules (1)-(8) are displayed here in the original; their formal content is described in the paragraph below.)
The first plan rule is associated with the general goal of the agent, which is to have the given environment cleaned. The next three plan rules are related to the type of dirt the robot has to clean, or to the absence of dirt. Plan rule number five expresses what the robot needs in order to be fixed. The next two plan rules express what the robot needs in order to achieve the goals of mopping and of picking up the dirt of a given slot. Plan rule eight expresses what is necessary for the robot to get to the workshop; for the sake of simplicity, it summarizes a set of actions (e.g., turn to the left, walk one slot, etc.).

We can now define the architecture of an intelligent agent, which is an instantiation of the BDI model.

Definition 6

(Agent) An intelligent agent is a tuple ⟨B, PR, G, PREF, RES⟩ where:

  • B is the knowledge base of the agent,

  • PR is the set of plan rules,

  • G is the set of pursuable goals (we do not consider any temporal information about the order in which the goals should be pursued or the time by which the goals have to be achieved),

  • PREF : G → [0, 1] is a function that returns the preference value of a given goal, such that 0 stands for the minimum preference value and 1 for the maximum one,

  • RES is a resource summary, which contains the information about the available amount of every resource of the agent. We assume that RES is normalised so that each resource appears exactly once and that all the resources represented in RES have their corresponding available amount. Let AVAILABLE be a function that returns the currently available amount of a given resource; thus, AVAILABLE(res) denotes the availability of resource res (see the sketch below).
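As announced in the definition, a compact sketch of this tuple could look as follows; class, attribute, and method names are ours, and the concrete preference and resource figures are invented for illustration:

    class Agent:
        """Sketch of the agent tuple of Definition 6; names are ours."""
        def __init__(self, beliefs, plan_rules, goals, pref, resources):
            self.beliefs = set(beliefs)          # knowledge base
            self.plan_rules = list(plan_rules)   # set of plan rules
            self.goals = set(goals)              # pursuable goals
            self.pref = dict(pref)               # goal -> preference value in [0, 1]
            self.resources = dict(resources)     # normalised resource summary

        def available(self, resource):
            """Currently available amount of a given resource."""
            return self.resources.get(resource, 0)

    bob = Agent(beliefs={"solid_dirt(3,4)"},
                plan_rules=[],
                goals={"clean(3,4)", "be_fixed"},
                pref={"clean(3,4)": 0.8, "be_fixed": 0.6},
                resources={"battery": 100})
    assert bob.available("battery") == 100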

Based on his knowledge base and the set of plan rules, the agent can build partial plans, which are the building blocks of instrumental arguments, which in turn represent complete plans. The idea is that each element of a complete plan, namely a belief, an action, a resource, or a goal, is represented by a partial plan, which can be seen as a standard argument whose claim may be a belief, an action, a resource, or a goal.

Definition 7

(Partial plan) A partial plan is a pair [H, h] where:

  • H = ∅ and h ∈ B, or

  • H = ∅ and h ∈ A, or

  • H = ∅ and h ∈ RESneed, or

  • H = {b₁, ..., bₙ, g₁, ..., gₘ, a₁, ..., aₖ, res₁, ..., resₗ} and h = g such that b₁, ..., bₙ, g₁, ..., gₘ, a₁, ..., aₖ, res₁, ..., resₗ ⇒ g is a plan rule in PR.

A partial plan [H, h] is called elementary when H = ∅. Let us call H the support of the partial plan and h its conclusion. As for notation, CONC(pp) and SUPP(pp) denote the conclusion and support of the partial plan pp, respectively.
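In the running Python sketch, a partial plan is a conclusion plus an optional supporting plan rule (reusing the PlanRule class from the earlier sketch; names remain ours):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PartialPlan:
        conclusion: str                        # a belief, action, resource, or goal
        support: Optional["PlanRule"] = None   # the plan rule behind a goal node

        @property
        def elementary(self) -> bool:
            # elementary partial plans (beliefs, actions, resources) have no support
            return self.support is None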

Example 3

Let us express as partial plans some of the plan rules introduced in Example 2: the first partial plan has a goal in its conclusion, and the other two partial plans have an action and a belief in their conclusions, respectively.

Based on partial plans, we can now define instrumental arguments, which correspond to complete plans.

Definition 8

(Instrumental argument, or complete plan) An instrumental argument is a pair [T, g] such that g ∈ G and T is a finite tree such that:

  • The root of the tree is a partial plan [H, g],

  • A node [{b₁, ..., bₙ, g₁, ..., gₘ, a₁, ..., aₖ, res₁, ..., resₗ}, h] has exactly n + m + k + l children, where each bᵢ (i ≤ n), gⱼ (j ≤ m), aᵢ (i ≤ k), and resᵢ (i ≤ l) is the conclusion of exactly one child, each child being a partial plan,

  • The leaves of the tree are elementary partial plans.

Let ARGS be the set of all instrumental arguments that can be built from the knowledge base of the agent. We assume that each goal in G has at least one instrumental argument. We will use the function SUPPORT(A) to return the set of partial plans of A and CLAIM(A) to return the claim g of an instrumental argument A = [T, g]. We also use a function that, given a goal, returns the set of arguments whose claim is that goal.
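Continuing the sketch (and reusing PartialPlan from above), an instrumental argument is a finite tree of partial plans; claim() and partial_plans() play the roles of the CLAIM and SUPPORT functions:

    from dataclasses import dataclass, field
    from typing import Iterator, List

    @dataclass(eq=False)   # identity-based equality, so arguments can sit in sets
    class InstrumentalArgument:
        root: "PartialPlan"                               # its conclusion is the claim
        children: List["InstrumentalArgument"] = field(default_factory=list)

        def claim(self) -> str:
            """CLAIM: the goal this complete plan achieves."""
            return self.root.conclusion

        def partial_plans(self) -> Iterator["PartialPlan"]:
            """SUPPORT: every partial plan in the tree."""
            yield self.root
            for child in self.children:
                yield from child.partial_plans()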

As we said before, instrumental arguments have already been employed for representing plans. We have defined an instrumental argument in the same way as it is defined in [28], i.e., as a tree of partial plans. However, there are two differences between our definition and the definition given in [28], which are mainly related to the elements of the partial plans. First, in [28] the conclusions of elementary partial plans are only beliefs, whereas according to our definition the elementary partial plans may have beliefs, actions, or resources as conclusions. The other difference is related to the elements of the non-elementary partial plans: since a non-elementary partial plan is built from a plan rule, it may have actions and resources in its premise, besides the beliefs and goals that are the elements considered in [28].

Example 4

Figure 2 shows two instrumental arguments. One argument represents a complete plan for one goal, and the other represents a complete plan for a second goal; notice that the latter argument is a sub-argument of the former.

Figure 2: Instrumental arguments for Example 4. Dashed-border squares represent the leaves of the tree.

4 Attacks between Arguments

In this section, we focus on the identification of attacks among instrumental arguments, which will lead to the identification of each form of incompatibility among goals. The kind of attack depends on the form of incompatibility; we have identified one type of attack for each form. These conflicts between arguments are defined over ARGS and are captured by the binary relations att_x ⊆ ARGS × ARGS (for x ∈ {t, r, s}), where each sub-index denotes a form of incompatibility. Thus, att_t denotes the attack for terminal incompatibility, att_r the attack for resource incompatibility, and att_s the attack for superfluity. We write (A, B) ∈ att_x for the attack relation between arguments A and B; in other words, (A, B) ∈ att_x means that argument A attacks argument B.

4.1 Rebuttal between partial-plans

We can define the terminal incompatibility in terms of attacks among instrumental arguments. In this attack, the beliefs, the goals, and the actions of each partial plan of an argument are taken into account. Thus, an instrumental argument attacks another instrumental argument when the conclusion of a partial plan of the first is the negation of the conclusion of a partial plan of the second and both arguments correspond to plans that allow the agent to achieve different goals. It is important to make two remarks: (i) the nature of the two conclusions has to be the same, i.e. both conclusions have to be beliefs, or actions, or goals, and (ii) resources are not taken into account in this conflict. Formally:

Definition 9

(Partial-plans rebuttal - att_t) Let A, B ∈ ARGS be two arguments, and pp ∈ SUPPORT(A) and pp′ ∈ SUPPORT(B) be two partial plans. We say that (A, B) ∈ att_t occurs when:

  • CLAIM(A) ≠ CLAIM(B),

  • there exist pp ∈ SUPPORT(A) and pp′ ∈ SUPPORT(B) such that CONC(pp) = ¬CONC(pp′), where both conclusions are beliefs, or both are actions, or both are goals.

We can observe that att_t is symmetric.
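A simplified check of this attack in the running sketch, with '~' as our ad-hoc negation marker (resource conclusions are assumed not to occur as complementary literals, since this attack ignores them):

    def complement(literal: str) -> str:
        """Negation of a literal; '~' is our ad-hoc negation marker."""
        return literal[1:] if literal.startswith("~") else "~" + literal

    def partial_plans_rebuttal(a, b) -> bool:
        """att_t: a and b support different goals and some pair of their
        partial-plan conclusions are complementary literals."""
        if a.claim() == b.claim():
            return False                   # must be plans for different goals
        conclusions_b = {pp.conclusion for pp in b.partial_plans()}
        return any(complement(pp.conclusion) in conclusions_b
                   for pp in a.partial_plans())

Because complementary pairs are detected regardless of which argument is passed first, the symmetry stated in Proposition 1 below falls out directly.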

Proposition 1

If (A, B) ∈ att_t, then (B, A) ∈ att_t.

Example 5

Let two pursuable goals of robot BOB be given. Figure 3 shows one argument for the first goal and another argument for the second goal. Consider also the argument of Figure 2 for a third goal. We can observe three sub-arguments, one in each of these arguments, each having a goal as its claim. From these arguments, several partial-plans rebuttals can be identified; the conflicting partial plans are highlighted in Figure 3.

Figure 3: Arguments for Example 5. There are partial-plans rebuttals between several pairs of the displayed arguments and their sub-arguments. The double-border squares highlight the conflicting partial plans. Dashed-border squares represent the leaves of the tree.

4.2 Attack for identifying the resources incompatibility

Two arguments are incompatible due to resources when the agent does not have enough resources for performing the plans represented by both arguments. Thus, the attack due to resources between two arguments has to reflect this fact. In order to deal with resource conflicts, we first define a resource-consumption inference that works exclusively for reasoning about resources. This inference considers the availability of a given resource and the amount of it that is necessary. Recall that the function AVAILABLE, introduced in Definition 6, returns the available amount of a given resource; the necessary amount, however, has to be obtained from the two arguments whose resource incompatibility is being evaluated. The following steps are carried out in order to obtain this value.

  1. First of all, we put together all occurrences of the same necessary resource in the two arguments in a formula (let us call it Φ_res). This means that there is a different Φ_res for each different resource that both arguments need. Thus, the formula Φ_res is a conjunction of atoms that represent a resource and the necessary amount of it; such atoms are part of one or more plan rules that make up the two arguments. For example, if argument A needs 70 units of battery and argument B needs 30 units of battery, then Φ_battery = battery_need(70) ∧ battery_need(30).

  2. The second step is related to the signature of Φ_res, i.e. the set of atoms that occur in it. Continuing with the example, the signature of Φ_battery is {battery_need(70), battery_need(30)}.

  3. Finally, we can sum up the necessary amount of the given resource over this signature. Finalizing the example, we have that the total necessary amount of battery is 70 + 30 = 100 units (these steps are replayed in the sketch below).
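The three steps can be replayed in a few lines of the running sketch (the variable names are ours; the numbers repeat the example above):

    phi_battery = [("battery", 70), ("battery", 30)]  # step 1: the conjunction, as atoms
    signature = set(phi_battery)                      # step 2: the signature of the formula
    total = sum(amount for _, amount in signature)    # step 3: sum the amounts
    assert total == 100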

Once we have the available amount and the necessary amount of a given resource, we can define the resource-consumption inference. This type of inference resembles the consumption inferences introduced by other logics of resource consumption and production, such as [44].

Definition 10

(Resource-consumption inference - ⊢_res) Let RES be the set of available resources of the agent and Φ_res be a conjunction of atoms denoting necessary amounts of a resource res. RES satisfies Φ_res (denoted by RES ⊢_res Φ_res) when AVAILABLE(res) is greater than or equal to the sum of the amounts occurring in Φ_res.

The following notation will be used for defining the resource attack: NEED(A) denotes the set of resources necessary for an argument A.

Definition 11

(Resource attack - att_r) Let A, B ∈ ARGS be two instrumental arguments, NEED(A) be the set of resources necessary for argument A, and NEED(B) be the set of resources necessary for argument B. We say that (A, B) ∈ att_r occurs when there is a resource res such that:

  • res occurs both in NEED(A) and in NEED(B),

  • Φ_res is the conjunction of the atoms of both arguments that involve res,

  • RES ⊬_res Φ_res, which means that Φ_res is resource-inconsistent.

We can see that att_r is symmetric.

Proposition 2

If (A, B) ∈ att_r, then (B, A) ∈ att_r.

Example 6

Let us recall the two pursuable goals of agent BOB from Example 5. Figures 2 and 3 show the corresponding arguments. Notice that one argument for the first goal, call it A₁, needs 60 units of battery, a second argument for the same goal, call it A₂, needs 70 units of battery, and the argument B for the second goal needs 30 units of battery. Recall also the available amount of battery in the resource summary, which in this example covers a total need of 90 units but not one of 100 units.

Notice that for achieving the first goal there are two arguments with different needs of battery, namely A₁ and A₂, and for achieving the second goal there is only one argument, namely B. Considering arguments A₁ and B, the total need is 90 units of battery; in this case no resource attack occurs, because the agent has enough resources for performing both plans. In contrast, considering arguments A₂ and B, the total need is 100 units; in this case we can say that (A₂, B) ∈ att_r, because there is not enough energy for performing both plans. Since att_r is symmetric, (B, A₂) ∈ att_r as well, and the same reasoning extends to the sub-arguments of the arguments involved; thus, several resource attack relations are obtained.
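The following sketch mirrors this example in the running code. The availability of 90 units of battery is our assumption, since the exact figure was lost from the source; any value that covers a need of 90 units but not one of 100 behaves the same.

    def satisfies(available: int, needed: int) -> bool:
        """Resource-consumption inference: enough of the resource is available."""
        return available >= needed

    def resource_attack(needs_a: dict, needs_b: dict, available: dict) -> bool:
        """att_r: some resource needed by both plans cannot be supplied."""
        shared = set(needs_a) & set(needs_b)
        return any(not satisfies(available.get(res, 0),
                                 needs_a[res] + needs_b[res])
                   for res in shared)

    available = {"battery": 90}                       # assumed availability
    assert not resource_attack({"battery": 60}, {"battery": 30}, available)
    assert resource_attack({"battery": 70}, {"battery": 30}, available)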

4.3 Superfluous conflict

Superfluity emerges when two plans lead the agent to the same end, in other words, when two arguments have the same claim. Unlike in other contexts, in practical reasoning the fact that two arguments support the same claim is considered unnecessary or, even worse, a waste of time or resources, because it means that the agent performs two plans when only one is necessary for achieving a given goal. Superfluity can be defined in terms of non-elementary partial plans, i.e. in terms of partial plans whose conclusions are goals. Before presenting the definition of superfluous attack, let us analyse the following situations:

  • Consider an argument of Figure 2 and an argument of Figure 3. Both arguments have the same claim but different supports. This means that there is a superfluous attack between them.

  • The above situation is the clearest way of identifying a superfluous attack. The question is: what happens with the sub-arguments of the two attacking arguments? If there is a conflict only between the two top arguments, the agent could still perform, for example, the plan represented by a sub-argument of the first together with the plan represented by the second argument; the agent would then perform two plans that lead to the same end, because both allow him to achieve the same goal. Therefore, there should also be a superfluous attack between such sub-arguments. We can conclude that the sub-arguments of two arguments that attack each other by means of a superfluous attack also attack each other. Nevertheless, we noticed that there is an exception. Suppose that both attacking arguments have a common sub-argument, which is part of the tree of each of them. If all the sub-arguments of the first argument attacked all the sub-arguments of the second, this common sub-argument would attack itself, which leads to an attack between two sub-arguments of the same tree. Thus, we finally conclude that all the sub-arguments of one argument should attack all the sub-arguments of the other, except those that are the same in both trees.

  • We have analysed the relation between the sub-arguments of the two attacking arguments; however, we have not analysed the relation between an argument and the sub-arguments of the other, and vice versa. The case is similar to the previous analysis. Suppose that there is a superfluous attack between two arguments and there are superfluous attacks between their sub-arguments. The agent could still perform the plan represented by one argument together with the plan represented by a sub-argument of the other, which is also a superfluous situation. Thus, we conclude that there should also be a superfluous attack between each argument and the sub-arguments of the other, and vice versa. In this case, we also consider the exception described in the previous item.

Next, we present the definition of superfluous attack taking into consideration the previous analysis.

Definition 12

(Superfluous attack - att_s) Let A, B ∈ ARGS be two arguments. We say that (A, B) ∈ att_s occurs when either of the following cases holds:

  1. Case 1:

    • CLAIM(A) = CLAIM(B),

    • SUPPORT(A) ≠ SUPPORT(B).

  2. Case 2:

    • there exist A′, B′ ∈ ARGS such that A is a sub-argument of A′ and B is a sub-argument of B′;

    • CLAIM(A′) = CLAIM(B′) and SUPPORT(A′) ≠ SUPPORT(B′);

    • A and B are not the same sub-argument in both trees.

  3. Case 3:

    • there exist A′, B′ ∈ ARGS such that CLAIM(A′) = CLAIM(B′),

    • SUPPORT(A′) ≠ SUPPORT(B′),

    • A = A′ and B is a sub-argument of B′, or A is a sub-argument of A′ and B = B′,

    • A and B are not the same in both trees.

When there is a superfluous attack between two arguments, we say that the goals in their conclusions are superfluous conflicting goals. Lastly, the relation att_s is symmetric.
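A simplified sketch of this attack in the running code: sub-arguments are the goal-rooted subtrees, and subtrees shared by both trees are excluded, implementing the exception discussed above (object identity stands in for "the same sub-argument in both trees"):

    def subarguments(a):
        """a together with all of its goal-rooted sub-arguments."""
        yield a
        for child in a.children:
            if not child.root.elementary:    # goal nodes root sub-arguments
                yield from subarguments(child)

    def superfluous_attacks(a, b):
        """All attack pairs (x, y) induced by a superfluous conflict of a and b."""
        if a is b or a.claim() != b.claim():
            return set()
        subs_a, subs_b = list(subarguments(a)), list(subarguments(b))
        shared = {id(x) for x in subs_a} & {id(y) for y in subs_b}
        return {(x, y) for x in subs_a for y in subs_b
                if id(x) not in shared and id(y) not in shared}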

Proposition 3

If (A, B) ∈ att_s, then (B, A) ∈ att_s.

Example 7

Consider the arguments of Figure 3, the arguments of Figure 2, and the argument of Figure 4. Two pairs of these arguments have the same claims; hence, there is a superfluous attack within each pair. According to the definition of superfluous attack, this extends to the sub-arguments of these arguments. Thus, some attacks emerge due to Case 1, others emerge due to Case 2, and others emerge due to Case 3.

Figure 4: Argument for Example 7. Dotted-border squares represent the leaves of the tree.

5 Postulates concerning Attack Relations

In this section, we study a set of properties that are relevant for the attack relations defined in Section 4. These are based on the postulates presented by Gorogiannis and Hunter [45], which describe desirable properties of the different kinds of attacks that may occur between logical arguments (the definitions of the main kinds of attacks can be found in [46]). In this work, we have proposed three kinds of attacks related to instrumental arguments, and these properties are important to guarantee that our approach can be seen as an instance of abstract argumentation and can therefore benefit from the techniques of abstract argumentation frameworks, specifically in order to be able to employ abstract argumentation semantics.

We begin by defining the concept of equivalence, because it is essential for understanding most of the properties. We consider three types of equivalence: (i) the logical one, which takes into account the logical structure of the arguments, (ii) the resource equivalence, which considers only the resources, and (iii) the whole equivalence, which takes into account both the logical structure and the resources of an argument.

Definition 13

(Logical equivalence ≈) Two instrumental arguments A and B are logically equivalent when (i) CLAIM(A) ≡ CLAIM(B) and (ii) the supports SUPPORT(A) and SUPPORT(B) are equivalent. We denote the logical equivalence by A ≈ B.

In this approach, we can compare two arguments from the point of view of their logical structure or from the point of view of the resources necessary for performing the plans represented by the arguments. The next definition states when two arguments are equivalent taking into account their resources.

Definition 14

(Resource equivalence ≈_r) Two instrumental arguments A and B are equivalent by resources if NEED(A) = NEED(B). We denote the resource equivalence by A ≈_r B.

The whole equivalence is defined over the logical equivalence and the resource equivalence definitions.

Definition 15

(Whole equivalence ≈_w) Two instrumental arguments A and B are wholly equivalent if (i) they are logically equivalent and (ii) they are equivalent by resources. We denote the whole equivalence by A ≈_w B.

From now on, A, B, C and their primed versions will stand for arguments. Let us recall that att_t, att_r, and att_s stand for the partial-plans rebuttal, the resource attack, and the superfluous attack, respectively.

The next six propositions use the notion of equivalence to identify attacks that originate from equivalent arguments. Proposition 4 holds for the partial-plans rebuttal and the superfluous attack, in which only the logical part of the argument is evaluated; therefore, it does not hold for the resource attack. Conversely, Proposition 5 holds for the resource attack, and Proposition 6 holds for any of the types of attacks between arguments. Proposition 7 states that if there is a resource attack between an argument A and another argument B, then there is a resource attack between A and all arguments that are resource equivalent with B. In a similar way, Proposition 8 states that if there is a partial-plans rebuttal or superfluous attack between arguments A and B, then there is a partial-plans rebuttal or superfluous attack, respectively, between A and all arguments that are logically equivalent with B. Lastly, Proposition 9 states that if there is a partial-plans (resource or superfluous) attack between arguments A and B, then there is such an attack between A and all arguments that are wholly equivalent with B.

Proposition 4

If A ≈ A′ and (A, B) ∈ att_x, then (A′, B) ∈ att_x (for x ∈ {t, s}).

Proposition 5

If A ≈_r A′ and (A, B) ∈ att_r, then (A′, B) ∈ att_r.

Proposition 6

If A ≈_w A′ and (A, B) ∈ att_x, then (A′, B) ∈ att_x (for x ∈ {t, r, s}).

Proposition 7

If (A, B) ∈ att_r and B ≈_r B′, then (A, B′) ∈ att_r.

Proposition 8

If (A, B) ∈ att_x and B ≈ B′, then (A, B′) ∈ att_x (for x ∈ {t, s}).