
Applying Abstract Argumentation Theory to Cooperative Game Theory

05/15/2019
by Anthony P. Young, et al.

We apply ideas from abstract argumentation theory to study cooperative game theory. Building on Dung's results in his seminal paper, we further the correspondence between Dung's four argumentation semantics and solution concepts in cooperative game theory by showing that complete extensions (respectively, the grounded extension) correspond to Roth's subsolutions (respectively, the supercore). We then investigate the relationship between well-founded argumentation frameworks and convex games, in each of which the semantics (respectively, solution concepts) coincide; we prove that three-player convex games do not in general give rise to well-founded argumentation frameworks.


1 Introduction

Argumentation theory is the branch of artificial intelligence (AI) that is concerned with the rational and transparent resolution of disagreements between arguments (e.g. [17]). Abstract argumentation theory, as articulated in Dung’s seminal paper [6], abstracts away from the contents of the arguments and the nature of their disagreements. The resulting directed graph (digraph) representation of arguments (nodes) and their disagreements (directed edges), called an abstract argumentation framework (AF), is simple yet powerful enough to resolve these disagreements and determine the sets of winning arguments.

Dung demonstrated the “correctness” of abstract argumentation by showing how abstract argumentation “can be used to investigate the logical structure of the solutions to many practical problems” [6, Section 3]. Specifically, he investigated two examples of problems from microeconomics (e.g. [13]): cooperative game theory and matching theory. In each case, Dung showed how an appropriate AF can represent a given cooperative game or a given instance of the stable marriage problem, and that the sets of winning arguments in such AFs correspond to meaningful solutions in both of these domains.

In this paper, we further demonstrate the “correctness” of abstract argumentation theory by investigating its relationship with cooperative game theory. Cooperative game theory (e.g. [5]) is the branch of game theory (e.g. [26]) that studies how normatively rational agents may cooperate in order to possibly earn more payoff than they would as individuals. Cooperative game theory abstracts away from individual agents’ strategies in such games for a simpler and “high level” view of their interactions (the nature of this cooperation is exogenous to the theory, but can be interpreted as groups of agents forming binding contracts; see, e.g., [5, page 7]). Each cooperative game consists of finitely many agents that can cooperate as coalitions, and each coalition earns some payoff as measured by its value. Given certain standard assumptions on the values of coalitions, all agents should cooperate as a single coalition, called the grand coalition. However, this still leaves open the question of how the payoff obtained by the grand coalition (which is already maximised) should be distributed among the individual agents, such that no agent should want to defect from the grand coalition. Historically, the first solution concept for cooperative games that captures this idea is the Von Neumann-Morgenstern (vNM) stable set, where each such set consists of payoff distributions interpreted together as an “acceptable standard of behaviour” [26]. Subsequently, Dung showed that each possible payoff distribution can be interpreted as an argument in an AF. Payoff distributions “disagree” when agents can defect because they can earn strictly more. This argumentative interpretation of cooperative games allowed Dung to demonstrate that the stable extensions of the AF of each cooperative game correspond exactly to the game’s vNM stable sets [6, Theorem 37].

However, just as stable extensions of an AF may not exist, vNM stable sets for cooperative games also may not exist [10, 11]. As a result, alternative solution concepts have been proposed. For example, Dung proposed that sets of payoff distributions that form preferred extensions could serve as an alternative solution concept, because preferred extensions of AFs always exist, and therefore this notion is well-defined for all cooperative games. Other possible alternative solution concepts from cooperative game theory include the core [7], the subsolutions and the supercore [20]. Dung showed that the core corresponds to the set of unattacked arguments of the game’s AF [6, Theorem 38]. This paper’s first contribution is to complete the correspondence between Dung’s four argumentation semantics and the various solution concepts from cooperative game theory by proving that the complete extensions (respectively, the grounded extension) of the AF correspond to the subsolutions (respectively, the supercore) of the cooperative game. These correspondences allow us to characterise when the supercore of a cooperative game is non-empty via the Bondareva-Shapley theorem [2, 23]: exactly when the game is balanced (see Section 3.2).

Shapley investigated a special class of cooperative games called convex games, which exhibit the property that the larger a coalition grows, the more incentive there is for agents to join it; the key property of each such game is that its core is its unique stable set [24]. In abstract argumentation theory, a similar result by Dung states that if an AF is well-founded, then its grounded extension is its unique stable extension [6, Theorem 30]. Given these similar results, we would like to know whether convex games always give rise to well-founded AFs. Our second contribution in this paper is an answer to this question in the negative, through a three-player counter-example.

The rest of this paper is structured as follows. In Section 2, we review abstract argumentation theory and cooperative game theory. In Section 3, we complete the correspondences between Dung’s four argumentation semantics and solution concepts in cooperative games, and study their properties using well-known results from argumentation. In Section 4, we recap convex games and well-founded AFs, and give a counter-example of a three-player convex game that gives rise to a non-well-founded AF. In Section 5, we compare our results with related work in argumentation theory and game theory, and conclude with future work.

2 Background

Notation: Let $A$ and $B$ be sets. $\mathcal{P}(A)$ is the power set of $A$ and $|A|$ is the cardinality of $A$. $\mathbb{N}$ is the set of natural numbers (including $0$) and $\mathbb{R}$ is the set of real numbers. Further, $\mathbb{R}_{>0}$ (respectively, $\mathbb{R}_{\geq 0}$) is the set of positive (respectively, non-negative) real numbers. For $k \in \mathbb{N}$, the $k$-fold Cartesian power of $A$ is $A^k$. If $f : A \to B$ and $g : B \to C$ are appropriate functions, then $g \circ f$ is the composition of $f$ then $g$, and functional composition is denoted with $\circ$. If $S \subseteq A$, then for a function $f : A \to B$, $f(S)$ abbreviates $\{f(s) : s \in S\}$.

2.1 Abstract Argumentation Theory

An (abstract) argumentation framework (AF) is a directed graph (digraph) $\langle A, R \rangle$, where $A$ is the set of arguments and $R \subseteq A \times A$ denotes the attack relation; for $a, b \in A$, $(a, b) \in R$, abbreviated by $a \to b$, denotes that $a$ disagrees with $b$. (When defining our terms in this paper, any words in between brackets may be omitted when using the term; e.g. in this case, the terms “argumentation framework” and “abstract argumentation framework” are interchangeable.) Let $S \subseteq A$ be a set of arguments for the remainder of this subsection. Define $S^+ := \{b \in A : (\exists a \in S)\ a \to b\}$ to be the set of all arguments attacked by $S$. The neutrality function is $n : \mathcal{P}(A) \to \mathcal{P}(A)$, where $n(S) := A \setminus S^+$ denotes the set of arguments not attacked by $S$, i.e. the set of arguments that $S$ is neutral towards. We say $S$ is conflict-free (cf) iff $S \subseteq n(S)$. Similar to $S^+$, let $S^- := \{b \in A : (\exists a \in S)\ b \to a\}$, and for $a \in A$, $a^{\pm} := \{a\}^{\pm}$. The defence function is $d : \mathcal{P}(A) \to \mathcal{P}(A)$, where $a \in d(S)$ iff $a^- \subseteq S^+$ (in [6, Section 2.2] this is called the characteristic function). It can be shown that $d = n \circ n$ [6, Lemma 45]. Let $U := \{a \in A : a^- = \emptyset\}$ denote the set of unattacked arguments. It can be shown that $U = n(A) = d(\emptyset)$. We say $S$ is self-defending (sd) iff $S \subseteq d(S)$. Further, $S$ is admissible iff it is cf and sd.

To determine which sets of arguments are justifiable, we say $S$ is a complete extension iff $S$ is admissible and $d(S) \subseteq S$ (i.e. if one can defend a proposition then one is obliged to accept it as justifiable). Further, $S$ is a preferred extension iff it is a $\subseteq$-maximal complete extension [27, Theorem 7.11], $S$ is a stable extension iff $S = n(S)$, and $S$ is the grounded extension iff it is the $\subseteq$-least complete extension. These four semantics are collectively called the Dung semantics, and each defines sets of winning arguments given the AF.
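To make these definitions concrete, the following is a minimal brute-force sketch (our own illustration, not taken from [6] or this paper) that enumerates the Dung semantics of a small finite AF; all function names and the example AF are our own.

```python
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def dung_semantics(arguments, attacks):
    """Brute-force Dung semantics for a small finite AF.
    `attacks` is a set of pairs (a, b) meaning 'a attacks b'."""
    def attacked_by(S):          # S^+: every argument attacked by some member of S
        return {b for (a, b) in attacks if a in S}
    def neutral(S):              # n(S): arguments S does not attack
        return set(arguments) - attacked_by(S)
    def defence(S):              # d(S) = n(n(S))
        return neutral(neutral(S))
    complete = [set(S) for S in powerset(arguments)
                if set(S) <= neutral(set(S)) and set(S) == defence(set(S))]
    grounded = min(complete, key=len)                        # subset-least complete extension
    preferred = [S for S in complete if not any(S < T for T in complete)]
    stable = [S for S in complete if S == neutral(S)]        # attacks everything outside itself
    return complete, grounded, preferred, stable

# Example AF a -> b -> c: the only complete extension is {a, c},
# which is therefore grounded, preferred and stable.
print(dung_semantics({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```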

2.2 Cooperative Game Theory

We recapitulate the basics of cooperative game theory (see, e.g. [5]). Let $k \in \mathbb{N}$ and $Ag := \{1, 2, \ldots, k\}$ be our set of players or agents (we use “$k$” instead of the more traditional “$n$” for the number of players, to avoid confusion with the neutrality function $n$ in argumentation). We assume $k \geq 2$. A coalition is any subset of $Ag$. The empty coalition is $\emptyset$ and the grand coalition is $Ag$ itself.

Example 1

Consider the set of players $Ag = \{1, 2, 3\}$, so $k = 3$. In our examples, agents 1, 2 and 3 are respectively named Josh, David and Peter. If all three decide to work together, then they form the grand coalition $\{1, 2, 3\}$. If only Josh and David work together and Peter works alone, then the resulting coalitions are, respectively, $\{1, 2\}$ and $\{3\}$.

A valuation function is a function $v : \mathcal{P}(Ag) \to \mathbb{R}$ such that $v(\emptyset) = 0$. The number $v(C)$ can be thought of as the coalition $C$’s payoff as a result of the agents’ coordination of strategies; this payoff is in arbitrary units.

Example 2

(Example 1 continued) If Josh, David and Peter work together and earn some amount of payoff, then $v(\{1, 2, 3\})$ is that amount. If Josh and Peter will lose payoff if they work together, then $v(\{1, 3\}) < 0$.

Given $Ag$ and $v$, with $k = |Ag|$, a (cooperative) ($k$-player) game (in normal form) is the pair $G := (Ag, v)$. The following five properties are standard for $v$. We say $v$ is non-negative iff $v(C) \geq 0$ for all $C \subseteq Ag$; this excludes valuation functions such as the one in Example 2. We say $v$ is monotonic iff for all $C, D \subseteq Ag$, if $C \subseteq D$, then $v(C) \leq v(D)$. We say $v$ is constant-sum iff $v(C) + v(Ag \setminus C) = v(Ag)$ for all $C \subseteq Ag$. We say $v$ is super-additive iff for all $C, D \subseteq Ag$, if $C$ and $D$ are disjoint then $v(C \cup D) \geq v(C) + v(D)$. We say $v$ is inessential iff $v(Ag) = \sum_{i \in Ag} v(\{i\})$; inessential means there is no incentive to cooperate.

It is easy to show that if $v$ is non-negative and super-additive, then $v$ is monotonic, while the converse is not true (e.g. [4, Example 1]). For the rest of the games in this paper, we will assume $v$ is non-negative, super-additive and essential (i.e. not inessential); when combined with super-additivity, essentiality implies that there are two disjoint coalitions $C$ and $D$ such that $v(C \cup D) > v(C) + v(D)$.
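As a minimal illustrative sketch (our own code, not part of the paper), the properties above can be checked mechanically for a small game whose valuation function is stored as a dictionary keyed by frozensets of players; the three-player values below are hypothetical.

```python
from itertools import chain, combinations

def coalitions(players):
    players = sorted(players)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(players, r) for r in range(len(players) + 1))]

def is_non_negative(players, v):
    return all(v[C] >= 0 for C in coalitions(players))

def is_monotonic(players, v):
    cs = coalitions(players)
    return all(v[C] <= v[D] for C in cs for D in cs if C <= D)

def is_super_additive(players, v):
    cs = coalitions(players)
    return all(v[C] + v[D] <= v[C | D] for C in cs for D in cs if not (C & D))

def is_essential(players, v):
    return v[frozenset(players)] != sum(v[frozenset({i})] for i in players)

# Hypothetical 3-player game: singletons earn 0, pairs earn 0.5, the grand coalition earns 1.
players = {1, 2, 3}
v = {C: (0.0 if len(C) <= 1 else (0.5 if len(C) == 2 else 1.0)) for C in coalitions(players)}
print(is_non_negative(players, v), is_super_additive(players, v),
      is_monotonic(players, v), is_essential(players, v))   # True True True True
```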

Example 3

(Example 1 continued) Suppose Josh, David and Peter each earn no payoff if they work as individuals, but any two of them working together earn a certain positive amount of payoff, and all three working together earn a larger amount still. This $v$ is non-negative, super-additive, not constant-sum, and essential.

An outcome of a game $G$ is a pair $(CS, \vec{x})$, where $CS$ is a partition of $Ag$ called a coalition structure, and $\vec{x} \in \mathbb{R}^k$ is a payoff vector that distributes the value of each coalition to the players in that coalition. As we have assumed that $v$ is non-negative, super-additive and essential, the grand coalition has the (strictly) largest payoff among all coalitions by monotonicity. Agents are rational and want to maximise their payoff, and so they should seek to form the grand coalition. Therefore, we restrict our attention to outcomes where $CS = \{Ag\}$.

How should the amount $v(Ag)$ be distributed among the players? In this paper, we consider transferable utility games, which allow $v(Ag)$ to be distributed arbitrarily among the players, e.g. by interpreting $v(Ag)$ as money, which all players should desire. This leads to the following properties of payoff vectors $\vec{x}$. We say $\vec{x}$ is feasible iff $\sum_{i \in Ag} x_i \leq v(Ag)$, efficient iff $\sum_{i \in Ag} x_i = v(Ag)$, and individually rational iff $x_i \geq v(\{i\})$ for all $i \in Ag$. We call a payoff vector an imputation iff it is efficient and individually rational; intuitively, imputations distribute all the money to the agents without waste, such that every agent earns at least as much as when they work alone. Following [6], we denote the set of imputations for a game $G$ with $I(G)$, or just $I$ if it is clear which cooperative game we are referring to.

Example 4

(Example 3 continued) Measuring $v(\{1, 2, 3\})$ in dollars, as an example of transferable utility, consider a feasible payoff vector and an efficient payoff vector for this game. Both vectors are individually rational because each agent receives at least $v(\{i\}) = 0$. The efficient vector is therefore a valid imputation, in which case Josh receives one share of the dollars, while David and Peter receive equal shares of the remainder.

The solution concepts of cooperative games that we will consider are concerned with whether coalitions of agents are incentivised to defect from the grand coalition (see Section 5 for a brief mention of other solution concepts). Given a game $G$, let $C \subseteq Ag$ be a coalition and $\vec{x}, \vec{y} \in I$. We say $\vec{y}$ dominates $\vec{x}$ via $C$, denoted $\vec{y} \succ_C \vec{x}$, iff (1) $y_i > x_i$ for all $i \in C$, and (2) $\sum_{i \in C} y_i \leq v(C)$. Intuitively, given imputation $\vec{x}$, it is possible for the subset $C$ of players to defect from $Ag$ to form their own coalition, where (1) they will each do strictly better because (2) they will earn enough to do so; the resulting payoff is $\vec{y}$. Note that for all $C \subseteq Ag$, $\succ_C$ is a well-defined binary relation on $I$.

Example 5

(Example 4 continued) Consider two imputations: $\vec{x}$, which gives everything to Josh and nothing to David and Peter, and $\vec{y}$, which gives David and Peter a positive amount each. Clearly, $\vec{y} \succ_{\{2, 3\}} \vec{x}$, because David (agent 2) and Peter (agent 3) can defect to $\{2, 3\}$ and share $v(\{2, 3\})$ between them, which is strictly better than both of them getting nothing in the imputation $\vec{x}$.

It is easy to see from the definition of $\succ_C$ (for non-empty $C$) that it is irreflexive, acyclic (and hence asymmetric), and transitive. Further, some important special cases include $\succ_{Ag} = \emptyset$ and $\succ_{\{i\}} = \emptyset$ for each $i \in Ag$, i.e. it does not make sense for the grand coalition to defect to the grand coalition, and individual players cannot defect from $Ag$ (the latter due to individual rationality). Also, $\succ_{\emptyset} = I \times I$, the total binary relation on $I$. We say imputation $\vec{y}$ dominates imputation $\vec{x}$, denoted $\vec{y} \succ \vec{x}$, iff $\vec{y} \succ_C \vec{x}$ for some non-empty coalition $C$. This is a well-defined irreflexive binary relation on $I$. However, it can be shown that this is not generally transitive, complete or acyclic (e.g. [25, Chapter 4]). Therefore, each cooperative game gives rise to an associated directed graph $(I, \succ)$, called an abstract game [20].
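The domination relation can be checked directly from its two conditions. The following sketch (our own naming, with a hypothetical $(0, 1)$-normalised three-player game) tests whether one imputation dominates another via a given coalition, and whether it dominates it via any non-empty proper coalition.

```python
from itertools import chain, combinations

def dominates_via(y, x, C, v):
    """y dominates x via coalition C: every member of C strictly prefers y, and
    C's members can actually obtain their shares of y, i.e. their total is <= v(C)."""
    return all(y[i] > x[i] for i in C) and sum(y[i] for i in C) <= v[frozenset(C)]

def dominates(y, x, players, v):
    """y dominates x iff it does so via some non-empty proper coalition."""
    proper = chain.from_iterable(combinations(sorted(players), r)
                                 for r in range(1, len(players)))
    return any(dominates_via(y, x, set(C), v) for C in proper)

# Hypothetical (0,1)-normalised game: every pair is worth 0.5, the grand coalition 1.
players = {1, 2, 3}
v = {frozenset(C): val for C, val in
     [((), 0), ((1,), 0), ((2,), 0), ((3,), 0),
      ((1, 2), 0.5), ((1, 3), 0.5), ((2, 3), 0.5), ((1, 2, 3), 1.0)]}
x = {1: 1.0, 2: 0.0, 3: 0.0}        # agent 1 takes everything
y = {1: 0.5, 2: 0.25, 3: 0.25}      # agents 2 and 3 each do strictly better
print(dominates(y, x, players, v))   # True: y dominates x via {2, 3}
```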

Let $X \subseteq I$ be a set of imputations. Following [26], we say $X$ is internally stable iff no imputation in $X$ dominates another in $X$. Further, $X$ is externally stable iff every imputation not belonging to $X$ is dominated by an imputation from $X$. A (von Neumann-Morgenstern) stable set is a subset of $I$ that is both internally and externally stable. Taken together, a stable set of imputations contains the distributions of the amount $v(Ag)$ of money to the set of players that are socially acceptable [26].

We now recapitulate a simplification of coalitional games that does not lose generality (e.g. [25, Chapter 4]). Let $G_1 = (Ag, v_1)$ and $G_2 = (Ag, v_2)$ be two games on the same set of players. We say $G_1$ and $G_2$ are strategically equivalent iff there are $\alpha \in \mathbb{R}_{>0}$ and $\beta_1, \ldots, \beta_k \in \mathbb{R}$ such that $v_2(C) = \alpha\, v_1(C) + \sum_{i \in C} \beta_i$ for all $C \subseteq Ag$, and we denote this with $G_1 \sim G_2$; this is an equivalence relation between games. Further, the function with rule $\vec{x} \mapsto \alpha \vec{x} + \vec{\beta}$ is a digraph isomorphism from $(I(G_1), \succ_1)$ to $(I(G_2), \succ_2)$, where $\succ_2$ is the corresponding domination relation on $I(G_2)$. It follows that if $X$ is a stable set of $G_1$, then its image under this function is also a stable set of $G_2$. By choosing $\alpha$ and the $\beta_i$ appropriately, we can transform $G$ to its $(0, 1)$-normalised form, in which $v(\{i\}) = 0$ for every $i \in Ag$ and $v(Ag) = 1$.
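A minimal sketch of the $(0, 1)$-normalisation itself (our own code, assuming an essential game so that the scaling factor below is well defined):

```python
def zero_one_normalise(players, v):
    """Return the (0,1)-normalised valuation: v'(C) = alpha * v(C) + sum_{i in C} beta_i,
    chosen so that singletons map to 0 and the grand coalition maps to 1."""
    grand = frozenset(players)
    singles = {i: v[frozenset({i})] for i in players}
    alpha = 1.0 / (v[grand] - sum(singles.values()))   # requires an essential game
    beta = {i: -alpha * singles[i] for i in players}
    return {C: alpha * v[C] + sum(beta[i] for i in C) for C in v}

# Hypothetical un-normalised 3-player game.
players = {1, 2, 3}
v = {frozenset(C): val for C, val in
     [((), 0), ((1,), 1), ((2,), 1), ((3,), 1),
      ((1, 2), 4), ((1, 3), 4), ((2, 3), 4), ((1, 2, 3), 7)]}
w = zero_one_normalise(players, v)
print(w[frozenset({1})], w[frozenset({1, 2})], w[frozenset({1, 2, 3})])   # 0.0 0.5 1.0
```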

Example 6

(Example 5 continued) Applying the $(0, 1)$-normalisation to our running game rescales the valuation so that $v(\{i\}) = 0$ for each player $i$ and $v(\{1, 2, 3\}) = 1$, with the values of the two-player coalitions rescaled accordingly.

This means $I$ is the standard 2-dimensional simplex, which is the set $\{\vec{x} \in \mathbb{R}_{\geq 0}^3 : x_1 + x_2 + x_3 = 1\}$.

Corollary 1

As a set, $I$ is uncountably infinite.

Proof

Let $\cong$ denote a bijection between sets and $\hookrightarrow$ denote an injective embedding between sets; then we have $[0, 1] \hookrightarrow I \hookrightarrow \mathbb{R}^3 \cong \mathbb{R} \cong [0, 1]$, where the first map is an injective function sending each $t \in [0, 1]$ to a point of the simplex (normalising the result so that it lies on the simplex). By the Cantor-Schröder-Bernstein theorem [9, Theorem 3.2], $|I| = |[0, 1]|$, which is uncountable.

Therefore, the abstract game $(I, \succ)$ has uncountably many nodes. From now on, we assume all games have valuation functions that are non-negative and super-additive, are not necessarily constant-sum, have transferable utility, and are $(0, 1)$-normalised.

2.3 From Cooperative Game Theory to Abstract Argumentation

In this section we recap [6, Section 3.1]. We now understand how an abstract game $(I, \succ)$, which is also an uncountably infinite digraph, arises from a game $G$. Dung interprets $(I, \succ)$ as an abstract AF, where each node is an argument for a given payoff distribution, and each attack denotes the possibility for a subset of agents to defect from the grand coalition. Corollary 1 states that such an AF has uncountably many arguments, but this is not a problem because Dung’s argumentation semantics and their properties hold for AFs of arbitrary cardinalities [1, 6, 27]. Dung then proved that various methods of resolving conflicts in $(I, \succ)$ as an AF correspond to meaningful solution concepts of $G$. The following result is straightforward to show.

Theorem 2.1

[6, Theorem 37] Let $G$ be a game and $(I, \succ)$ be its abstract game. If we view $(I, \succ)$ as an AF, then each of its stable extensions is a stable set of $G$, and each stable set of $G$ is a stable extension of $(I, \succ)$.

As stable extensions may not exist for AFs, and in particular there are games without stable sets [10, 11], Dung proposed that preferred extensions, as they always exist [6, Theorem 11(2)], can serve as an alternative solution concept for a game in cases where stable sets do not exist, because the properties and motivations of preferred extensions also capture the imputations that are rational wealth distributions among the players [6, Section 3.1].

Further, another important solution concept in cooperative game theory is the core [7]. Formally, the core of a cooperative game is the set of imputations $\vec{x} \in I$ satisfying the system of inequalities $\sum_{i \in C} x_i \geq v(C)$ for all coalitions $C \subseteq Ag$. Intuitively, the core is the set of imputations under which every coalition of agents already receives at least as much payoff as it could obtain by defecting to form a new coalition (regardless of how the payoff is shared within that coalition). Therefore, no agent has an incentive to defect. It can be shown that the core is the subset of imputations that are not dominated by any other imputation. It follows that:

Theorem 2.2

[6, Theorem 38] Let $G$ be a game and $(I, \succ)$ be its abstract game. If we view $(I, \succ)$ as an AF, then its set of unattacked arguments corresponds exactly to the core of $G$.

From argumentation theory, we thus conclude the well-known result from cooperative game theory that stable sets, if they exist, always contain the core.
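A core-membership test is a direct transcription of the system of inequalities defining the core above; the sketch below is our own and uses hypothetical values.

```python
from itertools import chain, combinations

def in_core(x, players, v):
    """An imputation x (a dict player -> payoff) is in the core iff every
    coalition C receives at least v(C) in total under x."""
    cs = chain.from_iterable(combinations(sorted(players), r)
                             for r in range(len(players) + 1))
    return all(sum(x[i] for i in C) >= v[frozenset(C)] for C in cs)

# Hypothetical (0,1)-normalised game in which every pair is worth 0.5.
players = {1, 2, 3}
v = {frozenset(C): val for C, val in
     [((), 0), ((1,), 0), ((2,), 0), ((3,), 0),
      ((1, 2), 0.5), ((1, 3), 0.5), ((2, 3), 0.5), ((1, 2, 3), 1.0)]}
print(in_core({1: 1/3, 2: 1/3, 3: 1/3}, players, v))   # True: every pair gets 2/3 >= 0.5
print(in_core({1: 0.8, 2: 0.1, 3: 0.1}, players, v))   # False: pair {2, 3} only gets 0.2
```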

Example 7

(Example 6 continued) In the $(0, 1)$-normalised running game we have $v(\{i\}) = 0$ for each player $i$ and $v(\{1, 2, 3\}) = 1$. Of the eight possible coalitions, only the two-player coalitions give rise to non-trivial constraints, namely the three inequalities $x_i + x_j \geq v(\{i, j\})$ for the three pairs $\{i, j\}$. Therefore, the core consists of all imputations whose components satisfy these three inequalities.

3 Complete Extensions and the Grounded Extension

Given that stable extensions correspond to stable sets, and the set of unattacked arguments corresponds to the core, do the complete extensions (including the preferred extensions) and the grounded extension also correspond to solution concepts in cooperative games? We now show that the answer is yes.

3.1 Complete Extensions Correspond to Subsolutions

As preferred extensions are a subset of the complete extensions, it is natural to ask whether complete extensions more generally correspond to solution concepts in cooperative games. In [20], motivated by the general lack of existence of stable sets [10, 11], Roth considered abstract games arising from cooperative games, in his notation a pair consisting of the set of the game’s outcomes and an abstract domination relation on them; we write this pair as $(X, \succ)$. Let $\eta : \mathcal{P}(X) \to \mathcal{P}(X)$ be the function mapping each set $S \subseteq X$ to the set of outcomes not dominated by any element of $S$ (Roth uses a different symbol for this function in [20]; here we choose one that avoids confusion with the set of unattacked arguments in an AF, defined in Section 2.1). In the present case, $\eta$ is exactly the neutrality function of the AF $(I, \succ)$. Roth then defined the subsolution of such an abstract game as follows.

Definition 1

[20, Section 2] Let $(X, \succ)$ be an abstract game. A subsolution is a set $S \subseteq X$ such that $S \subseteq \eta(S)$ and $S = \eta(\eta(S))$.

Immediately we can see that, by interpreting $(X, \succ)$ as an abstract argumentation framework, subsolutions are precisely the complete extensions.

Theorem 3.1

Let $G$ be a game and $(I, \succ)$ be its abstract game. When $(I, \succ)$ is seen as an argumentation framework, its complete extensions are precisely the subsolutions of the abstract game.

Proof

$S \subseteq I$ is a complete extension of $(I, \succ)$ iff $S \subseteq n(S)$ and $S = d(S)$, where $n$ is the neutrality function and $d$ is the defence function; but as $d = n \circ n$ [6, Lemma 45], this is equivalent to saying that $S$ is a subsolution, by identifying $X$ with $I$, the abstract domination relation with $\succ$, and $\eta$ with $n$.

Roth has shown that every abstract game, and hence every cooperative game, has a subsolution [20, Theorem 1] (see also the abstract lattice-theoretic proof in [19]). The $\subseteq$-maximal subsolutions of $(I, \succ)$ are exactly the preferred extensions of $(I, \succ)$ when seen as an abstract argumentation framework. Roth also showed that stable sets are subsolutions, and hence subsolutions generalise stable sets. This is well known in argumentation theory, as stable extensions are also complete extensions. We can also apply further results from abstract argumentation theory to infer more properties of subsolutions. For instance, the core is contained in all subsolutions.

Corollary 2

Every subsolution is a superset of the core.

Proof

Interpreting $(I, \succ)$ as an argumentation framework, the core corresponds to the set of unattacked arguments, which is $d(\emptyset)$, where $d$ is the defence function. As a subsolution $S$ is a complete extension, we know that $S = d(S)$. As $d$ is $\subseteq$-monotonic and $\emptyset \subseteq S$, we have $d(\emptyset) \subseteq d(S) = S$.

Further, subsolutions have a specific lattice-theoretic structure:

Theorem 3.2

The family of subsolutions of an abstract game forms a complete semilattice that is also directed-complete.

Proof

This follows from [6, Theorem 25(3)] and [27, Theorem 6.30], respectively.

We will give an example of subsolutions in Section 4 (Example 8).

3.2 The Supercore is the Grounded Extension

In [20, Example 5.1], Roth showed that subsolutions are in general not unique for abstract games; the supercore is one “natural” way of selecting a unique subsolution.

Definition 2

[20, Section 3] The supercore of an abstract game is the intersection of all its subsolutions.

Immediately we can conclude the following.

Theorem 3.3

The supercore of an abstract game is its grounded extension when viewed as an argumentation framework.

Proof

This follows from e.g. [6, Theorem 25(5)], or [27, Corollary 6.8].

It follows that the supercore exists and is unique for all abstract games, and hence cooperative games. Further, the supercore is a special case of a subsolution because the grounded extension is complete. Also, the supercore contains the core, because the grounded extension contains all unattacked arguments, by Corollary 2. We will give an example of the supercore in Section 4 (Example 8).

Therefore, for arbitrary cooperative games, if stable sets exist, they can serve as the possible sets of recommended payoff distributions, i.e. the “acceptable standards of behaviour” [26]. If they do not exist, then we may use subsolutions instead. If we desire a unique subsolution, we can use the supercore.

However, Roth noted that the supercore is empty iff the core is empty. This corresponds to the well-known result in argumentation theory that the set of unattacked arguments is empty iff the grounded extension is empty (e.g. [27, Corollary 6.9]). We can therefore completely characterise cooperative games with non-empty supercores using the Bondareva-Shapley theorem [2, 23]. Let $G = (Ag, v)$ be a cooperative game. A function $\delta : \mathcal{P}(Ag) \setminus \{\emptyset\} \to [0, 1]$ is balanced iff for every player $i \in Ag$, $\sum_{C \ni i} \delta(C) = 1$, i.e. for every player the values under $\delta$ of all coalitions containing that player sum to one. We call $G$ balanced iff for every balanced function $\delta$, $\sum_{\emptyset \neq C \subseteq Ag} \delta(C)\, v(C) \leq v(Ag)$. Intuitively, each player allocates a fraction $\delta(C)$ of his or her time to the coalition $C$, and that coalition receives a value proportional to the time its members spend in it.
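For instance (a standard illustration in our own notation, not taken from the paper), for three players the weight function assigning $\tfrac{1}{2}$ to each two-player coalition and $0$ to every other coalition is balanced, since every player belongs to exactly two of the two-player coalitions. Balancedness of the game then requires

\[
  \delta(\{1,2\}) = \delta(\{1,3\}) = \delta(\{2,3\}) = \tfrac{1}{2}
  \quad\Longrightarrow\quad
  \tfrac{1}{2}\bigl(v(\{1,2\}) + v(\{1,3\}) + v(\{2,3\})\bigr) \;\le\; v(\{1,2,3\}),
\]

so in a $(0, 1)$-normalised three-player game the two-player coalition values must sum to at most $2$ for the game to be balanced, and hence (by the Bondareva-Shapley theorem below and Corollary 3) for the core and the supercore to be non-empty.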

Theorem 3.4

(Bondareva-Shapley) A game has a non-empty core iff it is balanced.

It follows that:

Corollary 3

A game has a non-empty supercore iff it is balanced.

Proof

The core of a game is non-empty iff its supercore is non-empty, iff it is balanced, by the Bondareva-Shapley theorem (Theorem 3.4).

From both argumentation theory and abstract games in cooperative game theory, we have the following well-known containment relations between the solution concepts in cooperative games.

Theorem 3.5

Let $G$ be a cooperative game. Its stable sets are $\subseteq$-maximal subsolutions, which are subsolutions, and the supercore is also a subsolution.

Proof

Immediate, because in argumentation theory stable extensions are preferred extensions, which are complete extensions. Further, the grounded extension is also a complete extension.

3.3 Summary

We summarise the correspondences between Dung’s argumentation semantics and the solution concepts of cooperative games in Table 1, including the results presented in this paper.

Abstract Argumentation | Cooperative Game | Reference
Set of arguments | Set of imputations | [6, Section 3.1]
Attack relation | Domination relation | [6, Section 3.1]
Argumentation Framework | Abstract Game | [6, Section 3.1]
Unattacked arguments | The Core | [6, Thm. 38]
The Grounded Extension | The Supercore | Thm. 3.3
Complete Extensions | Subsolutions | Thm. 3.1
Preferred Extensions | $\subseteq$-maximal Subsolutions | [6, Section 3], Thm. 3.1
Stable Extensions | Stable Sets | [6, Thm. 37]
Table 1: Concepts in Abstract Argumentation and the Corresponding Concepts in Cooperative Games

Further, we have shown that the supercore always exists and is unique, and that it is non-empty iff the game is balanced (Corollary 3). Finally, the family of subsolutions forms a complete semilattice that is also directed-complete (Theorem 3.2). These correspondences allow us to apply ideas from argumentation to cooperative game theory, as we do in the next section.

4 Convex Three-Player Cooperative Games and Well-Founded Argumentation Frameworks

Having shown how abstract argumentation can be used in cooperative game theory, we now investigate the relationship between well-founded AFs and convex games, because in both cases the semantics (respectively, solution concepts) coincide.

4.1 Convex Games and Coincidence of Solution Concepts

An important type of cooperative game is that of a convex game [24]. Formally, $G = (Ag, v)$ is convex iff $v(C) + v(D) \leq v(C \cup D) + v(C \cap D)$ for all coalitions $C, D \subseteq Ag$. Clearly, convex games are super-additive. This property is equivalent to the following [5, Proposition 2.8]: for all $i \in Ag$ and all $C \subseteq D \subseteq Ag \setminus \{i\}$, $v(C \cup \{i\}) - v(C) \leq v(D \cup \{i\}) - v(D)$. Intuitively, this means that as a coalition grows in size, there is more incentive for agents not already in the coalition to join. Shapley calls this a “band-wagon” effect [24]. Further, if $G_1 \sim G_2$ and $G_1$ is convex, then $G_2$ is also convex; this can be shown by writing out the definition of convexity for the coalitions in $G_2$ given the definition of strategic equivalence in Section 2.2, and then applying the inclusion-exclusion principle for sets. The key property of convex games that we focus on is:

Theorem 4.1

[24, Theorem 8] If $G$ is convex, then its core is its unique stable set.
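A direct check of the convexity condition defined above, as a sketch with hypothetical values (our own code):

```python
from itertools import chain, combinations

def is_convex(players, v):
    """Supermodularity: v(C) + v(D) <= v(C | D) + v(C & D) for all coalitions C, D."""
    cs = [frozenset(c) for c in
          chain.from_iterable(combinations(sorted(players), r)
                              for r in range(len(players) + 1))]
    return all(v[C] + v[D] <= v[C | D] + v[C & D] for C in cs for D in cs)

players = {1, 2, 3}
# Hypothetical (0,1)-normalised game with every pair worth 0.5: 0.5 + 0.5 <= 1 + 0 holds.
v = {frozenset(C): val for C, val in
     [((), 0), ((1,), 0), ((2,), 0), ((3,), 0),
      ((1, 2), 0.5), ((1, 3), 0.5), ((2, 3), 0.5), ((1, 2, 3), 1.0)]}
print(is_convex(players, v))   # True
v[frozenset({1, 2})] = 0.8     # raising one pair's value breaks convexity: 0.8 + 0.5 > 1 + 0
print(is_convex(players, v))   # False
```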

From our results in Section 3, we can immediately show the following.

Corollary 4

If $G$ is convex, then the set of unattacked arguments of its AF is the only stable extension.

Proof

Immediate from [6, Theorems 37 and 38].

Convex games exhibit a coincidence of the solution concepts so far considered.

Corollary 5

If $G$ is convex, then its core is also its supercore, which is also its unique subsolution.

Proof

If $G$ is convex, then viewing its abstract game $(I, \succ)$ as an AF, the set $U$ of unattacked arguments of this AF is the unique stable extension (Theorem 4.1 and [6, Theorem 37]). But if $U$ is stable, then $U$ is also the grounded extension; otherwise the grounded extension, which contains $U$, would contain an argument attacked by $U$ and hence not be conflict-free. Therefore, $U$ is also the supercore of $G$ (Theorem 3.3). Moreover, every subsolution is a complete extension containing the grounded extension $U$, and since $U$ attacks every argument outside itself, conflict-freeness forces every subsolution to equal $U$; hence $U$ is the unique subsolution of $G$.

4.2 Well-Founded Argumentation Frameworks

Recall now that abstract argumentation theory gives a sufficient condition for all four of Dung’s argumentation semantics to coincide on an AF.

Theorem 4.2

[6, Theorem 30] If an AF is well-founded, i.e. there is no $\omega$-sequence of arguments $a_0, a_1, a_2, \ldots$ such that each $a_{i+1}$ attacks $a_i$ [6, Definition 29] (given a set $X$, an $\omega$-sequence in $X$ is a function $\mathbb{N} \to X$, written $(x_i)_{i \in \mathbb{N}}$), then its grounded extension is the unique stable, complete and preferred extension.

Given that the consequences of Theorem 4.2 and Corollaries 4 and 5 are the same, one may ask whether the AFs arising from convex games are in some sense “stronger” than well-founded AFs. Indeed, the result for convex games concerns the set $U$ of unattacked arguments, while the result for well-founded AFs concerns the grounded extension, a superset of $U$. Could it be that convex games always give rise to well-founded AFs? We answer this question in the following subsection.

4.3 Three-Player Convex Games

To make this problem more tractable, we specialise to three-player convex games. We will assume the following canonical form without loss of generality:

Theorem 4.3

[12, Slide 19] Every essential three-player game that is not necessarily constant-sum is strategically equivalent to a $(0, 1)$-normalised three-player game where $v(C) = 0$ if $|C| \leq 1$, $v(\{1, 2, 3\}) = 1$, and, for the two-player coalitions, $v(\{2, 3\}) = a_1$, $v(\{1, 3\}) = a_2$ and $v(\{1, 2\}) = a_3$ with $a_1, a_2, a_3 \in [0, 1]$.

Convexity constrains the parameters and as follows.

Corollary 6

An essential three-player game that is not necessarily constant-sum and is strategically equivalent to the $(0, 1)$-normalised game defined in Theorem 4.3 is convex iff $a_1 + a_2 \leq 1$, $a_1 + a_3 \leq 1$ and $a_2 + a_3 \leq 1$.

Proof

(Sketch) ($\Rightarrow$) If $G$ is convex and $G \sim G'$, where $G'$ is the $(0, 1)$-normalised game of Theorem 4.3, then $G'$ is also convex (see the remark in Section 4.1). There are $2^3 = 8$ distinct coalitions, over which we can write out the inequality in the definition of convexity to conclude the resulting three inequalities on $a_1$, $a_2$ and $a_3$. ($\Leftarrow$) If $G'$ is a $(0, 1)$-normalised game such that $a_1$, $a_2$ and $a_3$ satisfy the three inequalities, then $G'$ is convex, and any game strategically equivalent to it, in particular $G$, is also convex.
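To illustrate the forward direction with one instance of the convexity inequality (using the parameter names assumed in Theorem 4.3 above): taking the coalitions $\{1, 2\}$ and $\{1, 3\}$,

\[
  v(\{1,2\}) + v(\{1,3\}) \;\le\; v(\{1,2,3\}) + v(\{1\}) \;=\; 1 + 0,
  \qquad\text{i.e.}\qquad a_3 + a_2 \;\le\; 1,
\]

and the other two inequalities arise symmetrically from the remaining pairs of two-player coalitions.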

Example 8

(Example 7 continued) Our game is convex, as its normalised two-player coalition values satisfy the three inequalities of Corollary 6. Therefore, the core of this game, calculated in Example 7 to be the set of imputations satisfying the three two-player inequalities, is also the supercore, the only subsolution, and the only stable set.

The next result shows that three-player convex games do not always give rise to well-founded AFs.

Theorem 4.4

The game in Examples 1 to 8 is an essential, non-constant-sum three-player convex game whose AF is not well-founded.

Proof

Example 1 states that this game is three-player, Example 8 states that this game is convex, and Example 3 states that this game is essential and not constant-sum. Let $(I, \succ)$ denote the abstract game from our examples, now seen as an AF. Consider the following $\omega$-sequence of imputations $(\vec{x}^m)_{m \in \mathbb{N}}$ (unlike in Theorem 4.2, we write the natural-number index of a sequence term as a superscript, so that subscripts can refer to the three payoff components of each $\vec{x}^m$), where

(1)

Clearly, $\vec{x}^m$ is a well-defined imputation for all $m \in \mathbb{N}$, because its three components sum to 1 and each component is non-negative.

We now show that $\vec{x}^{m+1} \succ \vec{x}^m$ for all $m \in \mathbb{N}$, and hence $(I, \succ)$ is not a well-founded AF. Indeed, we only need domination with respect to the coalition $\{1, 2\}$. For any $m \in \mathbb{N}$, we have $\vec{x}^{m+1} \succ_{\{1, 2\}} \vec{x}^m$ because (1) agents 1 and 2 do strictly better, i.e. $x^{m+1}_1 > x^m_1$ and $x^{m+1}_2 > x^m_2$, which in our case is

(2)

which is true for all $m \in \mathbb{N}$. Furthermore, (2) agents 1 and 2 earn enough payoff after defecting such that they can both do strictly better, because

(3)

Equation 3 is equivalent to

(4)

which is true for all $m \in \mathbb{N}$. Therefore, $\vec{x}^{m+1} \succ_{\{1, 2\}} \vec{x}^m$ for all $m \in \mathbb{N}$ in the abstract game of our convex running example, where the argument $\vec{x}^{m+1}$ attacks the argument $\vec{x}^m$. Therefore, this abstract game, seen as an AF, is not well-founded.
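The precise sequence and inequalities labelled (1)-(4) depend on the normalised pairwise values of the running example, which are not reproduced here. Purely as an illustration of the same construction, here is one such infinite domination chain in a hypothetical $(0, 1)$-normalised convex three-player game with $v(\{1, 2\}) = \tfrac{1}{2}$ and $v(\{1, 3\}) = v(\{2, 3\}) = 0$ (assumed values, not the paper's):

\[
  \vec{x}^m \;=\; \Bigl(\tfrac{1}{4} - \tfrac{1}{2^{m+3}},\;
                        \tfrac{1}{4} - \tfrac{1}{2^{m+3}},\;
                        \tfrac{1}{2} + \tfrac{1}{2^{m+2}}\Bigr),
  \qquad m \in \mathbb{N}.
\]

Each $\vec{x}^m$ is an imputation (non-negative components summing to $1$), and $\vec{x}^{m+1} \succ_{\{1,2\}} \vec{x}^m$ because

\[
  x^{m+1}_1 \;=\; \tfrac{1}{4} - \tfrac{1}{2^{m+4}} \;>\; \tfrac{1}{4} - \tfrac{1}{2^{m+3}} \;=\; x^m_1
  \quad\text{(and similarly for agent 2)},
  \qquad
  x^{m+1}_1 + x^{m+1}_2 \;=\; \tfrac{1}{2} - \tfrac{1}{2^{m+3}} \;\le\; v(\{1,2\}),
\]

so each term of the sequence attacks its predecessor and the induced AF is not well-founded.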

Therefore, not all convex games give rise to well-founded AFs. This result clarifies that the coincidence of solution concepts due to convexity and the coincidence of argumentation semantics due to well-foundedness are of different natures.

5 Discussion and Related Work

In this paper, we have proved that for the uncountably infinite AFs that arise from cooperative games, the complete extensions (respectively, the grounded extension) correspond to that game’s subsolutions (respectively, supercore), by Theorem 3.1 (respectively, Theorem 3.3). This allows for more results from argumentation theory to be applied to cooperative game theory, for example, the lattice-theoretic structure of the complete extensions (Theorem 3.2) or when the supercore is empty (Corollary 3). Both convex games and well-founded AFs result in a coincidence of, respectively, solution concepts and argumentation semantics, but convex games do not necessarily give rise to well-founded AFs (Theorem 4.4). To the best of our knowledge, these contributions are original. (The authors have checked all papers citing [20], which is the first paper defining the supercore and subsolutions from the abstract game of a cooperative game, and found no papers on argumentation theory among them.) These efforts strengthen the “correctness” of abstract argumentation by demonstrating its ability to reason about problems of societal or strategic concern.

Our first contribution completes the correspondence between Dung’s original four argumentation semantics and solution concepts in cooperative games. In future work, we can use this correspondence to investigate the relationship between further abstract argumentation semantics not mentioned in [6] (e.g. those mentioned in [1]) and solution concepts in cooperative game theory. We can also investigate continuum AFs more generally, where the set of arguments has the cardinality of the continuum, as cooperative game theory provides a natural motivation for them.

Our second contribution can be developed further. Intuitively, convex games do not have to give rise to well-founded AFs due to the continuum nature of the simplex and the order-theoretic underpinnings of domination, and hence one may divide extra payoffs into smaller and smaller units. But then one might justifiably ask whether it still makes sense for the first two agents to be sensitive to infinitesimal improvements in their payoffs when the index $m$ is very large, and thus still maintain their desire to defect. We can attempt to answer this in future work.

There has been much work investigating the relationship between argumentation and game theory more generally. For example, Rahwan and Larson have used argumentation theory to analyse non-cooperative games [16] and study mechanism design [15]. Matt and Toni have applied von Neumann’s minimax theorem [26] to measure argument strength [14]. Riveret et al. investigate a dialogical setting of argumentation by representing the dialogue in game-theoretic terms, allowing them to determine optimal strategies for the participants [18]. Roth et al. articulate a prescriptive model of strategic reasoning in dialogue by using concepts from game theory [21]. Our paper is distinct from these as it applies ideas from argumentation theory to investigate cooperative games.

This paper builds on results from Dung’s seminal paper [6, Section 3.1]. There have been works applying ideas from cooperative game theory to non-monotonic reasoning and argumentation, specifically the Shapley value [22]. For example, Hunter and Konieczny have used the Shapley value to measure inconsistency of a knowledge base [8]. Bonzon et al. have used the Shapley value to measure the relevance of arguments in the context of multiagent debate [3]. The Shapley value, as a solution concept, is concerned with measuring the payoff to each agent given their marginal contribution in each coalition, averaged over all coalitions; we do not consider it here in this paper as we are concerned with solution concepts to do with defection rather than marginal contributions. Future work can build on the correspondences in this paper by considering which further solution concepts from cooperative games may be relevant for argumentation.

References

  • [1] Ringo Baumann and Christof Spanring. Infinite Argumentation Frameworks. In Advances in Knowledge Representation, Logic Programming, and Abstract Argumentation, pages 281–295. Springer, 2015.
  • [2] Olga N. Bondareva. Some Applications of Linear Programming Methods to the Theory of Cooperative Games. Problemy Kibernetiki, 10:119–139, 1963.
  • [3] Elise Bonzon, Nicolas Maudet, and Stefano Moretti. Coalitional games for abstract argumentation. In COMMA, pages 161–172, 2014.
  • [4] Jean-François Caulier. A note on the monotonicity and superadditivity of TU cooperative games. 2009.
  • [5] Georgios Chalkiadakis, Edith Elkind, and Michael Wooldridge. Computational Aspects of Cooperative Game Theory. Synthesis Lectures on Artificial Intelligence and Machine Learning, 5(6):1–168, 2011.
  • [6] Phan Minh Dung. On the Acceptability of Arguments and its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and n-Person Games. Artificial Intelligence, 77:321–357, 1995.
  • [7] Donald B. Gillies. Solutions to General Non-Zero-Sum Games. Contributions to the Theory of Games, 4(40):47–85, 1959.
  • [8] Anthony Hunter and Sébastien Konieczny. Shapley Inconsistency Values. KR, 6:249–259, 2006.
  • [9] Thomas Jech. Set Theory, the Third Millennium Edition, Revised and Expanded. Springer Monographs in Mathematics. Springer, Berlin, 2003.
  • [10] William F. Lucas. A Game with No Solution. Technical report, RAND Corporation, Santa Monica, California, 1967.
  • [11] William F. Lucas. The Proof that a Game may not have a Solution. Transactions of the American Mathematical Society, 137:219–229, 1969.
  • [12] John C. S. Lui. CSC6480: Advanced Topics in Network Analysis lecture 10 - Cooperative Games (2). www.cse.cuhk.edu.hk/~cslui/CSC6480/cooperative_game2.pdf, 2008.
  • [13] Andreu Mas-Colell, Michael D. Whinston, and Jerry R. Green. Microeconomic Theory, volume 1. Oxford University Press, New York, 1995.
  • [14] Paul-Amaury Matt and Francesca Toni. A game-theoretic measure of argument strength for abstract argumentation. In European Workshop on Logics in Artificial Intelligence, pages 285–297. Springer, 2008.
  • [15] Iyad Rahwan and Kate Larson. Mechanism Design for Abstract Argumentation. In Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems-Volume 2, pages 1031–1038. International Foundation for Autonomous Agents and Multiagent Systems, 2008.
  • [16] Iyad Rahwan and Kate Larson. Argumentation and Game theory. In Argumentation in Artificial Intelligence, pages 321–339. Springer, 2009.
  • [17] Iyad Rahwan and Guillermo R. Simari. Argumentation in Artificial Intelligence, volume 47. Springer, 2009.
  • [18] Régis Riveret, Henry Prakken, Antonino Rotolo, and Giovanni Sartor. Heuristics in argumentation: a game-theoretical investigation. In Proceedings of the 2nd international conference on computational models of argument, pages 324–335. IOS Press, 2008.
  • [19] Alvin E. Roth. A Lattice Fixed-Point Theorem with Constraints. Bulletin of the American Mathematical Society, 81(1):136–138, 1975.
  • [20] Alvin E. Roth. Subsolutions and the Supercore of Cooperative Games. Mathematics of Operations Research, 1(1):43–49, 1976.
  • [21] Bram Roth, Régis Riveret, Antonino Rotolo, and Guido Governatori. Strategic argumentation: a game theoretical investigation. In Proceedings of the 11th international conference on Artificial intelligence and law, pages 81–90. ACM, 2007.
  • [22] Lloyd S. Shapley. A Value for n-Person Games. Contributions to the Theory of Games, 2(28):307–317, 1953.
  • [23] Lloyd S. Shapley. On Balanced Sets and Cores. Naval Research Logistics Quarterly, 14(4):453–460, 1967.
  • [24] Lloyd S. Shapley. Cores of Convex Games. International Journal of Game Theory, 1(1):11–26, 1971.
  • [25] Lyn Carey Thomas. Games, Theory and Applications. Courier Corporation, 2012.
  • [26] John Von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, 1944.
  • [27] Anthony P. Young. Notes on Abstract Argumentation Theory. ArXiv preprint arXiv:1806.07709, 2018.