Local Aggregation in Preference Games

02/04/2020 · by Angelo Fanelli et al. · National Technical University of Athens

In this work we introduce a new model of decision-making by agents in a social network. Agents have innate preferences over the strategies but, because of social interactions, the decisions of the agents are affected not only by their innate preferences but also by the decisions taken by their social neighbors. We assume that the strategies of the agents are embedded in an approximate metric space. Furthermore, departing from the previous literature, we assume that, due to the lack of information, each agent locally represents the trend of the network through an aggregate value, which can be interpreted as the output of an aggregation function. We answer some fundamental questions related to the existence and efficiency of pure Nash equilibria.


1 Introduction

A significant volume of recent research investigates opinion formation and preference aggregation in social networks. The goal is to obtain a deeper understanding of how social influence affects decision making, and to which extent the tension between innate preferences and social influence could cause instability or inefficiency. The prevalent models are opinion formation games [BKO15] and discrete preference games [CKO18], whose principles go back to the classical works of [Deg74] and [FJ90] on opinion formation. Though seemingly simple, both models are natural and expressive, have apparent similarities and some crucial differences.

In a nutshell, both opinion formation and discrete preference games assume a finite population of agents and an underlying strategy space, which is the interval [0, 1] for opinion formation games and a finite set with at least two strategies for discrete preference games (this is one of the key differences between the two models). Each agent has a fixed preferred strategy and adopts a strategy in the strategy space, which represents a compromise between her preferred strategy and the strategies of her social neighbors. Both models need to formally define (i) how to quantify preferences over strategies; and (ii) how much each agent is influenced by her social neighbors.

For the former, both models assume a distance function which quantifies the dissimilarity between strategies. In opinion formation, this is the squared Euclidean distance (motivated by repeated averaging in [Deg74, FJ90]), while in discrete preference games it is any metric distance function (motivated by applications more general than opinion formation, as explained in [CKO18]).

Both models assume that the influence exercised by one agent's strategy on the strategy selection of another is quantified by an influence weight. Previous work often distinguishes between the symmetric case, where the weights in the two directions coincide, and the asymmetric case, where they may differ. Moreover, the confidence of each agent in her preferred strategy is quantified by her self-confidence. Another (less important) difference between the opinion formation and discrete preference models is that most previous work on the latter (see e.g., [CKO18, ACF16]) usually assumes the same self-confidence for all agents and that each influence weight is either 0 or 1.

The discussion above is elegantly summarized by the cost function of an agent in a strategy profile:

Naturally, each agent selects her strategy so as to minimize this cost.

In the last few years, there has been considerable interest in equilibrium properties (e.g., existence, computational complexity, convergence, price of anarchy and stability) of opinion formation and discrete preference games (and several variants and generalizations of them). The main message is that opinion formation games are well-behaved due to their single-dimensional continuous strategy space, while discrete preference games exhibit more complex behavior. Specifically, opinion formation games admit a unique equilibrium which can be computed efficiently and has small price of anarchy in the symmetric case (see e.g., [BKO15, BGM13, GS14]). In discrete preference games, an equilibrium with low price of stability exists, but there might exist multiple equilibrium points, some with unbounded price of anarchy [CKO18]; moreover, finding an equilibrium can be computationally hard [LBN19], while strategies at equilibrium may be significantly different from the agents’ preferred strategies.

Motivation. An important assumption underlying virtually all results above is that each agent is fully aware of the strategies of her social neighbors and responds optimally to them. This assumption is crucial for establishing uniqueness and convergence to equilibrium in opinion formation (see e.g., [GS14]) and forms the basis for the price of anarchy and stability results (see e.g., [BKO15, BGM13, BFM18, CKO18]). However, having access to the strategies of all one’s neighbors is questionable in real life and in modern large online social networks, just because getting to know these strategies requires a large amount of information exchange. For (a somewhat extreme) example, imagine that one does not crystallize her opinion on a topic before she has extensively discussed it with all her acquaintances!

Therefore, the assumption that agents form their strategies based on explicit (and complete) knowledge of the strategies of all their neighbors has been criticized, especially in the context of opinion formation games and their best response dynamics. Researchers have studied opinion formation with restricted information regimes [UL04] and opinion dynamics where agents learn the strategy of a single random neighbor in each round [FPS16, FKKS18]. The goal is to understand the extent to which limited information about the strategies of one’s social neighbors affects the properties of equilibria in opinion formation.

In this work, we take a different and, to the best of our knowledge, novel approach towards addressing the same research question. Instead of focusing on best response dynamics and assuming that the strategies of a small random set of neighbors are available in each round, we assume that each agent has access to a single representative strategy, which can be regarded as the output of an aggregation function that condenses the strategies of her neighbors into a single one. This aggregate strategy could be provided by the network (in the case of online social networks, e.g., Facebook or Twitter could provide a summary of the main posts of one's friends on a specific topic) or could be estimated by a poll. To further motivate our approach, we may think of traditional political voting, where voters have fixed innate preferences over the candidates, but polls (which are a form of preference aggregation) might cause the voters to change their vote (see also [EFHS19], where the opinion formation model involves an estimation of the average opinion of all agents).

Preference Games with Local Aggregation: Our Model. We introduce a very general game-theoretic model of decision-making by agents in a social network, which we refer to as preference games with local aggregation (or simply preference games, for brevity). Similarly to opinion formation and discrete preference games, each agent selects a strategy trying to be as faithful as possible to her preferred strategy (which is immune to the choices made by the other agents) and, at the same time, to blend in with the environment. But in our model, the environment is summarized by the aggregate strategy of her social neighbors.

The basic formal setting of preference games with local aggregation is inspired by that of discrete preference games (but there are some crucial differences, as we explain below). As in previous work, we consider a strategy space which is the same for all agents, but in preference games with local aggregation, it may be either discrete or continuous. Another important difference from previous work is that we allow preferred strategies not to belong to the strategy space. This allows the agents to have more elaborate preferred strategies and accounts for preferences that possibly cannot be fully disclosed in public. So, we assume a strategy "universe" containing the strategy space (e.g., the strategy space may be discrete, while the universe may be its convex hull). We say that the game is restricted if no agent's preferred strategy belongs to the strategy space, unrestricted if every agent's preferred strategy belongs to the strategy space, and semi-restricted otherwise.

As in discrete preference games, we assume a distance function which quantifies the dissimilarity between strategies. But, instead of an exact metric, we let it be an approximate metric, so as to also allow for other natural dissimilarity functions, such as the squared distance. Moreover, we assume the same self-confidence level for all agents, and that the influence weights are not necessarily symmetric and are normalized so that the total influence exercised on any agent sums up to 1 (in [CKO18], influence weights are symmetric and not normalized).

The key new ingredient is an aggregation function which, for each agent, maps the strategies of all other agents and the corresponding influence weights to an aggregate strategy that "summarizes" them. Typical aggregation functions are the mean (the best-response function in opinion formation) and the median (the best-response function in discrete preference games). However, most of our results hold for general aggregation functions that satisfy unanimity (i.e., if all agents adopt the same strategy, the aggregate is that strategy) and consistency. We refer to aggregation functions that satisfy unanimity and consistency as feasible.
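As an illustration, a weighted median on a one-dimensional strategy space is one concrete feasible aggregation function; the sketch below is ours (names and signature are not from the paper) and shows that unanimity holds by construction:

```python
# Hypothetical sketch of a feasible aggregation function on a
# one-dimensional strategy space: the weighted median.

def weighted_median(strategies, weights):
    """Return a weighted median of `strategies`: a value m such that
    the cumulative weight of the points up to m reaches at least half
    of the total weight."""
    pairs = sorted(zip(strategies, weights))
    total = sum(weights)
    acc = 0.0
    for s, w in pairs:
        acc += w
        if acc >= total / 2:
            return s
    return pairs[-1][0]

# Unanimity: if all strategies coincide, the aggregate is that strategy.
assert weighted_median([3, 3, 3], [0.2, 0.5, 0.3]) == 3
```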

In preference games with local aggregation, the cost of agent in a strategy profile is:

(1)

Namely, the cost of agent is a convex combination of her innate cost and her disagreement cost. As usual, each agent selects her strategy so as to minimize her cost. The crucial difference from the opinion formation and discrete preference models is that the strategy selection of an agent depends solely on the aggregate, which is a single strategy, and not on the entire strategy vector. An interesting direction for future research is to allow the aggregate strategy to belong to the larger strategy universe.
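As a minimal sketch of the cost in equation (1), assuming `alpha` denotes the common self-confidence level and `d` the distance function (symbol names are our own, for illustration only):

```python
# Minimal sketch of the cost function (1). `alpha` stands for the
# self-confidence level and `d` for the distance; both names are ours.

def agent_cost(x_i, p_i, aggregate, alpha, d):
    """Convex combination of the innate cost d(x_i, p_i) and the
    disagreement cost d(x_i, aggregate)."""
    return alpha * d(x_i, p_i) + (1 - alpha) * d(x_i, aggregate)

d = lambda a, b: abs(a - b)  # a one-dimensional exact metric
# With alpha = 1 (stubborn agents) the aggregate is irrelevant:
assert agent_cost(0.4, 0.4, 0.9, 1.0, d) == 0.0
# With alpha = 0 (compliant agents) only disagreement matters:
assert agent_cost(0.4, 0.0, 0.4, 0.0, d) == 0.0
```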

Contribution. The conceptual contribution of our work is the new model of preference games with local aggregation. The model is mostly inspired by discrete preference games, but is quite general and allows for a systematic study and a new perspective on the fundamental question of how much limited information about the preferences of one’s social neighbors affects the equilibrium properties in opinion formation and preference aggregation in social networks. On the technical side, we provide a comprehensive set of general results on the existence and the structure of equilibria and on the price of anarchy of preference games with local aggregation. Our results hold for any approximate metric distance function and any feasible aggregation function . A general message of our results is that low self-confidence levels (i.e., ) help with the existence of equilibria and simplify their structure, while high self-confidence levels (i.e., ) help with the price of anarchy. Moreover, our price of anarchy analysis for implies novel bounds on the price of anarchy of certain variants of opinion formation and discrete preference games.

Specifically, in Section 3 we show that if , consensus (i.e., a state where all agents adopt the same strategy) is a pure Nash equilibrium of preference games with local aggregation (Theorem 1); if , we show that the state in which each agent adopts her preferred strategy is a pure Nash equilibrium, and moreover such an equilibrium is unique when (Theorem 2). The above two results hold under the more stringent assumption that the distance function is an exact metric. Existence of pure Nash equilibria for restricted games with requires more assumptions (e.g., on the strategy space, the aggregation function, or the distance function) and is an interesting direction for further research.

In Section 4 we consider the price of anarchy with respect to the total cost and the maximum cost of the agents. We observe that if , the cost of every agent at equilibrium is and therefore the price of anarchy is . On the other hand, if , the price of anarchy of restricted and unrestricted games can be unbounded for both objectives (Propositions 3 and 4). So, if the self-confidence level is low, the price of anarchy of preference games with local aggregation behaves similarly to that of discrete preference games [CKO18] and of opinion formation games with binary strategies [FGV16]. Interestingly, if , we show that the price of anarchy is bounded from above by , for the total cost (Theorem 3), and by , for the maximum cost (Theorem 4). We believe that all three parameters used in these bounds are intuitive and of interest; all of them are formally defined in Section 2. The first is the maximum social impact of an agent, which quantifies the asymmetry between the influence received and exercised by any agent in the social network. The second is the so-called maximum boundary of any agent; the boundary of an agent quantifies how much closer a strategy can be to her preferred strategy compared with an equilibrium strategy of hers. We believe that the boundary is a natural parameter and can be exploited in the proof of price of anarchy/stability bounds for other generalizations of discrete preference games.

The most interesting parameter is the stretch, which directly quantifies how much we lose, in terms of equilibrium efficiency, because we only have access to an aggregate of the strategies in one's social neighborhood. To better demonstrate this point, let us assume that the aggregation function is the (weighted) median. Then, in a standard discrete preference game, the best response of an agent is the weighted median of her preferred strategy and her social neighbors' strategies. In a preference game with local aggregation, on the other hand, the agent receives the weighted median of her neighbors' strategies and computes her best response as a weighted median of it and her preferred strategy. Communication efficient as it is, the latter may lead to more inefficient equilibria, since a small change in the neighbors' strategies might cause a significant change in the aggregate. The extent to which this can happen is quantified by the stretch.

In Section 5, we provide upper bounds on and ( is always upper bounded by ), which imply that the price of anarchy of preference games with local aggregation is always bounded if . Interestingly, our bounds on depend only on , while our bounds on may depend on the metric space, the influence weights and the aggregation function.

In Section 6, we introduce a specific preference game, which is motivated by -approval voting and generalizes opinion formation with binary strategies [FGV16]. The strategy space consists of all binary strings of length with ones, preferred strategies lie in the convex hull of , local aggregation is the weighted median, and the distance function is . Since is a -approximate metric in (but it is equivalent to the Hamming distance when restricted to ), results from Section 4 carry over to this special case. The main result of this section is an upper bound on (Theorem 6), which implies an interesting upper bound on the price of anarchy for all . An intriguing direction for further research is to determine under which assumptions the -approval voting game admits pure Nash equilibria for .
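To illustrate the structure of this strategy space, the following sketch (our own illustrative names, not from the paper) enumerates the binary strings with exactly k ones and checks that, on them, the squared distance coincides with the Hamming distance:

```python
from itertools import combinations

# Illustrative sketch of the k-approval strategy space: all binary
# strings of length m with exactly k ones.

def k_approval_space(m, k):
    space = []
    for ones in combinations(range(m), k):
        space.append(tuple(1 if i in ones else 0 for i in range(m)))
    return space

def sq_dist(x, y):
    """Squared Euclidean distance between two binary strings."""
    return sum((a - b) ** 2 for a, b in zip(x, y))

space = k_approval_space(4, 2)
assert len(space) == 6  # C(4, 2) strategies
# On binary strings, the squared distance equals the Hamming distance:
x, y = (1, 1, 0, 0), (0, 1, 1, 0)
assert sq_dist(x, y) == sum(a != b for a, b in zip(x, y)) == 2
```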

Other Related Work. Discrete preference games were introduced in [CKO18], where it was shown that they are potential games, that the price of anarchy can be unbounded and that the price of stability is at most . Moreover, the properties of discrete preference games for richer metrics, such as tree metrics, were studied. Recently, [LBN19] proved that computing a pure Nash equilibrium of discrete preference games is PLS-complete, even in a very restricted setting. Discrete preference games were generalized in [ACF16], and consistency between preferred strategies and equilibrium strategies was considered in [ACF17].

Our -approval voting model bears some resemblance to opinion formation with binary preferences [FGV16], where the price of anarchy was shown to be unbounded for . Our analysis complements theirs by showing that the price of anarchy for is at most .

Aggregating preferences under some metric function has received attention in algorithms (see e.g., [ACN08]) and in social choice (see e.g., [ABE18]). Ours is the first work where aggregation under metric dissimilarity functions is used in modelling decision-making by agents in a social network.

2 Notation, Definitions and Preliminaries

Most of the notation and the model definition are introduced in Section 1. Next, we introduce some additional notation and discuss some important preliminaries.

We recall that , with , is the set of agents, is the strategy universe, and , with , is the strategy space of the agents. is the set of the agents' preferred strategies, and is the set of agents with preferred strategy in . So, denotes an unrestricted game, while denotes a restricted one. We recall that is the amount of influence agent imposes on agent . Unless stated otherwise, may be different from . We always assume that and that . We recall that is the confidence level of the agents. We say that the agents are stubborn, if , and compliant, otherwise.

We refer to any as a state of the game. For any state and any strategy , we let be the new state obtained from by replacing its -component with and keeping the remaining components unchanged. If , we say that is a consensus on .

We say that is a -approximate metric, for some , if it satisfies (i) , for all ; (ii) symmetry, i.e., , for all ; and (iii) (approximate) triangle inequality, i.e., , for all . We say that is an exact metric (or simply metric) if . We say that is uniform if it is an exact metric such that for all . We assume that is a -approximate metric, for some , unless stated otherwise.
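As a sanity check of this definition, one can compute, for a finite set of points, the smallest factor for which the approximate triangle inequality holds. The helper below is our own (not from the paper); it illustrates that the squared distance on the line, which violates the exact triangle inequality, is a 2-approximate metric:

```python
import itertools

# Our own sanity-check helper: the smallest rho for which d satisfies
# the rho-approximate triangle inequality
#     d(x, z) <= rho * (d(x, y) + d(y, z))
# on a finite set of points, assuming d is symmetric and d(x, x) = 0.

def approx_metric_factor(points, d):
    rho = 1.0
    for x, y, z in itertools.product(points, repeat=3):
        denom = d(x, y) + d(y, z)
        if denom > 0:
            rho = max(rho, d(x, z) / denom)
    return rho

sq = lambda a, b: (a - b) ** 2      # squared distance: 2-approximate
assert approx_metric_factor([0.0, 0.5, 1.0], sq) == 2.0
abs_d = lambda a, b: abs(a - b)     # an exact metric: factor 1
assert approx_metric_factor([0.0, 0.5, 1.0], abs_d) == 1.0
```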

Aggregation Functions. We consider aggregation functions that satisfy (i) unanimity, i.e., if is a consensus on , then ; and (ii) consistency, i.e., for all , with , . We say that such aggregation functions are feasible. In this work we focus on feasible aggregation functions. When is an exact metric on , notable examples of feasible aggregation functions are the Fréchet mean and the Fréchet median.

Given any state , the Fréchet mean of agent in , denoted by , is any strategy in that minimizes the weighted sum of its squared distances to the strategies in . Formally,

(2)

The Fréchet median of agent in , denoted by , is any strategy that minimizes the weighted sum of its distances to the strategies in  :

(3)
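On a finite strategy space, both the Fréchet mean of (2) and the Fréchet median of (3) can be computed by direct minimization. The sketch below uses our own illustrative names and assumes a finite candidate set:

```python
# Brute-force sketch of the Fréchet mean (power=2) and the Fréchet
# median (power=1) over a finite candidate set; names are ours.

def frechet_point(candidates, strategies, weights, d, power):
    """Return a candidate minimizing the weighted sum of distances to
    the given strategies, each raised to `power`."""
    def objective(z):
        return sum(w * d(z, s) ** power
                   for s, w in zip(strategies, weights))
    return min(candidates, key=objective)

d = lambda a, b: abs(a - b)
# On the line, the Fréchet mean tracks the weighted average:
assert frechet_point([0, 1, 2, 3], [0, 2], [0.5, 0.5], d, 2) == 1
# while the Fréchet median picks a weighted-median point:
assert frechet_point([0, 1, 2, 3], [0, 0, 3], [1, 1, 1], d, 1) == 0
```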

The following proposition shows that the Fréchet mean and the Fréchet median are indeed feasible aggregation functions.

Proposition 1.

The Fréchet mean and the Fréchet median are feasible aggregation rules.

Proof.

We prove the statement for the Fréchet mean. A virtually identical argument applies to the Fréchet median.

Let be any agent, and and be any pair of states. It is straightforward to verify unanimity, namely, that if is a consensus on , then . In fact, every term of the summation would then be equal to 0.

We proceed to prove consistency. Let us assume that . If for all coordinates , then and are identical and . So, let us assume that for some coordinates . Since , it must be that for all coordinates with . Equivalently, for every with , we have that . Therefore, for every , it holds that

which implies that . ∎

Pure Nash Equilibria, Social Optima and Price of Anarchy. A pure Nash equilibrium (or equilibrium, for brevity) is a state such that for every agent and every strategy , . denotes the set of all pure Nash equilibria of a given preference game. A strategy is a best response of agent to a state , if . We say that a strategy is a (strictly) dominant strategy of agent , if (, if strictly), for all states and strategies .

We measure the efficiency of each state according to a social objective. We consider two social objectives, the social cost and the maximum cost . A state is optimal wrt. Sum, if , for all states . We denote by the set of optimal states wrt. Sum, i.e., . Similarly, a state is optimal wrt. Max, if , for all states . We let be the set of all optimal states wrt. Max.

The price of anarchy of a game wrt. Sum is the ratio between the worst social cost over all equilibria and the optimal social cost. If the optimal social cost is 0, the price of anarchy is defined to be 1 when the worst equilibrium social cost is also 0, and +∞ otherwise. The definition of the price of anarchy wrt. Max is similar.
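For intuition, the price of anarchy wrt. Sum of a tiny finite game can be computed by brute force, including the conventions for zero optimal cost. This is an illustrative sketch with our own names, not an implementation from the paper:

```python
from itertools import product

def poa_sum(strategy_space, n, cost):
    """Brute-force price of anarchy wrt. Sum for a tiny finite game.
    cost(i, profile) -> cost of agent i in the given strategy profile."""
    profiles = list(product(strategy_space, repeat=n))
    social = {p: sum(cost(i, p) for i in range(n)) for p in profiles}

    def is_equilibrium(p):
        # No agent can strictly decrease her cost by a unilateral move.
        return all(cost(i, p) <= cost(i, p[:i] + (s,) + p[i + 1:])
                   for i in range(n) for s in strategy_space)

    equilibria = [p for p in profiles if is_equilibrium(p)]
    if not equilibria:
        return None                  # no pure Nash equilibrium
    worst = max(social[p] for p in equilibria)
    opt = min(social.values())
    if opt == 0:
        return 1.0 if worst == 0 else float('inf')
    return worst / opt

# Two compliant agents mirroring each other: both consensus states are
# equilibria with zero cost, so the price of anarchy is 1.
assert poa_sum([0, 1], 2, lambda i, p: abs(p[i] - p[1 - i])) == 1.0
```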

Equivalence and Relative Distance between States. For every pair of states , , is the set of agents with different strategies in and . If , we say that and are globally equivalent. For every agent and all pairs of states , , we define the relative distance of and for as . If , and are equivalent for . We observe that implies (while the converse may not be true). Moreover, , for all strategies and .

Social Impact, Stretch and Boundary. The social impact of agent is and quantifies the intensity by which influences the environment. The global social impact is . We observe that and . We refer to the case where influence weights are symmetric, i.e., for all and , and as the fully symmetric case.
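A small sketch of the normalization and of the social impact (the representation `w[i][j]`, meaning the influence of agent j on agent i, and all names are our own; it assumes no self-influence, i.e., `w[i][i] == 0`):

```python
# Sketch of the influence normalization and the social impact.

def social_impacts(w):
    n = len(w)
    # Normalize so that the total influence exercised on each agent is 1.
    w = [[w[i][j] / sum(w[i]) for j in range(n)] for i in range(n)]
    # The social impact of agent j is the total influence she exerts.
    return [sum(w[i][j] for i in range(n)) for j in range(n)]

impacts = social_impacts([[0, 1], [1, 0]])
assert impacts == [1.0, 1.0]            # fully symmetric case
assert abs(sum(impacts) - 2) < 1e-9     # impacts always sum to n
```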

The stretch of agent quantifies how sensitive the aggregation function is wrt. changes in the state of the game. Formally, we let

We define , so as to account for the case where and (recall the definition of feasible aggregation functions). Therefore, the stretch always exists and is well defined. Since it is used in the price of anarchy bounds, we can restrict its definition to optimal states instead of arbitrary states. We use this more restricted definition in the proofs of Propositions 7 and 8 in Section 5.

The (global) stretch is  . At the conceptual level, the stretch quantifies how much the price of anarchy increases because agents only have access to an aggregate of the strategies in , instead of itself.

The boundary of agent , denoted by , quantifies how much closer a strategy can be to her preferred strategy compared with an equilibrium strategy of hers. Formally,

(4)

For nontrivial games, there always exists a strategy with . Thus, is well defined. The (global) boundary is  .

3 Existence of Pure Nash Equilibria

We first characterize the best responses of compliant and stubborn agents. The proof of Lemma 2 is analogous to that of Lemma 1. We remark that the results in this section hold only under the assumption that is an exact metric.

Lemma 1.

If (resp. ), is a (resp. the unique) best response of agent to , for every state .

Proof.

Let us assume . Let be any agent and be any state. Let be any strategy in (recall that ). Since , we have

(5)
(6)

where (5) follows from the triangle inequality and (6) from , which implies that . If , (6) is strict. Hence, is the unique best response of . ∎

Lemma 2.

If (resp. ), is a (resp. the unique) dominant strategy of any agent .

Proof.

Let us assume . Let be any agent in and be any state with (recall that ). Since , we have

(7)
(8)

where (7) follows from the triangle inequality and (8) follows from , which implies that . If , then (8) is strict. Therefore, is a strictly dominant strategy of agent . ∎

Theorems 1 and 2 are consequences of Lemmas 1 and 2. Characterizing the existence of pure Nash equilibria when and the game is (semi-)restricted () is an interesting direction for further research.

Theorem 1.

If , any consensus is an equilibrium.

Theorem 2.

If (resp. ) and the game is unrestricted () then is an equilibrium if (resp. and only if) for all agents .

4 Price of Anarchy

Compliant Agents. We first consider the case of compliant agents, where . If , the price of anarchy is . On the other hand, for , the price of anarchy is either unbounded or .

Proposition 2.

If , .

Proof.

Let us assume . The cost incurred by any agent in any state is . By Lemma 1, the cost incurred by any agent at any equilibrium is , from which the claim immediately follows. ∎

Proposition 3.

For , if the game is restricted (), there exist instances for which both and are unbounded.

Proof.

Let us assume . Let us consider the set of instances with , and , (i.e., , for every ), where are three distinct elements. Notice that . Since these are three distinct elements, the distance between any two of them is positive. Let the distance between and be an arbitrarily small positive number, i.e., . By Theorem 1, the state in which every agent chooses is a pure Nash equilibrium. Therefore, and . On the other hand, the cost of every agent in state , in which every agent chooses , is , where the second equality follows from the fact that and the third from the fact that . Hence and . We can conclude that for this instance . Since is an arbitrarily small positive number, the claim follows. ∎

By slightly modifying the proof of Proposition 3 and by letting and coincide, i.e., , we obtain a set of instances for the unrestricted game in which , from which the following proposition immediately follows.

Proposition 4.

For , if the game is unrestricted (), there exist instances with .

Stubborn Agents. We proceed to the case of stubborn agents, where . In Theorem 3 and Theorem 4, we show general bounds on the price of anarchy that depend on , , and . The proof of Lemma 3 follows from the equilibrium condition, the triangle inequality and the definition of stretch. Lemma 4 follows from Lemma 2.

Lemma 3.

For every agent , equilibrium and state ,

Proof.

Using that is an equilibrium state, we have

where the first inequality follows from the equilibrium condition, the second from the triangle inequality, and the last from the definition of stretch. Notably, the proof only requires that the restriction of to satisfies the triangle inequality. ∎

Lemma 4.

If , then for every equilibrium and optimal state with , we have , where .

Proof.

Let us assume . Let be any agent in . We prove the claim by showing that also belongs to . If then, by Theorem 2, . Since , this implies that , i.e., . If then does not belong to . Since belongs to , this trivially implies that , i.e., . ∎

Theorem 3.

If ,  .

Proof.

If , then . Otherwise, let be any equilibrium and be any optimal state with . We have

(9)
(10)
(11)

where (9) follows from Lemma 3, (10) from the definition of and , and (11) from the definitions of Sum, and the social impact of .

On the other hand,

(12)
(13)

where (12) follows from Lemma 4 and (13) from the definition of boundary (4). Notice that the last expression is strictly larger than because , and for every .
Therefore, we can conclude that

(14)
(15)
(16)

where (14) follows from (11), (15) from (13) and (16) from the definitions of and . ∎

Theorem 4.

If ,

Proof.

If , then trivially .

Otherwise, let and be two states such that . We recall that . Let be one of the agents with maximum cost at equilibrium, i.e., . We have

(17)