# Computational Aspects of Equilibria in Discrete Preference Games

We study the complexity of equilibrium computation in discrete preference games. These games were introduced by Chierichetti, Kleinberg, and Oren (EC '13, JCSS '18) to model decision-making by agents in a social network that choose a strategy from a finite, discrete set, balancing between their intrinsic preferences for the strategies and their desire to choose a strategy that is 'similar' to their neighbours. There are thus two components: a social network with the agents as vertices, and a metric space of strategies. These games are potential games, and hence pure Nash equilibria exist. Since their introduction, a number of papers have studied various aspects of this model, including the social cost at equilibria, and arrival at a consensus. We show that in general, equilibrium computation in discrete preference games is PLS-complete, even in the simple case where each agent has a constant number of neighbours. If the edges in the social network are weighted, then the problem is PLS-complete even if each agent has a constant number of neighbours, the metric space has constant size, and every pair of strategies is at distance 1 or 2. Further, if the social network is directed, modelling asymmetric influence, an equilibrium may not even exist. On the positive side, we show that if the metric space is a tree metric, or is the product of path metrics, then the equilibrium can be computed in polynomial time.


## 1 Introduction

Networks are a growing presence in our lives, and affect our behaviour in complex ways. A large amount of literature attempts to understand various facets of these networks. The literature is diverse, due to the large variety of networks and their myriad effects on our daily lives. Prominent among these is the work on opinion formation in social networks [5, 18]; algorithms to target agents in a network to promote adoption of a product [14, 20]; and models that accurately capture the special structure of social networks [6, 23].

We study a model of opinion formation in social networks. In a basic but commonly studied model of opinion formation, each agent in the network holds a real-valued opinion, such as her political leaning, and is influenced by her neighbours in the social network. Under this influence, in each time step she updates her opinion to the weighted average of her opinion and those of her neighbours. In a game-theoretic setting, this is a coordination game, where players try to coordinate their opinions with their neighbours. Probabilistic update models, where the opinions are drawn from a discrete set, are also studied [12]. Much of the work in opinion formation focuses on conditions for consensus, when all agents eventually hold the same opinion (e.g., [1, 5]). Clearly, however, consensus is not always attained in social networks, and the basic model has been extended in different ways to capture this lack of consensus (e.g., [24, 3, 11]).
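The repeated weighted-averaging update described above can be sketched as follows; the path graph, weights, and initial opinions are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of repeated weighted-average opinion updating
# (the basic model described above). Instance is illustrative.

def average_update(opinions, neighbours, weights, self_weight=0.5):
    """One synchronous step: each agent moves to the weighted average of
    her own opinion and her neighbours' opinions."""
    new = {}
    for i, x in opinions.items():
        total, norm = self_weight * x, self_weight
        for j in neighbours[i]:
            total += weights[i, j] * opinions[j]
            norm += weights[i, j]
        new[i] = total / norm
    return new

# Path graph 0 - 1 - 2 with unit weights.
neighbours = {0: [1], 1: [0, 2], 2: [1]}
weights = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 1.0, (2, 1): 1.0}
opinions = {0: 0.0, 1: 0.5, 2: 1.0}
for _ in range(100):
    opinions = average_update(opinions, neighbours, weights)
# On a connected graph, repeated averaging drives the agents to consensus.
```

On this symmetric instance the agents converge to the common opinion 0.5, illustrating the consensus behaviour the text describes.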

Further, most work focuses on the case where the opinion of an agent is either binary (in the set {0, 1}), or in the interval [0, 1]. These are clearly important, since opinions in many cases (e.g., political leanings, or product adoption) are captured by these sets. However, more complex strategy sets are often required. As an example, a person's political leaning is often a composite of her inclinations on various topics, such as economic inequality, foreign policy, and the tax regime. A more realistic model would consider a person's opinion as a composite of these individual opinions, and update accordingly. As a second example, a person's opinion could be a physical location, such as a choice of which neighbourhood to live in. In this case, the set of strategies would be more complex, and the update process would select the geometric median of the neighbour locations. Another example would be in understanding technology adoption from among different platforms such as Android, iOS, Blackberry, etc. The set of strategies is now a set of discrete points, with distances corresponding to the cost of switching from one technology platform to another.

We study a particular model for opinion formation called a discrete preference game [10, 11]. (A similar model was concurrently studied by Ferraioli et al. [17], though with binary strategies; both papers give a natural polynomial-time algorithm for equilibrium computation with binary strategies.) In this model, an agent can hold one of a finite set of strategies (opinions), and a distance function gives the distance between any pair of strategies. A natural restriction on the distance function is that it be a metric, and hence the strategies are points in a metric space. In addition, each agent has an intrinsic preferred strategy which is fixed. The cost of each agent for a strategy is the sum of weighted distances to her neighbours and to her preferred strategy. The presence of preferred strategies leads to the absence of consensus as an equilibrium in general [21]. Further, the representation of strategies as points in a metric space allows modelling of many complex situations, beyond the simple settings studied earlier.

Since their introduction, numerous papers have studied various properties of these games, including bounding the ratio of the total cost of equilibria to the minimum total cost (called the Price of Anarchy or Stability), as well as generalisations [4, 11]. In a natural update process, each player in her turn chooses a strategy that minimises her cost. While it is known that this update process leads to an equilibrium, the number of turns required may be exponential in the size of the game.

In this work, we study computational aspects of equilibria in discrete preference games. Equilibrium computation is a fundamental problem in computational game theory, and the lack of efficient algorithms for it is often viewed as a stumbling block to the notion of equilibria as a prediction of player behaviour (e.g., [13]). Algorithms for computing equilibria are also useful, e.g., in simulations to study properties of equilibria, or to obtain approximations to the global optimum for the underlying distance-minimisation problem (e.g., [8]).

Coordination games on graphs are another model closely related to discrete preference games [2, 3]. In these games, agents attempt to coordinate with their neighbours; however, the set of strategies available to each player is restricted. The distance between any pair of distinct strategies is 1, and hence these are similar to discrete preference games with the discrete metric.

### Our Contribution

We present our results informally here. Formal definitions and results are given in later sections.

We first show that equilibrium computation in discrete preference games is hard, even if we restrict the number of neighbours that each agent has in the social network.

###### Result 1.

It is PLS-hard to find an equilibrium in discrete preference games, even when each player has constant degree in the social network.

If we allow the edges in the social network to be weighted, modelling varying degrees of influence by the neighbours, then equilibrium computation is hard even with multiple restrictions on the metric space.

###### Result 2.

In weighted discrete preference games, it is PLS-hard to compute an equilibrium, even when each player has constant degree in the social network, the number of strategies is constant, and the distance between any pair of strategies is one or two.

Our results are interesting because these are examples where equilibrium computation is hard in a pure coordination game. In previous games where hardness was shown for equilibrium computation, there were incentives for anti-coordination, i.e., players had an incentive to choose strategies different from their neighbours (e.g., local max-cut games [22], congestion games [16], and even coordination-only polymatrix games [9]).

Lastly, we show that if we allow the edges in the social network to be directed, then an equilibrium may not even exist (and hence, the update process described may cycle).

###### Result 3.

In a discrete preference game with directed edges, an equilibrium may not exist.

We note that directed edges in social networks are clearly more general, and allow the model to capture asymmetric influences. E.g., Facebook offers one the ability to ‘follow’ another person, which is an asymmetric method of influence. Both undirected and directed social networks are commonly studied (e.g., [2, 3, 7, 24]).

In our example to show nonexistence of equilibria, the social network consists of a single strongly connected component. In coordination games on graphs, it is known that if the social network has a single strongly connected component then an equilibrium always exists [3]. Our work thus shows this does not hold if we allow more complicated metric spaces.

We show, however, that in two particular cases, an equilibrium can be computed in polynomial time.

###### Result 4.

If the metric space is a tree metric, or is the Cartesian product of path metrics, an equilibrium can be computed in polynomial time.

The case of tree metrics was studied earlier, with bounds shown on the Price of Stability [11]. The authors also motivate tree metrics by an example of students choosing a major in college, where different subjects follow a hierarchy for proximity. The product metric space roughly corresponds to the case when the metric space is a regular grid. A natural scenario modelled by the product metric is the case presented in the introduction, where an agent's strategy is a composite of a number of individual opinions, and the distance between two strategies is the sum of distances for each individual opinion.

Our algorithms for these cases are simple; however, they obtain equilibria in substantial generalisations of discrete preference games as well, where the social network is a weighted directed graph and, instead of having a single preferred strategy, agents have multiple preferred strategies with different weights for each. Thus, this result also shows the existence of equilibria in directed discrete preference games with these metric spaces.

## 2 Preliminaries

In the basic model, a discrete preference game consists of an undirected, unweighted neighbourhood graph $G = (V, E)$ representing the social network of players, and a metric space $(L, d)$ [11]. Here, $L$ is the set of strategies, and $d$ is a distance metric — a function on pairs of strategies that satisfies: (i) $d(u, v) = 0$ iff $u = v$, and $d(u, v) > 0$ otherwise, (ii) $d(u, v) = d(v, u)$, and (iii) $d(u, v) \le d(u, w) + d(w, v)$. Each player $i$ has a preferred strategy $s_i \in L$. Since the strategies exist in a metric space, we will also refer to the strategies as points in the metric space. We will use $z_i$ for the strategy of the $i$-th player, $z$ for a strategy profile, and $z_{-i}$ for the strategies of all players except $i$.

Given a parameter $\alpha \in [0, 1]$ and a strategy profile $z$, the cost for player $i$ is:

$$c_i(z) = \alpha\, d(s_i, z_i) + (1 - \alpha) \sum_{j \in N_i} d(z_i, z_j),$$

where $N_i$ is the set of neighbours of $i$, not including herself. Thus the cost of a strategy for player $i$ is $\alpha$ times the distance from her preferred strategy, plus $(1 - \alpha)$ times the total distance from her neighbours. Each player tries to minimise her cost, and hence tries to choose a strategy that is the weighted median of her preferred strategy and the strategies of her neighbours.
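As a concrete illustration of this cost function, the following sketch evaluates it on a tiny two-player instance under the discrete metric; the instance is an illustrative assumption, not from the paper.

```python
# A minimal sketch of the cost in the unweighted model: alpha times the
# distance to the preferred strategy, plus (1 - alpha) times the summed
# distance to the neighbours' strategies. Instance is illustrative.

def player_cost(i, profile, preferred, neighbours, dist, alpha):
    own = profile[i]
    return (alpha * dist[own][preferred[i]]
            + (1 - alpha) * sum(dist[own][profile[j]] for j in neighbours[i]))

# Two strategies {0, 1} with the discrete metric: d(x, y) = 1 iff x != y.
dist = [[0, 1], [1, 0]]
neighbours = {0: [1], 1: [0]}
preferred = {0: 0, 1: 1}
profile = {0: 0, 1: 0}  # player 1 conforms with her neighbour
c1 = player_cost(1, profile, preferred, neighbours, dist, alpha=0.5)
# c1 = 0.5 * d(0, 1) + 0.5 * d(0, 0) = 0.5
```

Here player 1 pays only for abandoning her preference, having matched her neighbour exactly.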

We also study two natural generalisations of the basic model of discrete preference games. In the first generalisation, we allow weights on the edges of the neighbourhood graph. This models the realistic scenario where different neighbours of a player have different levels of influence on her actions. In this case, for player $i$, the strategy profile $z$ has cost:

$$c_i(z) = w_i\, d(s_i, z_i) + \sum_{j \in N_i} w_{ij}\, d(z_i, z_j),$$

where $w_i$ is the weight player $i$ places on her preferred strategy, and $w_{ij}$ is the weight on the undirected edge $\{i, j\}$.

In the second generalisation, we allow edges to be directed as well as weighted. This naturally models the case when influences are asymmetric: e.g., Facebook, in addition to the option of adding a person as a friend, offers one the ability to 'follow' another person, which is an asymmetric method of influence. In this case, the expression for the cost of player $i$ for strategy profile $z$ remains unchanged, though the neighbours of player $i$ are now those players to which $i$ has outgoing edges in the neighbourhood graph.

An equilibrium is a strategy profile where no player can deviate to a different strategy and reduce her cost. We are interested in algorithms for computing equilibria in discrete preference games. In the weighted setting, these games are exact potential games. That is, for every weighted discrete preference game, there is a potential function $\Phi$ of the strategy profile with the property that if a player deviates from a strategy profile, then the change in her cost is exactly the change in the potential function as well. It can be verified that the potential function for the weighted setting is:

$$\Phi(z) = \sum_{i \in V} w_i\, d(s_i, z_i) + \sum_{\{i, j\} \in E} w_{ij}\, d(z_i, z_j). \tag{1}$$

A finite potential game always has an equilibrium, since at the minimum of the potential function, no player has a deviating strategy that reduces her cost. Thus, undirected weighted discrete preference games always possess an equilibrium. Further, best response dynamics — where in each step, a player chooses her minimum cost strategy in response to other players, and deviates to it — converges to an equilibrium, since in each step the potential function decreases.

However, best-response dynamics may, in general, take exponential time to converge to an equilibrium. We are interested in efficient algorithms for equilibrium computation: algorithms that, for some polynomial $p$, run in time $p(n)$, where $n$ is the size of the input, and return an equilibrium. This is the subject of Section 4.
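The best-response dynamics just described can be sketched as follows, together with the potential function of equation (1); the three-player instance, weights, and preferences are illustrative assumptions, not from the paper.

```python
# A minimal sketch of best-response dynamics in a weighted discrete
# preference game. Each improving move strictly decreases the potential
# of equation (1), so the loop terminates at an equilibrium, though
# possibly after exponentially many steps in general.

def cost(i, z, s, w_pref, nbrs, w, dist):
    return (w_pref[i] * dist[z[i]][s[i]]
            + sum(w[i, j] * dist[z[i]][z[j]] for j in nbrs[i]))

def potential(z, s, w_pref, edges, w, dist):
    return (sum(w_pref[i] * dist[z[i]][s[i]] for i in z)
            + sum(w[i, j] * dist[z[i]][z[j]] for i, j in edges))

def best_response_dynamics(z, s, w_pref, nbrs, w, dist, strategies):
    improved = True
    while improved:
        improved = False
        for i in list(z):
            current = cost(i, z, s, w_pref, nbrs, w, dist)
            for x in strategies:
                trial = dict(z)
                trial[i] = x
                if cost(i, trial, s, w_pref, nbrs, w, dist) < current:
                    z, improved = trial, True
                    break
    return z

# Three players on a path with the discrete metric on two strategies;
# heavy edges keep the players coordinated despite the end players
# preferring the other strategy.
dist = [[0, 1], [1, 0]]
edges = [(0, 1), (1, 2)]
nbrs = {0: [1], 1: [0, 2], 2: [1]}
w = {(0, 1): 2.0, (1, 0): 2.0, (1, 2): 2.0, (2, 1): 2.0}
w_pref = {0: 1.0, 1: 1.0, 2: 1.0}
s = {0: 0, 1: 1, 2: 0}
z = best_response_dynamics({0: 1, 1: 1, 2: 1}, s, w_pref, nbrs, w, dist, [0, 1])
```

At termination no player has an improving deviation, and the potential is no larger than at the starting profile.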

In Section 3 we show that in general, the problem of equilibrium computation is hard, by showing that even in many simple cases, equilibrium computation is PLS-complete. The class PLS, for Polynomial Local Search, was introduced to study the complexity of finding a local minimum for problems where a local search step can be carried out in polynomial time [19]. Discrete preference games fall in this class, since finding an equilibrium is equivalent to finding a local minimum of the potential function $\Phi$. The locality of a strategy profile is the set of all profiles where a single player deviates. By computing the cost of each deviation, for each player, we can obtain a solution with a lower value of the potential in polynomial time, if one exists.

A problem is PLS-complete if it is in PLS and is PLS-hard. PLS-hardness of a problem means that all problems in the class PLS can be polynomially reduced to this problem. Many problems are by now known to be PLS-complete, including local max-cut, max-2SAT, and equilibrium computation in congestion games [16, 22].

## 3 Hardness of Computing Equilibria

We start with two simple cases where an equilibrium can be computed in polynomial time. Firstly, if the parameter $\alpha = 0$, then in any instance where the neighbourhood graph is connected, the following is an equilibrium: all players choose the same strategy. If the neighbourhood graph is disconnected, then each isolated player chooses her preferred strategy, while all players in a connected component choose the same strategy. Secondly, in weighted preference games, if the weights on the edges as well as the distance between any two strategies are bounded (above and below) by polynomials in the size of the input $n$, then an equilibrium can be computed in polynomial time by best-response dynamics. In this case, the potential is bounded above by a polynomial in $n$, and each best-response step reduces the potential by at least an inverse polynomial in $n$. Hence best-response dynamics converges in polynomial time to a local minimum of the potential function, which is also an equilibrium.

Despite these results, we show that equilibrium computation is in general hard in discrete preference games, even in simple settings. Specifically, we show that in the unweighted setting, for any $\alpha \in (0, 1)$, computing an equilibrium is PLS-complete even when each player has constant degree. In the weighted setting, computing an equilibrium is PLS-complete even when each player has constant degree, the number of strategies is constant, and the distance between every pair of strategies is either one or two. For directed neighbourhood graphs, we show that an equilibrium may not even exist.

For the hardness results, we show a reduction from the local max-cut game. In a local max-cut game, we are given an undirected weighted graph $G = (V, E)$ with $n$ vertices. Vertices correspond to players, and each player has two strategies. The utility of a player is the sum of the weights of edges to players that choose the strategy different from hers, i.e., $u_i(z) = \sum_{j : \{i, j\} \in E,\ z_j \neq z_i} w_{ij}$. Equilibrium computation in the max-cut game is known to be PLS-complete, even if each player has degree five [15].

For any $\alpha \in (0, 1)$ in the unweighted setting, it is PLS-hard to find an equilibrium in discrete preference games, even when each player has constant degree.

###### Proof.

Given an instance of local max-cut with weights $w_{ij}$ on the edges, we construct an instance of a discrete preference game whose strategy profiles are in correspondence with those of the local max-cut game; in fact, the cost in the discrete preference game is exactly a constant minus the utility in the max-cut game. Let $n$ be the number of players in either game. We make two assumptions: that each player can be restricted to a subset of strategies, and that some players do not have a preferred strategy. We first describe the reduction under these assumptions, and later show how these assumptions can be removed. With these assumptions, we choose the neighbourhood graph to be the max-cut graph itself. The strategy set contains two strategies for each player $i$, and we assume that player $i$ is restricted to these two strategies; thus the strategy set has size $2n$. Finally, for each edge $\{i, j\} \in E$, the distances between the strategies of $i$ and $j$ are chosen to encode the weight $w_{ij}$: if players $i$ and $j$ play strategies of the same type (both first, or both second), their distance is larger than if they play strategies of different types.

Figure 1 shows the reduction for an instance of max-cut with three vertices.

Note first that the set of players is identical in both games. For every strategy profile in the max-cut game, there is a strategy profile in the discrete preference game where player $i$ plays her first strategy if she plays the first max-cut strategy, and plays her second strategy otherwise. It is then easy to see that the cost of player $i$ is a constant minus her utility in the max-cut game. There is thus a correspondence between strategy profiles in the two games, and the cost in one is a constant minus the utility in the other. It follows that a profile is an equilibrium in the max-cut game if and only if the corresponding profile (as constructed above) is an equilibrium in the discrete preference game.

We now discuss how to remove the two assumptions. Our first assumption is that a player can be restricted to two strategies. To remove this, for each player $i$, we introduce 20 players, called auxiliary players. Each of these has an edge to player $i$ in the neighbourhood graph, and thus has degree 1. Ten of the auxiliary players have the first of $i$'s two designated strategies as their preferred strategy, while the other ten have the second. Since they have degree 1, the best response for these players is always to play their preferred strategy. Now note that since each non-auxiliary player has degree 25 in the neighbourhood graph, if player $i$ plays one of her two designated strategies, her cost is bounded, whereas if she plays any other strategy, her cost from the auxiliary players alone is strictly larger. Hence her best response is always to play one of her two designated strategies. Further, since the auxiliary players for player $i$ are equally distributed between the two designated strategies as preferences, their addition does not affect player $i$'s choice of strategy between the two, which depends on the strategies chosen by the non-auxiliary players.

Our last assumption is that the non-auxiliary players do not have a preferred strategy. This is removed by introducing another point into the metric space, at a distance $D$ from all other strategies, which is the preferred strategy of all non-auxiliary players. However, if $D$ is very large, then it would be an equilibrium for all players to choose this point. To fix this, we increase the number of auxiliary players for each player from 20 to a suitably larger constant. It can be checked that in this case, player $i$'s best response is always to play one of her two designated strategies. We note that each player now has degree bounded by a constant (for fixed $\alpha$). ∎

We now show that if the edges in the neighbourhood graph are weighted, equilibrium computation is hard even in simpler settings.

In the weighted setting, it is PLS-hard to compute an equilibrium, even when each player has constant degree in the neighbourhood graph, the strategy set has constant size, and the distance between any pair of strategies is either one or two.

###### Proof.

As before, given an instance of local max-cut with weights on the edges and degree five for each vertex, we construct an instance of a discrete preference game whose strategy profiles are in correspondence with those of the local max-cut instance. Let $n$ be the number of players. We first describe the reduction under the assumption that each player can choose one of only two strategies, and later show how this assumption can be removed without loss of generality. With this assumption, we choose the weighted neighbourhood graph to be the max-cut graph itself, with the same weights.

To construct the metric space, we use the fact that a graph of maximum degree five can be properly coloured by a greedy algorithm with six colours. That is, every vertex in the graph can be assigned one of six colours, so that if vertices $i$, $j$ are adjacent in the graph, then they are assigned different colours. Thus, the neighbourhood graph can be coloured with six colours; fix such a colouring, label the six colours used $1, \ldots, 6$, and call the colour assigned to vertex $i$ the colour of $i$.

Our metric space consists of 12 strategies: the pairs $(p, c)$ with $p \in \{0, 1\}$ and $c$ one of the six colours. We call the first component the parity of the strategy, and the second component the colour of the strategy. The distance between two points is 1 if their parities differ, and is 2 otherwise. We assume that each player $i$ is restricted to the two strategies whose colour is the colour of $i$. Note that this means that for a player $i$, since all of her neighbours have a different colour, they cannot be at the same point in the metric space as $i$. Hence the cost of $i$ is at least the total weight of her incident edges. Further, it is easily seen that in any strategy profile, the cost of a player $i$ is twice the total weight of her incident edges, minus the weight of the neighbours of $i$ that play a parity different from that of $i$'s strategy.

For every strategy profile in the max-cut game, there is a strategy profile in the discrete preference game where player $i$ plays the strategy of parity 0 with her colour if she plays the first max-cut strategy, and plays the strategy of parity 1 with her colour otherwise. Then the cost of player $i$ in the discrete preference game is twice the total weight of her incident edges minus her utility in the max-cut game. There is thus a correspondence between strategy profiles in the two games, and the cost in one is (a constant plus) the negative of the utility in the other. It follows that a profile is an equilibrium in the max-cut game if and only if the corresponding profile (as constructed above) is an equilibrium in the discrete preference game.

We remove the assumption in a manner similar to the previous proof, though since the neighbourhood graph is weighted we require fewer auxiliary players. For each existing player $i$, we introduce 2 new players, called auxiliary players, each connected to player $i$ in the neighbourhood graph by an edge of suitable weight; each auxiliary player thus has degree 1. One auxiliary player has the first of $i$'s two designated strategies as its preferred strategy, and the other has the second, each with a suitably large weight on the preference. Notice that: (1) since the auxiliary players have degree 1, and the weight they place on their preferred strategy exceeds the weight of their single incident edge, they will always play their preferred strategy; (2) by a simple calculation as in the previous proof, the best response for player $i$ is always to play one of her two designated strategies. The symmetry of the auxiliary players implies that their presence does not affect player $i$'s choice between the two. This completes the proof. ∎

We now give an example for a directed neighbourhood graph where an equilibrium does not exist. As before, we first describe our example under the assumption that we can restrict players to a subset of strategies, and then introduce auxiliary players to remove this assumption.

With the assumption that we can restrict players to a subset of the strategies, the neighbourhood graph and the metric space for our example are shown in Figure 2. There are three players, 1, 2, and 3. In the neighbourhood graph, player $i$ has a directed edge to player $i + 1$ (in this example, indices are always taken modulo 3, to avoid repetition). The metric space consists of 6 strategies: the pairs $(i, p)$ for $i \in \{1, 2, 3\}$ and $p \in \{0, 1\}$. We think of the second coordinate as the 'parity', and of the strategies as nodes in a complete bipartite graph on six vertices, with any two strategies of the same parity at distance 2, and any two strategies of different parities at distance 1. By our assumption, we restrict player $i$ to the strategies $(i, 0)$ and $(i, 1)$.

From the neighbourhood graph, player $i$ wants to be near player $i + 1$. However, in the metric space, for any (restricted) choice of strategy for player $i + 1$, the strategy of player $i$ nearest to it has the opposite parity. Hence each player tries to choose a strategy of parity opposite to that of player $i + 1$. Since the three players form an odd directed cycle, no choice of parities can satisfy every player simultaneously, and hence there is no equilibrium.

Lastly, to remove the assumption of strategy restrictions, for each player $i$, we add ten new players. In the neighbourhood graph, each player $i$ has an edge to her ten new players, and the game now has 33 players. For each $i$, five of the new players have $(i, 0)$ as their preferred strategy, while the other five have $(i, 1)$. Since the newly added players only have incoming edges, they always choose their preferred strategy. Then for each player $i$, choosing a strategy from the set $\{(i, 0), (i, 1)\}$ gets cost 5 from the auxiliary players, and cost at most 2 from the other non-auxiliary players. Whereas, any different strategy for player $i$ gets cost at least 15 from the auxiliary players. Hence player $i$ will always choose from the set $\{(i, 0), (i, 1)\}$ at equilibrium. Since the newly added players are split evenly between these two strategies, they do not further affect $i$'s choice of strategy.
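Under the strategy restriction, the nonexistence claim for the three-player directed cycle can be checked exhaustively; the sketch below brute-forces all eight parity profiles, with player costs as described above (an illustrative check, not from the paper).

```python
# A minimal sketch verifying, by brute force, that the 3-player directed
# example has no equilibrium once each player i is restricted to her two
# strategies (i, 0) and (i, 1). Player i's cost is her distance to player
# i + 1 (indices mod 3): 1 if their parities differ, 2 if they agree.

from itertools import product

def cost(i, parities):
    return 1 if parities[i] != parities[(i + 1) % 3] else 2

def is_equilibrium(parities):
    for i in range(3):
        for p in (0, 1):
            trial = list(parities)
            trial[i] = p
            if cost(i, trial) < cost(i, parities):
                return False
    return True

equilibria = [z for z in product((0, 1), repeat=3) if is_equilibrium(z)]
# equilibria is empty: in every profile some player shares a parity with
# her out-neighbour, since an odd cycle admits no proper 2-colouring.
```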

## 4 Algorithms for Computing Equilibria

We now give efficient algorithms for computing equilibria in discrete preference games with restrictions on the metric space. However, we allow a significant generalisation of the neighbourhood graph: we allow directed, weighted neighbourhood graphs, where instead of a preferred strategy, players have a penalty associated with each point in the metric space. Formally, for each node $v$ in the metric space $L$ and each player $i$, there is a real-valued penalty $p_i(v)$. The cost for player $i$ for the strategy profile $z$ is

$$c_i(z) = \sum_{v \in L} p_i(v)\, d(v, z_i) + \sum_{j \in N_i} w_{ij}\, d(z_i, z_j).$$

Our results thus show that in the metric spaces discussed below, equilibria exist, even in the case of directed neighbourhood graphs. E.g., this shows that equilibria exist in the case of path metrics.

We discuss metric spaces in more detail now. Any undirected weighted graph on $n$ vertices corresponds to a metric space with $n$ points, where every vertex is a point, and the distance between any pair of points is the weight of the minimum-weight path in the graph between the corresponding vertices. Such a metric space is a graph metric. Further, any finite metric space on $n$ points can be represented as a graph metric, by considering the complete graph on $n$ vertices where the weight of the edge between any pair of vertices is the distance between them.
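The graph-metric construction just described can be sketched with a standard all-pairs shortest-path computation (Floyd-Warshall); the small graph below is an illustrative assumption.

```python
# A minimal sketch of the graph-metric construction: the distance between
# two points is the minimum-weight path between the corresponding
# vertices, computed here with the Floyd-Warshall algorithm.

INF = float("inf")

def graph_metric(n, weighted_edges):
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in weighted_edges:
        d[u][v] = min(d[u][v], w)  # undirected: set both directions
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# A path 0 - 1 - 2 with edge weights 1 and 2: d(0, 2) = 3.
d = graph_metric(3, [(0, 1, 1), (1, 2, 2)])
```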

We first give an algorithm for when the graph metric is a tree, with positive lengths on the edges. Note that this contains the special case when the graph metric is a path. We then generalise path metrics in another direction, by considering the Cartesian product of path metrics. This product metric intuitively is obtained when the graph for the metric space is a regular grid.

### 4.1 An algorithm for tree metrics

Our algorithm for tree metrics initially places all players at the root. If any player can improve her cost by moving to a child of her current strategy, the algorithm changes her strategy accordingly. In a metric space with $m$ points, each player can change her strategy at most $m$ times, since each change moves her strategy one level deeper in the tree; hence the algorithm terminates in polynomial time. We now show that when it terminates, the strategy profile is an equilibrium.
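The algorithm can be sketched as follows; the function names, tree, penalties, and weights are illustrative assumptions, not from the paper.

```python
# A minimal sketch of the tree-metric algorithm described above: every
# player starts at the root, and a player moves to a child of her current
# node whenever that strictly lowers her cost.

def cost(i, z, penalties, nbrs, w, dist):
    return (sum(p * dist[v][z[i]] for v, p in penalties[i].items())
            + sum(w[i, j] * dist[z[i]][z[j]] for j in nbrs[i]))

def tree_metric_algo(players, root, children, dist, penalties, nbrs, w):
    z = {i: root for i in players}
    moved = True
    while moved:
        moved = False
        for i in players:
            for c in children[z[i]]:
                trial = dict(z)
                trial[i] = c
                if cost(i, trial, penalties, nbrs, w, dist) < cost(i, z, penalties, nbrs, w, dist):
                    z, moved = trial, True
                    break
    return z

# A path 0 - 1 - 2 viewed as a tree rooted at 0, unit edge lengths.
children = {0: [1], 1: [2], 2: []}
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
# A single player whose penalty mass sits at node 2 walks down to it.
z = tree_metric_algo([0], 0, children, dist, {0: {2: 1.0}}, {0: []}, {})
```

Note that the sketch already accommodates the generalisation above: directed weighted neighbourhoods and per-point penalties in place of a single preferred strategy.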

To prove convergence, we first characterise the best response. Fix a player $i$ and the strategies of the other players. For any node $v$ in the tree, let the weight of $v$ be the total weight of $i$'s neighbours $j$ with $z_j = v$, plus $i$'s penalty $p_i(v)$ for the point $v$. This gives us a tree with weights on the nodes. We say that the cost of a node $u$ in the tree is the total weighted distance to the other nodes, i.e., the sum over nodes $v$ of the weight of $v$ times $d(u, v)$. The set of minimum-cost nodes in the weighted tree are called the medians of the tree, and these are exactly the best responses for player $i$, since the cost of player $i$ for playing $u$ equals the cost of the node $u$ in the weighted tree.

We will use the following result, which further characterises the medians. Given weights at the nodes, let $W$ be the total weight of the nodes of the tree, and for a node $v$, let $T \setminus v$ be the graph obtained by removing node $v$.

###### Claim 1 ([11]).

A node $v$ is a median of the tree iff each connected component obtained by removing $v$ from the tree has weight at most half the total weight of the tree.
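Claim 1 can be illustrated with a short median check on a node-weighted tree; the star instance below is an illustrative assumption.

```python
# A minimal sketch of Claim 1: a node v is a median of a node-weighted
# tree iff removing v leaves no component of weight more than half the
# total weight.

def component_weights(adj, weight, v):
    """Weights of the connected components of the tree with v removed."""
    seen = {v}
    comps = []
    for start in adj[v]:
        total, stack = 0, [start]
        seen.add(start)
        while stack:
            u = stack.pop()
            total += weight[u]
            for x in adj[u]:
                if x not in seen:
                    seen.add(x)
                    stack.append(x)
        comps.append(total)
    return comps

def is_median(adj, weight, v):
    half = sum(weight.values()) / 2
    return all(c <= half for c in component_weights(adj, weight, v))

# Star with centre 0 and leaves 1, 2, 3, unit weights everywhere.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
weight = {0: 1, 1: 1, 2: 1, 3: 1}
# Removing the centre leaves three components of weight 1 <= 2, so the
# centre is a median; removing a leaf leaves one component of weight 3 > 2.
```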

We also use the following claim.

###### Claim 2.

Given a tree with weights at the nodes, let $u$ be an arbitrary vertex and $v$ be a median nearest to $u$. Then the cost of the nodes strictly decreases along the path from $u$ to $v$.

###### Proof.

Let $u = v_0, v_1, \ldots, v_k = v$ be the path from $u$ to $v$. Note that $k \geq 1$, since $u$ is not a median. Root the tree at $v$. All of these nodes (except $v$, which is the median) are in the same connected component of the graph obtained by removing $v$, and by Claim 1 the total weight of the nodes in this component is at most half the total weight. Now consider any node $v_t$ for $t < k$. We know that the subtree rooted at $v_t$ has total weight at most half the total weight (since it is contained in this component). Since $v_t$ is not a median, the subtree rooted at $v_t$ must have total weight strictly less than half the total weight. Moving from $v_t$ to $v_{t+1}$ increases the distance from every node in this subtree by the length of the edge $(v_t, v_{t+1})$, and decreases the distance from every other node by this quantity, and hence decreases the cost. ∎

We now prove convergence of the algorithm.

The Tree Metric Algo terminates at an equilibrium.

###### Proof.

Let $z$ be the strategy profile when the algorithm terminates. Suppose for a contradiction that for player $i$, the strategy $z_i$ is not a best response, while $y$ is a nearest best response (and so a median) with lower cost. In the following, we consider the weighted tree with edge lengths as in the metric space, and weights on the nodes, where for any node $v$ in the tree, the weight is the total weight of $i$'s neighbours playing $v$, plus $i$'s penalty for $v$. As earlier, the cost of a node in the tree is the total weighted distance to the other nodes.

Let $P$ be the path from $z_i$ to $y$; then by Claim 2, the cost strictly decreases along this path towards $y$. In particular, the node $x$ after $z_i$ on $P$ has cost lower than that of $z_i$. Since the algorithm terminated, $x$ cannot be a child of $z_i$, and hence $x$ must be $z_i$'s parent. Let $S$ be the subtree rooted at $z_i$. Consider the timestep when player $i$ moved from $x$ to $z_i$. Note that this move decreased $i$'s distance from every node in $S$ by the length of the edge $(x, z_i)$, and increased the distance from every other node by the same length. Since this move decreased $i$'s cost, at that time, the total weight of $S$ must have been at least half the total weight. Since that time step, players have only moved away from the root, and hence in particular any player that was in $S$ at that timestep must still be in $S$; hence when the algorithm terminates, the weight of $S$ must still be at least half the total weight. However, since $y$ is a median and $y \notin S$, by Claim 1 the weight of $S$ is also at most half the total weight. It follows that every connected component obtained by removing $z_i$ has weight at most half the total weight, so $z_i$ must also be a median, giving us a contradiction. ∎

### 4.2 An algorithm for the Cartesian product of path metrics

We now give an algorithm for equilibrium computation when the metric space is the Cartesian product of path metrics. As discussed, a path metric can be represented as a path. Alternatively, a path metric can be embedded in the real number line so that the distance between two points is the absolute difference of their embedded values.

A metric space $(X, d)$ is the Cartesian product of path metrics $(X_1, d_1), \ldots, (X_k, d_k)$ (or a product metric, for brevity) if $X = X_1 \times \cdots \times X_k$, and for any two points $x = (x_1, \ldots, x_k)$ and $y = (y_1, \ldots, y_k)$ in $X$, the distance $d(x, y) = \sum_{j=1}^{k} d_j(x_j, y_j)$. Alternatively, $X$ is the Cartesian product of path metrics if it can be embedded in $\mathbb{R}^k$, so that the distance between any two points is the $\ell_1$ distance of their embeddings, and whenever $x$ and $y$ are points in the embedding, so are the points of the grid $\{x_1, y_1\} \times \cdots \times \{x_k, y_k\}$.
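As a quick illustration of the definition, the product distance is just the $\ell_1$ distance between the embedded points (the coordinates below are made up):

```python
def product_distance(x, y):
    """Distance in a product of path metrics: the sum of the
    per-coordinate (absolute) path-metric distances."""
    return sum(abs(xj - yj) for xj, yj in zip(x, y))

# Two points in a product of three path metrics, embedded in R^3.
x, y = (0, 2, 5), (3, 2, 1)
print(product_distance(x, y))  # 3 + 0 + 4 = 7
```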

For a discrete preference game on a product metric, for each player $i$, her strategy $s_i = (s_i^1, \ldots, s_i^k)$ is a vector, with the $j$th coordinate denoting her position in the path metric $(X_j, d_j)$.

For the algorithm, we first characterize equilibria. Given a discrete preference game with product metric and a strategy profile $s$, we say player $i$ is playing her partial best response in the $j$th metric if she is at a median in the path metric $(X_j, d_j)$ (we defined the set of medians earlier, for tree metrics). Note that a player may have multiple best responses.

###### Claim 3.

Player $i$ is playing her best response iff she is playing her partial best response in each metric $(X_j, d_j)$, $1 \le j \le k$.

###### Proof.

The claim holds because the distance between two points in the product metric space is the sum of the distances in the individual metric spaces. Further, the position in each path metric can be chosen independently. Hence a player minimizes her cost if and only if she minimizes her cost in each component path metric, i.e., she plays a partial best response in each path metric $(X_j, d_j)$ for $1 \le j \le k$. ∎
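Since a partial best response in a path metric is a weighted median of the relevant coordinates (neighbours' positions plus the player's intrinsic preference), it can be computed directly. A minimal sketch, with illustrative points and weights, breaking ties toward the smallest median:

```python
def weighted_median(points, weights):
    """Smallest point at which the cumulative weight reaches half the
    total; such a point minimizes the total weighted distance on a line."""
    order = sorted(range(len(points)), key=lambda i: points[i])
    total = sum(weights)
    acc = 0.0
    for i in order:
        acc += weights[i]
        if 2 * acc >= total:
            return points[i]

# Coordinates of neighbours (and preference) in one path metric.
print(weighted_median([1, 4, 9], [1.0, 1.0, 1.0]))  # → 4
```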

Since the Tree Metric Algo terminates in polynomial time, so does the Product Metric Algo.

The Product Metric Algo terminates at an equilibrium.

###### Proof.

Let $s$ be the strategy profile when the algorithm terminates. For each player $i$, the $j$th component $s_i^j$ is set in the $j$th iteration of the for loop, and after this iteration each player is playing a partial best response in $(X_j, d_j)$ (later iterations do not change this component). By Claim 3, after the last iteration of the for loop, the strategy profile is an equilibrium. ∎
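The coordinate-wise scheme can be sketched as follows. This is a simplified stand-in for the Product Metric Algo, not the paper's exact pseudocode: it assumes unit edge weights in the social network and unit weight on the intrinsic preference, runs plain best-response dynamics within each coordinate, and breaks median ties toward the smaller value; all names are illustrative.

```python
def path_median(points):
    # Lower median: minimizes the total distance to the given points.
    pts = sorted(points)
    return pts[(len(pts) - 1) // 2]

def product_metric_algo(neighbours, prefs, k):
    """neighbours: player -> list of players; prefs: player -> k-tuple."""
    strategy = {i: list(prefs[i]) for i in prefs}
    for j in range(k):                   # fix one coordinate at a time
        changed = True
        while changed:                   # best-response dynamics in path metric j
            changed = False
            for i in prefs:
                # Neighbours' positions plus the intrinsic preference in coordinate j.
                targets = [strategy[n][j] for n in neighbours[i]] + [prefs[i][j]]
                m = path_median(targets)
                if m != strategy[i][j]:
                    strategy[i][j] = m
                    changed = True
    return strategy

# A 3-player star network: player 0 is pulled to its two neighbours' position.
nbrs = {0: [1, 2], 1: [0], 2: [0]}
prefs = {0: (0, 0), 1: (10, 0), 2: (10, 0)}
print(product_metric_algo(nbrs, prefs, 2))  # player 0 moves to coordinate 10
```

By Claim 3, once every player is at a partial best response in every coordinate, the profile is an equilibrium; termination on the toy input above is immediate, while the paper's algorithm handles convergence in each coordinate via the tree-metric analysis.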

### Conclusion

Our work is the first to study the basic question of efficient equilibrium computation in discrete preference games. We show that despite incentivizing coordination, equilibrium computation is in general PLS-hard. However, with restrictions on the metric space, an equilibrium can be computed in polynomial time, even for very general neighbourhood graphs. Our work is a first step, and leaves open many interesting problems. As an example, for what other metric spaces can we find an equilibrium efficiently? Another interesting direction would be to place restrictions on the neighbourhood graph to better represent real-life social networks, and study whether these make equilibrium computation any easier. With the growing popularity of this and other models of opinion formation, we feel these are important, fundamental questions.

### Acknowledgement

We thank Harit Vishwakarma and Rakesh Pimplikar for interesting discussions at the initial stages of this project.

## References

• [1] Daron Acemoglu, Munther A Dahleh, Ilan Lobel, and Asuman Ozdaglar. Bayesian learning in social networks. The Review of Economic Studies, 78(4):1201–1236, 2011.
• [2] Krzysztof R. Apt, Bart de Keijzer, Mona Rahn, Guido Schäfer, and Sunil Simon. Coordination games on graphs. Int. J. Game Theory, 46(3):851–877, 2017.
• [3] Krzysztof R. Apt, Sunil Simon, and Dominik Wojtczak. Coordination games on directed graphs. In Proceedings Fifteenth Conference on Theoretical Aspects of Rationality and Knowledge, TARK 2015, Carnegie Mellon University, Pittsburgh, USA, June 4-6, 2015, pages 67–80, 2015.
• [4] Vincenzo Auletta, Ioannis Caragiannis, Diodato Ferraioli, Clemente Galdi, and Giuseppe Persiano. Generalized discrete preference games. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pages 53–59, 2016.
• [5] Venkatesh Bala and Sanjeev Goyal. Learning from neighbours. The Review of Economic Studies, 65(3):595–621, 1998.
• [6] Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. Science, 286(5439):509–512, 1999.
• [7] David Bindel, Jon M. Kleinberg, and Sigal Oren. How bad is forming your own opinion? Games and Economic Behavior, 92:248–265, 2015.
• [8] Yuri Boykov, Olga Veksler, and Ramin Zabih. Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell., 23(11):1222–1239, 2001.
• [9] Yang Cai and Constantinos Daskalakis. On minmax theorems for multiplayer games. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, San Francisco, California, USA, January 23-25, 2011, pages 217–234, 2011.
• [10] F. Chierichetti, J. Kleinberg, and S. Oren. On discrete preferences and coordination. In Proceedings of the 14th ACM Conference on Electronic Commerce (ACM EC), pages 233–250, 2013.
• [11] Flavio Chierichetti, Jon M. Kleinberg, and Sigal Oren. On discrete preferences and coordination. J. Comput. Syst. Sci., 93:11–29, 2018.
• [12] Peter Clifford and Aidan Sudbury. A model for spatial conflict. Biometrika, 60(3):581–588, 1973.
• [13] Constantinos Daskalakis, Paul W. Goldberg, and Christos H. Papadimitriou. The complexity of computing a Nash equilibrium. Commun. ACM, 52(2):89–97, 2009.
• [14] Pedro Domingos and Matt Richardson. Mining the network value of customers. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’01), pages 57–66. ACM, 2001.
• [15] Robert Elsässer and Tobias Tscheuschner. Settling the complexity of local max-cut (almost) completely. In ICALP, 2011.
• [16] Alex Fabrikant, Christos H. Papadimitriou, and Kunal Talwar. The complexity of pure Nash equilibria. In Proceedings of the 36th Annual ACM Symposium on Theory of Computing, Chicago, IL, USA, June 13-16, 2004, pages 604–612, 2004.
• [17] Diodato Ferraioli, Paul W. Goldberg, and Carmine Ventre. Decentralized dynamics for finite opinion games. Theor. Comput. Sci., 648:96–115, 2016.
• [18] B. Golub and M.O. Jackson. Naïve learning in social networks: Convergence, influence, and the wisdom of crowds. American Economic Journal: Microeconomics, 2(1):112–149, 2010.
• [19] David S. Johnson, Christos H. Papadimitriou, and Mihalis Yannakakis. How easy is local search? J. Comput. Syst. Sci., 37(1):79–100, 1988.
• [20] David Kempe, Jon M. Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. Theory of Computing, 11:105–147, 2015.
• [21] D. Krackhardt. A plunge into networks. Science, 326:47–48, 2009.
• [22] Alejandro A. Schäffer and Mihalis Yannakakis. Simple local search problems that are hard to solve. SIAM J. Comput., 20(1):56–87, 1991.
• [23] Duncan J Watts and Steven H Strogatz. Collective dynamics of ‘small-world’ networks. Nature, 393(6684):440, 1998.
• [24] Mehmet Ercan Yildiz, Asuman E. Ozdaglar, Daron Acemoglu, Amin Saberi, and Anna Scaglione. Binary opinion dynamics with stubborn agents. ACM Trans. Economics and Comput., 1(4), 2013.