On Privatizing Equilibrium Computation in Aggregate Games over Networks

December 13, 2019 · Shripad Gade et al. · University of Illinois at Urbana-Champaign

We propose a distributed algorithm to compute an equilibrium in aggregate games where players communicate over a fixed undirected network. Our algorithm exploits correlated perturbation to obfuscate information shared over the network. We prove that our algorithm does not reveal private information of players to an honest-but-curious adversary who monitors several nodes in the network. In contrast with differential privacy based algorithms, our method does not sacrifice accuracy of equilibrium computation to provide privacy guarantees.

1 Introduction

Aggregate games are non-cooperative games in which a player’s payoff or cost depends on her own actions and the sum total of the actions taken by other players. In a Cournot oligopoly, for example, firms compete to supply a product in a market with price-responsive demand, each with the goal of maximizing profit. A firm’s profit depends on her production cost as well as the market price, where the latter only depends on the aggregate quantity of the product offered in the market by all firms. Aggregate games are widely studied in the literature, e.g., see Novshek (1985); Jensen (2010). Many strategic interactions in practice admit an aggregate game model, e.g., Cournot competition models for wholesale electricity markets in Willems et al. (2009); Cai et al. (2019); Cherukuri and Cortés (2019), supply function competition in general economies in Jensen (2010), communication networks in Teng et al. (2019); Koskie and Gajic (2005), and common agency games in Martimort and Stole (2011). Aggregate games are often potential games, and a pure-strategy Nash equilibrium can be guaranteed to exist. In this paper, we present an algorithm for networked players to compute such an equilibrium in a distributed fashion while maintaining the privacy of players’ cost structures.

Players in a networked game can only communicate with neighboring players in a communication graph. Distributed algorithms for computing Nash equilibria in networked games have a rich literature, e.g., see Koshal et al. (2016); Salehisadaghiani and Pavel (2018); Ye and Hu (2017); Tatarenko et al. (2018); Parise et al. (2015). The obvious difficulty in computing an equilibrium strategy arises from a player's inability to observe the aggregate decision. Naturally, distributed Nash equilibrium computation proceeds via iterative estimation of the aggregate decision followed by local payoff maximization (or cost minimization) with the current aggregate estimate.

Koshal et al. (2016); Parise et al. (2015) exploit consensus-based averaging, Koshal et al. (2016); Salehisadaghiani and Pavel (2018) explore gossip-based averaging, and Tatarenko et al. (2018) employ gradient play with acceleration for aggregate estimation over networks.

1.1 Our Contributions

Existing algorithms for distributed equilibrium computation were not designed with privacy in mind. We show in Section 2.5 that an honest-but-curious adversary can compromise a few nodes in the network and observe the sequence of estimates to infer other players’ payoff or cost structures under the algorithm in Koshal et al. (2016). In other words, the information exchange that enables distributed equilibrium computation can leak players’ sensitive private information to adversaries.

Distributed equilibrium computation algorithms require players to share local aggregate estimates so that each player can update her own action. Our proposed algorithm obfuscates these local aggregate estimates before they are shared with neighbors. In the obfuscation step, each player adds correlated perturbations to her outgoing aggregate estimates, designed so that the perturbations added by each player sum to zero. Each player then averages the received perturbed aggregate estimates and uses the result to update her strategy via a local projected gradient step.
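To illustrate the local balance property of the perturbations, here is a minimal sketch in Python; the function name, the uniform distribution, and the bound are illustrative assumptions rather than part of our protocol.

```python
import numpy as np

def zero_sum_perturbations(num_neighbors, bound, rng):
    """Draw one perturbation per outgoing message so that they sum to zero.

    A simple construction: draw the first (num_neighbors - 1) values uniformly
    in [-bound, bound] and set the last one to the negative of their sum.
    """
    d = rng.uniform(-bound, bound, size=num_neighbors - 1)
    return np.append(d, -d.sum())

rng = np.random.default_rng(0)
d = zero_sum_perturbations(num_neighbors=3, bound=1.0, rng=rng)
print(d, d.sum())  # the perturbations cancel: the sum is (numerically) zero
```

Any construction with this zero-sum property suffices; Section 3 points to secure multiparty computation protocols as one way to generate such perturbations.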

Our main result (Theorem 1) shows that obfuscation via correlated perturbations prevents an adversary from accurately learning cost structures, provided the network satisfies appropriate connectivity conditions. Moreover, players converge to the exact Nash equilibrium asymptotically. In other words, we simultaneously achieve privacy and accuracy in distributed Nash computation for aggregate games. This is in sharp contrast to differentially private algorithms, where trade-offs between accuracy and privacy guarantees are fundamental, e.g., see Han et al. (2016).

Simulations in Section 4 validate our results and corroborate our intuition that obfuscation slows down but does not impede the convergence of the algorithm.

2 Equilibrium Computation in Aggregate Games and the Lack of Privacy

We begin by introducing a networked aggregate game. We then present an adversary model and show that prior distributed equilibrium computation algorithms leak private information of players. This exposition motivates the development of privacy-preserving algorithms for equilibrium computation in the next section.

2.1 The Networked Aggregate Game Model

Consider a game among $N$ players that communicate over a fixed undirected network with reliable, lossless links. Model this communication network by a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where each node in $\mathcal{V} = \{1, \ldots, N\}$ denotes a player. Two players $i$ and $j$ can communicate with each other if and only if they share an edge in $\mathcal{G}$, denoted $(i, j) \in \mathcal{E}$. Call $\mathcal{N}_i$ the set of neighbors of node $i$, where $i \in \mathcal{N}_i$ by definition.

Player $i$ takes actions in a convex compact set $\mathcal{X}_i \subset \mathbb{R}$, where $\mathbb{R}$ denotes the set of real numbers. Define $\bar{\mathcal{X}}$ as the Minkowski (set) sum of the $\mathcal{X}_i$'s and

$\bar{x} := \sum_{i=1}^{N} x_i$ as the aggregate action of all players. For convenience, define $\mathcal{X} := \mathcal{X}_1 \times \cdots \times \mathcal{X}_N$. We assume these sets are non-empty. For an action profile $x = (x_1, \ldots, x_N) \in \mathcal{X}$, player $i$ incurs a cost that takes the form $c_i(x_i, \bar{x})$. This defines an aggregate game in that the actions of other players affect player $i$ only through the sum of the actions of all players, $\bar{x}$.

Each player $i$ thus seeks to solve

$\min_{x_i \in \mathcal{X}_i} \; c_i\big(x_i, \; x_i + \textstyle\sum_{j \neq i} x_j\big). \qquad (1)$

For each $i$, assume that $c_i$ is continuously differentiable in its arguments over a domain that contains $\mathcal{X}_i \times \bar{\mathcal{X}}$. Furthermore, for each $i$, let $c_i(x_i, u)$ be convex in $x_i$ over $\mathcal{X}_i$ and the gradient $\nabla_{x_i} c_i(x_i, u)$ be uniformly $L$-Lipschitz in $u$, i.e., there exists $L > 0$ such that

$\| \nabla_{x_i} c_i(x_i, u) - \nabla_{x_i} c_i(x_i, v) \| \le L \| u - v \| \qquad (2)$

for all $x_i$ in $\mathcal{X}_i$, $u, v$ in $\bar{\mathcal{X}}$. Throughout, $\|\cdot\|$ stands for the 2-norm of its argument. Define the gradient map

$\phi(x) := \big( \nabla_{x_1} c_1(x_1, \bar{x}), \ldots, \nabla_{x_N} c_N(x_N, \bar{x}) \big) \qquad (3)$

for $x \in \mathcal{X}$. Assume throughout that $\phi$ is strictly monotone over $\mathcal{X}$, i.e.,

$\langle \phi(x) - \phi(y), \, x - y \rangle > 0 \qquad (4)$

for all $x, y \in \mathcal{X}$ with $x \neq y$. Denote this game in the sequel by $\Gamma$.

To provide a concrete example, consider the well-studied Nash-Cournot game (see Fudenberg and Tirole (1991)) among $N$ suppliers competing to offer into a market for a single commodity, where the price varies with the aggregate supply $\bar{x}$ as $P(\bar{x})$. Supplier $i$ offers to produce an amount $x_i$ of goods within her production capability, modeled as a capacity interval $\mathcal{X}_i \subset \mathbb{R}_{\ge 0}$. Here $\mathbb{R}_{\ge 0}$ denotes the set of nonnegative real numbers. To produce $x_i$, supplier $i$ incurs a cost of $C_i(x_i)$, where $C_i$ is increasing, convex and differentiable. Each supplier seeks to maximize her profit, or equivalently, minimize her loss, which is her production cost less her revenue from sales.
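In the notation reconstructed above, with $C_i$ for the production cost and $P$ for the inverse-demand (price) function, this loss is cost minus revenue; the linear price curve below is shown only as a familiar concrete choice, not as the authors' specification:

$c_i(x_i, \bar{x}) \;=\; C_i(x_i) \;-\; x_i\, P(\bar{x}), \qquad \text{for example } P(\bar{x}) = a - b\,\bar{x} \text{ with } a, b > 0.$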

2.2 Equilibrium Definition and Existence

An action profile $x^\star \in \mathcal{X}$ defines a Nash equilibrium of $\Gamma$ in pure strategies if

$c_i\big(x_i^\star, \bar{x}^\star\big) \le c_i\big(x_i, \; x_i + \textstyle\sum_{j \neq i} x_j^\star\big)$

for all $x_i \in \mathcal{X}_i$ and all $i \in \mathcal{V}$.

The networked aggregate game, as described above, always admits a unique pure strategy Nash equilibrium. See Theorem 2.2.3 in Facchinei and Pang (2007) for details. Given that an equilibrium always exists, prior literature has studied distributed algorithms for players to compute such an equilibrium.

2.3 Prior Algorithms for Distributed Nash Computation

We now describe the distributed algorithm in Koshal et al. (2016) for equilibrium computation of $\Gamma$. In Section 2.5, we demonstrate that adversarial players can infer private information about the cost structures $c_i$ from observing a subset of the variables during equilibrium computation with that algorithm. While we only study the algorithm in Koshal et al. (2016), our analysis can be extended to those presented in Salehisadaghiani and Pavel (2018); Ye and Hu (2017); Tatarenko et al. (2018); Parise et al. (2015).

Recall that players in $\Gamma$ do not have access to the aggregate decision. To allow equilibrium computation, let each player $i$ at iteration $k$ maintain an estimate $v_i(k)$ of the average aggregate decision, initialized as $v_i(0) = x_i(0)$ for each player $i$. At discrete time steps $k = 0, 1, 2, \ldots$, each player transmits her own estimate of the aggregate decision to her neighbors and updates her own action as

$\hat{v}_i(k) = \sum_{j \in \mathcal{N}_i} W_{ij}\, v_j(k)$ (5a)
$x_i(k+1) = \Pi_{\mathcal{X}_i}\big[ x_i(k) - \alpha_k \nabla_{x_i} c_i\big(x_i(k),\, N \hat{v}_i(k)\big) \big]$ (5b)
$v_i(k+1) = \hat{v}_i(k) + x_i(k+1) - x_i(k)$ (5c)

Here, $\Pi_{\mathcal{X}_i}$ stands for the projection onto $\mathcal{X}_i$, and $\alpha_k$ is a common learning rate for all players.

The algorithm has three steps. First, player $i$ computes a weighted average $\hat{v}_i(k)$ of the estimates of the aggregate received from her neighbors in (5a), where $W$ is a symmetric doubly stochastic weighting matrix. The sparsity pattern of the matrix $W$ follows that of graph $\mathcal{G}$, i.e., $W_{ij} > 0$ only if $(i, j) \in \mathcal{E}$ or $i = j$, and $W_{ij} = 0$ otherwise.

Second, player $i$ performs a projected gradient update in (5b), utilizing the weighted average $\hat{v}_i(k)$ of the local aggregate estimates in lieu of the true aggregate decision. Finally, she updates her own estimate of the aggregate average in (5c) based on her local decision $x_i(k)$ and its update $x_i(k+1)$.
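The sketch below gives one possible reading of iteration (5) for scalar actions; the Metropolis-style weight construction and the $1/(k+1)$ step size are illustrative choices consistent with, but not prescribed by, the description above.

```python
import numpy as np

def metropolis_weights(adj):
    """Symmetric, doubly stochastic weights with the sparsity pattern of the graph."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def non_private_nash(adj, grad, project, x0, steps=2000):
    """One reading of iteration (5): consensus on the average action + gradient play.

    grad[i](x_i, s) is the partial derivative of c_i in x_i when the aggregate
    is estimated to be s; project[i] is the Euclidean projection onto X_i.
    """
    n = len(x0)
    W = metropolis_weights(adj)
    x = np.array(x0, dtype=float)
    v = x.copy()                           # v_i(0) = x_i(0)
    for k in range(steps):
        alpha = 1.0 / (k + 1)              # non-summable, square-summable steps
        v_hat = W @ v                      # (5a) weighted averaging of estimates
        x_new = np.array([project[i](x[i] - alpha * grad[i](x[i], n * v_hat[i]))
                          for i in range(n)])   # (5b) projected gradient step
        v = v_hat + (x_new - x)            # (5c) update the local estimate
        x = x_new
    return x
```

For the Cournot example of Section 2.1, grad[i] would be the derivative of supplier $i$'s loss with respect to her own quantity, and project[i] would clip to her production capability.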

2.4 Adversary Model and Privacy Definition

Consider an adversary $\mathcal{A}$ that compromises the players in a set $\mathcal{V}_{\mathcal{A}} \subset \mathcal{V}$. $\mathcal{A}$ is equipped with unbounded storage and computational capabilities, and has access to all information stored, processed locally, and communicated to any compromised player at all times. We define the adversary model using the information available to $\mathcal{A}$.

  1. For a compromised node $j \in \mathcal{V}_{\mathcal{A}}$, $\mathcal{A}$ knows all local information $x_j(k)$, $v_j(k)$, and $\hat{v}_j(k)$, and the information received from the neighbors of $j$, i.e., the estimates $v_l(k)$ for $l \in \mathcal{N}_j$, at each $k$.

  2. $\mathcal{A}$ knows the algorithm for equilibrium computation and its parameters $W$ and $\{\alpha_k\}$.

  3. $\mathcal{A}$ observes the aggregate decision $\bar{x}(k)$ at each $k$.

What does $\mathcal{A}$ seek to infer? The dependency of a player’s cost on her own actions encodes private information. In the Cournot competition example, this dependency is precisely supplier $i$’s production cost $C_i$, information that is business sensitive. $\mathcal{A}$ seeks to exploit the information sequence observed from compromised players to infer the private information of other players. Intuitively, privacy implies the inability of $\mathcal{A}$ to infer private cost functions.

Denote the set of non-adversarial nodes by $\mathcal{V}_{\mathcal{H}} := \mathcal{V} \setminus \mathcal{V}_{\mathcal{A}}$. Call $\mathcal{G}_{\mathcal{H}}$ the restriction of $\mathcal{G}$ to $\mathcal{V}_{\mathcal{H}}$ obtained by deleting the adversarial nodes. See Figure 1 for an illustration. For this example, $\mathcal{A}$ monitors all variables and parameters pertaining to player 5, but seeks to infer the cost functions of players 1 through 4.

Figure 1: Illustration of $\mathcal{G}$ and $\mathcal{G}_{\mathcal{H}}$. Here, $\mathcal{V}_{\mathcal{A}} = \{5\}$ and $\mathcal{V}_{\mathcal{H}} = \{1, 2, 3, 4\}$.

Let $\mathcal{P}$ denote the set of all permutations over the non-adversarial nodes in $\mathcal{V}_{\mathcal{H}}$. Define the collection of games

$\mathcal{C}(\Gamma) := \{ \Gamma_\pi : \pi \in \mathcal{P} \}$, where $\Gamma_\pi$ is obtained from $\Gamma$ by assigning cost $c_{\pi(i)}$ and strategy set $\mathcal{X}_{\pi(i)}$ to each non-adversarial player $i$.

Thus, $\mathcal{C}(\Gamma)$ comprises the games in which the cost functions and strategy sets of non-adversarial players are permuted. All games in $\mathcal{C}(\Gamma)$ have the same aggregate strategy at a Nash equilibrium. Next, we utilize $\mathcal{C}(\Gamma)$ to define privacy.

Definition 1 (Privacy)

Consider a distributed algorithm to compute the Nash equilibrium of $\Gamma$. If the execution observed by the adversary $\mathcal{A}$ is consistent with every game in $\mathcal{C}(\Gamma)$, then the algorithm is private.

We define privacy as the inability of $\mathcal{A}$ to distinguish between games in $\mathcal{C}(\Gamma)$. Even if $\mathcal{A}$ knew all possible costs exactly, which is a tall order, our privacy definition implies that $\mathcal{A}$ cannot associate those costs with specific players.
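One way to phrase this condition, writing $\mathcal{O}(\Gamma, D)$ as a hypothetical shorthand (not notation from the text) for the full sequence of quantities that $\mathcal{A}$ observes when an algorithm that uses an obfuscation sequence $D$, like Algorithm 1 below, runs on game $\Gamma$, is:

$\text{for every } \Gamma' \in \mathcal{C}(\Gamma) \text{ and every admissible } D, \text{ there exists } D' \text{ such that } \mathcal{O}(\Gamma, D) = \mathcal{O}(\Gamma', D').$

This is the form in which the condition is established for Algorithm 1 in Section 5.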

2.5 Privacy Breach in Algorithm (5)

Consider a Cournot competition among 5 players connected according to the graph $\mathcal{G}$ in Figure 1, where $\mathcal{A}$ has compromised player 5. Assume that the equilibrium of the game lies in the interior of each player’s strategy set. Recall that $\mathcal{A}$ stores the observed information at each $k$ and processes it to infer private cost information. We argue how $\mathcal{A}$ can compute the cost functions of the other players up to a constant.

We first show the privacy breach for player 4. $\mathcal{A}$ observes $v_4(k)$ at each $k$. $\mathcal{A}$ uses the estimates she observes, together with the weights $W$, to compute $\hat{v}_4(k)$ from (5a). Moreover, $\mathcal{A}$ uses (5c) to compute the action increment $x_4(k+1) - x_4(k) = v_4(k+1) - \hat{v}_4(k)$.

For large enough $k$, the step size $\alpha_k$ is small enough to ensure that the projection in (5b) is inactive, i.e., the unprojected update already lies in the interior of $\mathcal{X}_4$.

At such large $k$, $\mathcal{A}$ uses (5b) along with the action increment and $\alpha_k$ to calculate the gradient of player 4’s cost at the current iterate.

$\mathcal{A}$ uses information about the structure of the loss function, i.e., that the loss of supplier 4 is her production cost less her revenue, along with the observed aggregate decision, the price function, and the game parameters, to learn player 4’s marginal production cost at the observed points. Several such observations allow $\mathcal{A}$ to learn the private cost up to a constant.
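As a worked illustration of this inference step, in the notation reconstructed above: once the projection in (5b) is inactive, the gradient can be read off from consecutive iterates,

$\nabla_{x_4} c_4\big(x_4(k),\, N\hat{v}_4(k)\big) \;=\; \dfrac{x_4(k) - x_4(k+1)}{\alpha_k},$

and every quantity on the right-hand side is available to $\mathcal{A}$: the step size is known by assumption, and player 4’s iterates can be tracked from the initialization $v_4(0) = x_4(0)$ together with the computed increments.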

We showed the privacy breach for player 4; the same analysis can be used for players 1, 2 and 3 with one additional step. $\mathcal{A}$ observes local estimates that track the average of the players’ actions (Lemma 2 in Koshal et al. (2016)), and from these $\mathcal{A}$ computes the information needed to repeat the argument above. Since this information is available at each $k$, $\mathcal{A}$ uses the same process as above to establish the privacy breach for players 1, 2 and 3.

For algorithm (5), $\mathcal{A}$ thus uncovers all private cost functions in an example aggregate game. Next, we design an algorithm that protects players’ private information, in the sense of Definition 1, against $\mathcal{A}$.

3 Our Algorithm and Its Properties

We propose and analyze Algorithm 1, which computes a Nash equilibrium of $\Gamma$ in a distributed fashion. The main result (Theorem 1) shows that the algorithm asymptotically converges to the equilibrium, while attempts by $\mathcal{A}$ to recover each player’s cost structure remain unsuccessful.

The key idea behind our design is the injection of correlated noise perturbations into the exchange of local estimates of the aggregate decision. Different neighbors of a player receive different estimates of the aggregate decision. The perturbations added by any player sum to zero. While $\mathcal{A}$ may still infer the true aggregate decision, the protocol does not allow him to correctly infer the players’ iterates or the gradients of their costs with respect to their own actions. Our assumption on network connectivity requires that $\mathcal{G}_{\mathcal{H}}$ be connected and not bipartite. Under these conditions, $\mathcal{A}$ cannot monitor all outgoing communication channels of any non-adversarial player. We further show that one can design the noise so that $\mathcal{A}$’s observations are consistent with all games in $\mathcal{C}(\Gamma)$, making it impossible for him to uncover the cost of any specific player.

Throughout, assume that $W$ is a doubly stochastic matrix that follows the sparsity pattern of $\mathcal{G}$. Further, assume that all non-diagonal, non-zero entries of $W$ are identical and equal to a common value $w > 0$.

At each time $k$, player $i$ generates correlated random numbers $d_{ij}(k)$, one for each neighbor $j \in \mathcal{N}_i \setminus \{i\}$, satisfying the local balance condition in (7). Player $i$ then adds $d_{ij}(k)$ to $v_i(k)$ to generate $\tilde{v}_{ij}(k)$, the estimate sent by player $i$ to player $j$, according to (8). Let $D$ denote the collection of the $d_{ij}(k)$’s for all players across time. Call $D$ the obfuscation sequence.

Each node computes a weighted average of the received perturbed aggregate estimates to construct its own estimate of the aggregate decision, $\hat{v}_i(k)$, following (9). Players perform projected gradient descent using the local aggregate estimate $\hat{v}_i(k)$, the gradient of the cost function $c_i$, and a non-summable, square-summable step size $\alpha_k$ (see (6)) to arrive at an improved local decision $x_i(k+1)$ using (10). Players then update their local aggregate estimates using the change in the local decision, per (11).

1: Input: Player $i$ knows $c_i$, $\mathcal{X}_i$, and $\mathcal{N}_i$. Consider a non-increasing, non-negative step-size sequence $\{\alpha_k\}$ that satisfies
$\sum_{k=0}^{\infty} \alpha_k = \infty, \qquad \sum_{k=0}^{\infty} \alpha_k^2 < \infty.$ (6)
2: Initialize: For each $i \in \mathcal{V}$, $v_i(0) = x_i(0)$.
3:
4: For $k = 0, 1, 2, \ldots$, players $i \in \mathcal{V}$ execute in parallel:
5:
6: Construct random numbers $d_{ij}(k)$, $j \in \mathcal{N}_i \setminus \{i\}$, satisfying
$\sum_{j \in \mathcal{N}_i \setminus \{i\}} d_{ij}(k) = 0.$ (7)
7: Send obfuscated aggregate estimates $\tilde{v}_{ij}(k)$ to each $j \in \mathcal{N}_i \setminus \{i\}$, where
$\tilde{v}_{ij}(k) = v_i(k) + d_{ij}(k).$ (8)
8: Compute the weighted average of the received estimates as
$\hat{v}_i(k) = W_{ii}\, v_i(k) + \sum_{j \in \mathcal{N}_i \setminus \{i\}} W_{ij}\, \tilde{v}_{ji}(k).$ (9)
9: Perform a projected gradient descent step as
$x_i(k+1) = \Pi_{\mathcal{X}_i}\big[ x_i(k) - \alpha_k \nabla_{x_i} c_i\big(x_i(k),\, N \hat{v}_i(k)\big) \big].$ (10)
10: Update the local aggregate estimate as
$v_i(k+1) = \hat{v}_i(k) + x_i(k+1) - x_i(k).$ (11)
Algorithm 1: Private Distributed Nash Computation
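A compact simulation of Algorithm 1's main loop, under the same caveats as the earlier sketch (reconstructed notation, illustrative noise bound and step size), might look as follows; the weight matrix W passed in should satisfy the assumptions above (doubly stochastic, sparsity pattern of $\mathcal{G}$, identical off-diagonal entries).

```python
import numpy as np

def private_nash(W, grad, project, x0, noise_bound=1.0, steps=2000, seed=0):
    """Sketch of Algorithm 1: zero-sum noise on outgoing estimates, then (9)-(11)."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    x = np.array(x0, dtype=float)
    v = x.copy()                                   # v_i(0) = x_i(0)
    nbrs = [np.flatnonzero((W[i] > 0) & (np.arange(n) != i)) for i in range(n)]
    for k in range(steps):
        alpha = 1.0 / (k + 1)                      # satisfies (6)
        msg = np.tile(v, (n, 1)).T                 # msg[i, j]: estimate i sends to j
        for i in range(n):                         # (7)-(8): locally balanced noise
            d = rng.uniform(-noise_bound, noise_bound, size=len(nbrs[i]) - 1)
            msg[i, nbrs[i]] += np.append(d, -d.sum())
        v_hat = np.array([W[i, i] * v[i] + W[i, nbrs[i]] @ msg[nbrs[i], i]
                          for i in range(n)])      # (9) average received estimates
        x_new = np.array([project[i](x[i] - alpha * grad[i](x[i], n * v_hat[i]))
                          for i in range(n)])      # (10) projected gradient step
        v = v_hat + (x_new - x)                    # (11) update local estimate
        x = x_new
    return x
```

Because each player's outgoing perturbations sum to zero and all non-zero off-diagonal weights are equal (the assumption stated before Algorithm 1), the obfuscation leaves the network-wide sum of the estimates unchanged, which is the intuition behind the accuracy claim in Theorem 1.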

The properties of our algorithm are summarized in the next result. The proof is included in Section 5.

Theorem 1

Consider a networked aggregate game $\Gamma$ as defined in Section 2.1. If $\mathcal{G}_{\mathcal{H}}$ is connected and not bipartite, then Algorithm 1 is private. Moreover, if the obfuscation sequence $D$ is bounded, then Algorithm 1 asymptotically converges to a Nash equilibrium of the game.

The convergence properties largely mimic those of distributed descent algorithms for equilibrium computation. The locally balanced and bounded nature of the designed noise, together with the decaying step sizes, ultimately drowns out the effect of the noise. Balanced yet bounded perturbations can be computed using the secure multiparty computation protocols described in Gade and Vaidya (2016, 2018b); Abbe et al. (2012). Our assumption on $\mathcal{G}_{\mathcal{H}}$ is such that, given two games from $\mathcal{C}(\Gamma)$ and an obfuscation sequence $D$, we are able to design a different obfuscation sequence $D'$ such that the execution of the first game perturbed with $D$ generates observables identical to those of the second game perturbed with $D'$. The connectivity among non-adversarial players in $\mathcal{G}_{\mathcal{H}}$ is key to the success of our algorithm design. Convergence speed depends on the size of the perturbations. We investigate this link experimentally in Section 4, but leave an analytical characterization of this relationship for future work. In what follows, we compare our algorithm and its properties to other protocols for privacy preservation.

3.0.1 Comparison with Differentially Private Algorithms:

Differentially private algorithms for computing Nash equilibria of potential games have been studied in Dong et al. (2015); Cummings et al. (2015). The algorithm in Dong et al. (2015) executes a differentially private distributed mirror-descent algorithm to optimize the potential function. Experiments reveal a trade-off between accuracy and privacy parameters, i.e., the more privacy one seeks, the less accurate the final output of the algorithm becomes. Such a trade-off is a hallmark of differentially private algorithms, e.g., see Han et al. (2016). Our algorithm, on the other hand, does not suffer from that limitation. Notice that our definition of privacy is binary in nature; that is, an algorithm for equilibrium computation is either private or not. We aim to explore properties of our algorithmic architecture under notions of privacy that allow for degrees of privacy, and to compare them with differentially private algorithms.

3.0.2 Comparison to Cryptographic Methods:

Authors in Lu and Zhu (2015) use secure multiparty computation to compute Nash equilibrium. Such an approach guarantees privacy in an information theoretic sense. This protocol provides privacy guarantees along with accuracy, similar to our algorithmic framework. However, cryptographic protocols are typically computationally expensive for large problems (see Section V in Zhang et al. (2019)), and are often difficult to implement in distributed settings.

3.0.3 Comparison to Private Distributed Optimization:

Our earlier work in Gade and Vaidya (2018b) motivated the design of Algorithm 1. While our prior work seeks privacy-preserving distributed protocols for cooperatively solving optimization problems, the current paper focuses on non-cooperative games. The protocols in Gade and Vaidya (2018b) advocate the use of perturbations that cancel over the network. Such a design is not appropriate for networked games for two reasons. First, players must agree on the noise design, a premise that requires cooperation. Second, perturbing the local cost functions $c_i$, even if the changes cancel in aggregate, can alter the equilibrium of the game.

3.0.4 Privacy in Client-Server Architecture:

This work considers players communicating over a peer-to-peer network. However, engineered distributed systems often have a client-server architecture. The presence of a central server allows for easy aggregate computation; however, privacy is sacrificed if the parameter server is adversarial. We have investigated privacy preservation for distributed optimization in this architecture in Gade and Vaidya (2018a), where we use multiple central servers instead of one, a subset of which can be adversarial. We believe our algorithm design and analysis in Gade and Vaidya (2018a) can be extended to handle private equilibrium computation for aggregate games in the client-server framework.

4 A Numerical Experiment

Consider a Cournot competition among players communicating over the network shown in Figure 2. Player $i$’s cost is her production cost less her revenue from sales, as in Section 2.1. The production cost coefficients are drawn randomly for each player, and the strategy sets are identical across players. A common off-diagonal weight parameterizes the matrix $W$, and the price varies with the aggregate demand.

Figure 2: Communication network for the Cournot example.
Figure 3: Iterates generated by Algorithm 1 versus the algorithm in (5).

We initialize the algorithm identically for all players. We use the secure multiparty computation technique in Gade and Vaidya (2018b) to design an obfuscation sequence that satisfies (7) and remains bounded.

The trajectory of the distance of the players’ iterates from the Nash equilibrium, averaged across players, is shown in Figure 3.

Our algorithm converges to the equilibrium, similar to the non-private algorithm in (5). However, its convergence is slower, as seen in Figure 3. The slowdown is especially pronounced for large perturbations and is an artifact of the noise added by players to obfuscate information from the adversary. Thus, our algorithm design achieves privacy and asymptotic convergence to equilibrium, but sacrifices speed of convergence. An analytical characterization of the slowdown is an interesting direction for future work.
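A self-contained way to set up a comparison of this kind, with made-up coefficients since the paper's values are not recoverable from this text, is sketched below; it reuses the non_private_nash and private_nash functions from the earlier sketches, and the graph, cost coefficients, price curve, and capacity are all illustrative assumptions.

```python
import numpy as np

# Five-player ring topology; purely illustrative, not the paper's network.
adj = np.array([[0, 1, 0, 0, 1],
                [1, 0, 1, 0, 0],
                [0, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [1, 0, 0, 1, 0]])
n = adj.shape[0]

# Assumed quadratic production costs C_i(x) = a_i*x + b_i*x**2 and linear
# inverse demand P(s) = 50 - s, where s is the aggregate supply.
rng = np.random.default_rng(1)
a = rng.uniform(1.0, 5.0, size=n)
b = rng.uniform(0.5, 1.5, size=n)

def make_grad(i):
    # d/dx_i [ C_i(x_i) - x_i * P(s) ] with P(s) = 50 - s.
    return lambda xi, s: a[i] + 2 * b[i] * xi - (50 - s) + xi

grad = [make_grad(i) for i in range(n)]
project = [lambda z: np.clip(z, 0.0, 20.0)] * n      # X_i = [0, 20] for all i

w = 0.2                                              # common off-diagonal weight
W = w * adj + np.diag(1.0 - w * adj.sum(axis=1))     # doubly stochastic, equal weights

x_plain = non_private_nash(adj, grad, project, x0=np.ones(n))
x_priv = private_nash(W, grad, project, x0=np.ones(n), noise_bound=2.0)
print(np.round(x_plain, 3))
print(np.round(x_priv, 3))   # the two action profiles should roughly agree
```

Increasing noise_bound in this sketch is a simple way to probe the slowdown discussed above.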

5 Proof of Theorem 1

5.1 Proving Algorithm 1 is Private

Recall that $\mathcal{G}_{\mathcal{H}}$ is the graph over the non-adversarial nodes $\mathcal{V}_{\mathcal{H}}$. Suppose $\mathcal{G}_{\mathcal{H}}$ has $H$ nodes. Let $p$ and $q$ be two players in $\mathcal{V}_{\mathcal{H}}$, and let $\Gamma$ and $\Gamma'$ be two games in $\mathcal{C}(\Gamma)$ such that $\Gamma'$ is identical to $\Gamma$, except that the costs and strategy sets of players $p$ and $q$ are switched:

$c'_p = c_q, \quad c'_q = c_p, \quad \mathcal{X}'_p = \mathcal{X}_q, \quad \mathcal{X}'_q = \mathcal{X}_p.$

For convenience, define $\pi$ as the permutation that encodes the switch, i.e., $\pi(p) = q$, $\pi(q) = p$, and $\pi(i) = i$ otherwise.

Consider the execution of Algorithm 1 on $\Gamma$, given by the sequence of all iterates and transmitted messages, with obfuscation sequence $D$ used in (8), initialized with actions $x(0)$. We prove that there exists an obfuscation sequence $D'$ such that the execution of Algorithm 1 on $\Gamma'$ with $D'$, starting from the correspondingly permuted initial actions, is identical to the execution on $\Gamma$ from $\mathcal{A}$'s perspective. An arbitrary permutation over $\mathcal{V}_{\mathcal{H}}$ is equivalent to a composition of a sequence of switches between two players in $\mathcal{V}_{\mathcal{H}}$. As a result, the algorithm executions on all games in $\mathcal{C}(\Gamma)$ can be made to appear identical from $\mathcal{A}$'s standpoint, proving the privacy of Algorithm 1.

In the rest of the proof, we show how to construct $D'$ so that the executions on $\Gamma$ and $\Gamma'$ appear identical to $\mathcal{A}$.

The adversary observes all information local to the compromised nodes at each $k$. Consequently, the perturbations utilized by the compromised nodes are the same in both executions,

$d'_{jl}(k) = d_{jl}(k) \quad \text{for all } j \in \mathcal{V}_{\mathcal{A}}, \ l \in \mathcal{N}_j \setminus \{j\}. \qquad (12)$

Moreover, $\mathcal{A}$ observes the messages received by the compromised nodes, and hence all messages received by a compromised node $j$ from its neighbors, denoted $\tilde{v}_{lj}(k)$ for $l \in \mathcal{N}_j \setminus \{j\}$, are identical in both executions, i.e.,

$\tilde{v}'_{lj}(k) = \tilde{v}_{lj}(k) \quad \text{for all } j \in \mathcal{V}_{\mathcal{A}}, \ l \in \mathcal{N}_j \setminus \{j\}. \qquad (13)$

The adversary observes the aggregate decision at each $k$. Enforcing that the remaining observables also coincide in both executions results in a set of linear constraints on the perturbations in $D'$. We have

(14)

The obfuscation used by each player is locally balanced, and hence we have

$\sum_{l \in \mathcal{N}_i \setminus \{i\}} d'_{il}(k) = 0 \quad \text{for all } i \in \mathcal{V}_{\mathcal{H}}. \qquad (15)$

Let $\delta(k)$ be a vector that stacks the unknown perturbations $d'_{il}(k)$ for the edges of $\mathcal{G}_{\mathcal{H}}$. In the sequel, let $\mathbf{1}$ denote a vector of ones of appropriate dimension. For the graph $\mathcal{G}_{\mathcal{H}}$, define its oriented incidence matrix $B$, adjacency matrix $A_{\mathcal{H}}$, degree matrix $\Lambda$, and normalized graph Laplacian $\mathcal{L} = I - \Lambda^{-1/2} A_{\mathcal{H}} \Lambda^{-1/2}$. Using the notation $(s)^{+} := \max\{s, 0\}$ and $(s)^{-} := \max\{-s, 0\}$ for a scalar $s$, define $B^{+}$ and $B^{-}$ as the matrices obtained from $B$ by applying the respective operator componentwise. Then, (14)-(15) can be written compactly in matrix form in terms of $B^{+}$ and $B^{-}$.

We prove that,

(16)

to show that the linear system admits at least one solution.

Notice that

(17)

proving that the rows of the coefficient matrix are not linearly independent. Next, we determine its rank. To that end, we have

(18)

where $I$ is the identity matrix. Here, (a) follows from the definition of the normalized Laplacian, and (b) follows from Sylvester’s nullity theorem (see Horn and Johnson (2012)) and the fact that the rank of the graph Laplacian of a connected graph on $H$ nodes is $H - 1$. Furthermore, since $\mathcal{G}_{\mathcal{H}}$ is not bipartite, the eigenvalues of its normalized Laplacian $\mathcal{L}$ are strictly less than 2, according to Lemma 1.7 in Chung and Graham (1997). This implies (c). Thus, (17) and (18) together yield the rank of the coefficient matrix.

For the augmented matrix of this linear system, we have

(19)

In the above relation, the inequality on the left follows from our earlier rank computation. The one on the right follows from the number of rows of the augmented matrix. We demonstrate that the rows of the augmented matrix are linearly dependent to conclude (16). From (17), we deduce

Now, we establish the remaining rank equality to conclude the proof. In the following, $|\mathcal{S}|$ denotes the cardinality of a set $\mathcal{S}$.

where we have used the identities above. Utilizing these relations for all non-adversarial nodes, we simplify as