Fully distributed Nash equilibrium seeking over time-varying communication networks with linear convergence rate

03/22/2020 ∙ by Mattia Bianchi, et al. ∙ Delft University of Technology

We design a distributed algorithm for learning Nash equilibria over time-varying communication networks in a partial-decision information scenario, where each agent can access its own cost function and local feasible set, but can only observe the actions of some neighbors. Our algorithm is based on projected pseudo-gradient dynamics, augmented with consensual terms. Under strong monotonicity and Lipschitz continuity of the game mapping, we provide a very simple proof of linear convergence, based on a contractivity property of the iterates. Compared to similar solutions proposed in the literature, we also allow for time-varying communication and derive tighter bounds on the step sizes that ensure convergence. In fact, our numerical simulations show that our algorithm outperforms the existing gradient-based methods. Finally, to relax the assumptions on the network structure, we propose a different pseudo-gradient algorithm, which is guaranteed to converge over time-varying balanced directed graphs.




I Introduction

NE problems arise in several network systems, where multiple selfish decision-makers, or agents, aim at optimizing their individual, yet inter-dependent, objective functions. Engineering applications include communication networks [1], demand-side management in the smart grid [2], charging of electric vehicles [3] and demand response in competitive markets [4]. From a game-theoretic perspective, the challenge is to assign the agents behavioral rules that eventually ensure the attainment of a NE, a joint action from which no agent has an incentive to unilaterally deviate.
Literature review: Typically, NE seeking algorithms are designed under the assumption that each agent can access the decisions of all the competitors [5], [6], [7]. Such a hypothesis, referred to as full-decision information, requires the presence of a coordinator that broadcasts the data to the network, and is impractical for some applications [8], [9]. One example is the Nash-Cournot competition model described in [10], where the profit of each of a group of firms depends not only on its own production, but also on the total amount of sales, a quantity not directly accessible by any of the firms. Therefore, in recent years, there has been increasing attention to fully distributed algorithms that compute a NE relying on local information only. One solution is offered by payoff-based schemes [11], [12], where the agents are not required to communicate with each other, but must be able to measure their own cost functions. Instead, in this paper, we are interested in a different, model-based approach. Specifically, we consider the so-called partial-decision information scenario, where the agents agree on sharing their strategies with some neighbors over a network; based on the exchanged information, they can estimate and eventually reconstruct the actions of all the competitors. This setup has only been introduced very recently. In particular, most of the available results resort to (projected) gradient and consensus dynamics, both in continuous time [13], [14], and in discrete time. For the discrete-time case, fixed-step algorithms were proposed in [15], [16], [17] (the latter for generalized games), all exploiting a certain restricted monotonicity property. Alternatively, the authors of [18] developed a gradient-play scheme by leveraging contractivity properties of doubly stochastic matrices. Nevertheless, in all these approaches, theoretical guarantees are provided only for step sizes that are typically very small, affecting the speed of convergence. Furthermore, all the cited methods are designed for the case of a time-invariant, undirected network. To the best of our knowledge, switching communication topologies have only been addressed with diminishing step sizes. For instance, the early work [10] considered aggregative games over time-varying undirected graphs. This result was extended by the authors of [19] to games with affine coupling constraints, based on dynamic tracking and on the forward-backward splitting [20, §26.5]. In [21], an asynchronous gossip algorithm was presented to seek a NE over directed graphs. The main drawback is that vanishing step sizes typically result in slow convergence.
Contribution: Motivated by the above, in this paper we present the first fixed-step NE seeking algorithms for strongly monotone games over time-varying communication networks. Our novel contributions are summarized as follows:


  • We propose a simple, fully distributed, projected gradient-play algorithm that is guaranteed to converge with linear rate when the network adjacency matrix is doubly stochastic. With respect to the formulation in [18], we consider a time-varying communication topology and allow for constrained action sets. Moreover, differently from [18], we provide an upper bound on the step size that is independent of the number of agents (§III);

  • We show via numerical simulations that, even in the case of fixed networks, our algorithm outperforms the existing pseudo-gradient based dynamics, when the step sizes are set to their theoretical upper bounds (§V);

  • We prove that linear convergence to a NE over time-varying weight-balanced directed graphs can be achieved via a forward-backward algorithm [22, §12.7.2], which has already been studied in [17], [16], but only for the case of fixed undirected networks (§IV).

Basic notation: $\mathbb{N}$ is the set of natural numbers, including $0$. $\mathbb{R}$ ($\mathbb{R}_{\geq 0}$) denotes the set of (nonnegative) real numbers. $\mathbf{1}_n$ ($\mathbf{0}_n$) is the vector of dimension $n$ with all elements equal to $1$ ($0$); $I_n$ denotes the identity matrix of dimension $n$; the subscripts might be omitted when there is no ambiguity. For a matrix $A$, its transpose is $A^\top$, and $[A]_{i,j}$ denotes the element on the $i$-th row and $j$-th column. $A \succ 0$ stands for a symmetric positive definite matrix. $\sigma_{\max}(A) = \sigma_1(A) \geq \dots \geq \sigma_{\min}(A)$ are the singular values of $A$; if $A$ is symmetric, $\lambda_{\max}(A) = \lambda_1(A) \geq \dots \geq \lambda_{\min}(A)$ denote its eigenvalues. $\otimes$ denotes the Kronecker product. $\operatorname{diag}(A_1, \dots, A_N)$ denotes the block diagonal matrix with $A_1, \dots, A_N$ on its diagonal. Given vectors $x_1, \dots, x_N$, $x = \operatorname{col}(x_1, \dots, x_N) = [x_1^\top, \dots, x_N^\top]^\top$. $\|\cdot\|$ denotes the Euclidean vector norm. For a differentiable function $f$, $\nabla f$ denotes its gradient. $\operatorname{proj}_C$ denotes the Euclidean projection onto a closed convex set $C$. An operator $F$ is ($\mu$-strongly) monotone if $\langle F(x) - F(y), x - y \rangle \geq 0$ ($\geq \mu \|x - y\|^2$), for all $x, y$.

II Mathematical setup

We consider a set of agents , where each agent shall choose its strategy (i.e., decision variable) from its local decision set . Let denote the stacked vector of all the agents’ decisions, the overall action space and . The goal of each agent is to minimize its objective function , which depends on both the local variable and on the decision variables of the other agents . The game is then represented by the inter-dependent optimization problems:


The technical problem we consider in this paper is the computation of a NE, as defined next.

Definition 1

A Nash equilibrium is a set of strategies such that, for all :

The following regularity assumptions are common for NE problems, see, e.g., [17, Ass. 1], [16, Ass. 1].

Standing Assumption 1 (Regularity and convexity)

For each , the set is non-empty, closed and convex; is continuous and the function is convex and continuously differentiable for every .

Under Standing Assumption 1, a joint strategy is a NE of the game in (1) if and only if it solves the associated variational inequality (for an operator and a set , the variational inequality VI is the problem of finding a vector such that , for all [22, Def. 1.1.1]) [22, Prop. 1.4.2], or, equivalently, if and only if, for any [22, Prop. 1.5.8],


where is the pseudo-gradient mapping of the game:


A sufficient condition for the existence of a unique NE is the strong monotonicity of the pseudo-gradient [22, Th. 2.3.3], as postulated next. This assumption is always used for (G)NE seeking under partial-decision information with fixed step sizes, e.g., in [16, Ass. 2], [17, Ass. 3] (while it is sometimes replaced by strict monotonicity and compactness of when allowing for vanishing step sizes [10, Ass. 2]). It implies strong convexity of the functions for every , but not necessarily (strong) convexity of in the full argument.

Standing Assumption 2

The pseudo-gradient mapping in (3) is -strongly monotone and -Lipschitz continuous, for some , : for any , and .
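To make the constants concrete, here is a small self-contained sketch (our own hypothetical example, not from the paper): a two-player game with quadratic costs, for which the strong monotonicity and Lipschitz constants (here named `mu` and `L0`) can be computed in closed form, and the classical full-decision projected pseudo-gradient iteration converges to the unique NE.

```python
import numpy as np

# Hypothetical two-player game with quadratic costs:
#   J_i(x) = x_i^2 + x_i*x_{-i} + c_i*x_i,  c = (1, -1).
# The pseudo-gradient stacks the partial gradients: F(x) = A x + c, with
A = np.array([[2.0, 1.0], [1.0, 2.0]])
c = np.array([1.0, -1.0])
F = lambda x: A @ x + c

# F is mu-strongly monotone and L0-Lipschitz continuous, with
mu = np.linalg.eigvalsh((A + A.T) / 2).min()   # mu = 1
L0 = np.linalg.norm(A, 2)                      # L0 = 3 (spectral norm)

# hence the NE is unique; the (here unconstrained) fixed-step
# pseudo-gradient iteration converges to it linearly:
x = np.zeros(2)
for _ in range(200):
    x = x - 0.2 * F(x)
# x converges to x* = (-1, 1), the solution of A x* + c = 0
```

The step size 0.2 is below the classical sufficient bound 2*mu/L0^2-style thresholds only up to constants; here convergence is easy to verify directly since the iteration map I - 0.2*A has spectral radius 0.8.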

In our setup, each agent can only access its own cost function and feasible set . Moreover, agent does not have full knowledge of , and only relies on the information exchanged locally with neighbors over a time-varying directed communication network . The ordered pair belongs to the set of edges if and only if agent can receive information from agent at time . We denote by the weighted adjacency matrix of , with if , otherwise; by and the in-degree and Laplacian matrices of , with ; and by the set of in-neighbors of agent .

Standing Assumption 3

For each , the graph is strongly connected.

Assumption 1

For all , the following hold:

  • Self-loops: for all ;

  • Double stochasticity: , .

Remark 1

Assumption 1(i) is intended just to ease the notation. Instead, Assumption 1(ii) is stronger. It is typically used for networked problems on undirected symmetric graphs, e.g., in [10, Ass. 6], [19, Ass. 3], [18, Ass. 3], justified by the fact that it can be satisfied by assigning the following Metropolis weights to the communication:

In practice, in the case of symmetric communication, it suffices for the agents to exchange their in-degrees with their neighbors at every time step to satisfy Assumption 1(ii), even with a time-varying topology. Therefore, Standing Assumption 3 and Assumption 1 can be easily fulfilled for undirected graphs that are connected at each time step. For directed graphs, given any strongly connected topology, weights can be assigned such that the resulting adjacency matrix (with self-loops) is doubly stochastic, via an iterative distributed process [23]. However, this can be impractical, especially if the network is time-varying.
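As an illustrative sketch (ours, on a hypothetical 5-node undirected ring), the Metropolis rule yields a symmetric doubly stochastic weight matrix using only the degrees of a node and its neighbors:

```python
import numpy as np

def metropolis_weights(adj):
    """Doubly stochastic weights from the Metropolis rule:
    w_ij = 1/(1 + max(d_i, d_j)) for each edge (i, j), and
    w_ii = 1 - sum_{j != i} w_ij (self-loop absorbs the rest)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

# Undirected ring on 5 nodes: each agent only needs its neighbors' degrees.
adj = np.zeros((5, 5))
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1
W = metropolis_weights(adj)
```

Since each node computes its own row from local information only, the rule also works when the topology changes at every time step, as the remark above notes.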

Under Assumption 1, it holds that , for all , where denotes the second largest singular value of . Moreover, for any ,


where is the average of . We will further assume that is bounded away from 1; this automatically holds if the networks are chosen among a finite family.

Assumption 2

There exists such that , for all .

III Distributed Nash equilibrium seeking

In this section, we present a pseudo-gradient algorithm to seek a NE of the game (1) in a fully distributed way. To cope with partial-decision information, each agent keeps an estimate of all other agents' actions. Let , where and is agent 's estimate of agent 's action, for all ; also, . The agents aim at asymptotically reconstructing the true value of the opponents' actions, based on the data received from their neighbors. The procedure is summarized in Algorithm 1. Each agent updates its estimates according to consensus dynamics, then its strategy via a gradient step. We remark that each agent computes the partial gradient of its cost at its local estimate , not at the actual joint strategy .

To write the algorithm in compact form, let ; as in [17, Eq.13-14], let, for all ,


where , ; and . In simple terms, selects the -th dimensional component from an -dimensional vector. Thus, , and . We define the extended pseudo-gradient mapping as


Therefore, Algorithm 1 reads in compact form as:


where and .
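The selection matrices just introduced can be made concrete. The following sketch (our reconstruction of the standard construction, with hypothetical dimensions, since the displayed formulas did not survive extraction) builds a matrix of the form [0 | I_{n_i} | 0] that selects the i-th n_i-dimensional component from an n-dimensional stacked vector:

```python
import numpy as np

def selection_matrix(dims, i):
    """R_i = [0 | I_{n_i} | 0]: selects the i-th n_i-dimensional block
    of an n-dimensional stacked vector, n = n_1 + ... + n_N."""
    n, ni, before = sum(dims), dims[i], sum(dims[:i])
    Ri = np.zeros((ni, n))
    Ri[:, before:before + ni] = np.eye(ni)
    return Ri

dims = [2, 1, 3]               # hypothetical action dimensions n_1, n_2, n_3
x = np.arange(6.0)             # stacked vector col(x_1, x_2, x_3)
R2 = selection_matrix(dims, 1) # picks the second block, x_2
```

Stacking R_1, ..., R_N block-diagonally then extracts each agent's own action from its own estimate, which is how the compact form above is assembled.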

Lemma 1 ([24, Lemma 3])

The mapping in (6) is -Lipschitz continuous, for some .

We are now ready to prove the main result of this section.

Theorem 1

Let Assumptions 1-2 hold and let


If the step size is chosen such that


then, for any initial condition, Algorithm 1 converges to the point , where is the unique NE of the game in (1), with linear rate: for all ,

See Appendix A.

Initialization: for all , set , .
Iterate until convergence: for all ,


  • Distributed averaging:

  • Local variables update:

Algorithm 1 Fully distributed NE seeking
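Since the displayed update equations did not survive extraction, the following is a minimal numerical sketch of our reading of the two steps of Algorithm 1 — distributed averaging of the estimates, then a projected gradient step on each agent's own component, evaluated at its local estimate — on a hypothetical two-player quadratic game (cost parameters, step size, and constraint bounds are all illustrative):

```python
import numpy as np

# Hypothetical 2-player quadratic game: J_i(x) = x_i^2 + x_i*x_{-i} + c_i*x_i,
# c = (1, -1), actions constrained to Omega_i = [-5, 5]; unique NE x* = (-1, 1).
c = np.array([1.0, -1.0])
W = np.full((2, 2), 0.5)   # doubly stochastic weights (complete 2-node graph)
tau = 0.1                  # fixed step size (illustrative)
X = np.zeros((2, 2))       # row i = agent i's local estimate of the joint action
for _ in range(500):
    Z = W @ X              # step 1: distributed averaging (consensus)
    X = Z.copy()
    for i in range(2):     # step 2: projected gradient step on own component,
        grad_i = 2 * Z[i, i] + Z[i, 1 - i] + c[i]  # partial gradient at the
        X[i, i] = np.clip(Z[i, i] - tau * grad_i, -5.0, 5.0)  # local estimate
```

Both rows of X converge linearly to the NE (-1, 1): the estimates reach consensus and the consensual average follows a contracting pseudo-gradient iteration.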
Remark 2

The parameter can always be chosen such that the condition in (9) applies. In fact, it suffices to set that satisfies the following inequalities:


The condition in (10a) implies that (by diagonal dominance and positivity of the diagonal elements). The inequalities in (10b)-(10c) are Sylvester's criterion for the matrix : they impose that all the principal minors of are positive, hence . Altogether, this implies . We remark that the monomial inequality in (10c) always holds for some sufficiently small, since the constant term is . While explicit solutions are known for cubic equations, we prefer the compact representation in (10c). The bounds in (10) are not tight, and in practice, better bounds on the step size can be obtained by simply checking the Euclidean norm of the matrix . Instead, the key observation is that the conditions in (10) do not depend on the number of agents: given the parameters , , and , a constant that ensures convergence can be found independently of .

III-A Technical discussion

In Algorithm 1, the partial gradients are evaluated at the local estimates , not at the actual strategies . Only when the estimates of all the agents coincide with the actual values, i.e., , do we have . As a consequence, the mapping is not necessarily monotone, not even under the strong monotonicity of the game mapping postulated in Standing Assumption 2. Indeed, this loss of monotonicity is the main technical difficulty arising from the partial-decision information scenario. Some works [14], [15], [16], [17], [24] deal with this issue by leveraging a restricted strong monotonicity property, which can be ensured, by suitably choosing the parameter , for the augmented mapping , where and is the Laplacian of a fixed undirected connected network. Since the unique solution of the VI is , with the unique NE of the game in (1) [16, Prop. 1], one can design NE seeking algorithms via standard solution methods for variational inequalities (or the corresponding monotone inclusions [17]). For instance, in [16], a forward-backward algorithm [22, §12.4.2] is proposed to solve the VI, resulting in the algorithm


We also recover this iteration when considering [17, Alg. 1] in the absence of coupling constraints. The drawback is that exploiting the monotonicity of results in conservative theoretical upper bounds on the parameters and , and consequently in slow convergence (see §§IV-V). More recently, the authors of [18] studied the convergence of (11) based on contractivity properties of the iterates, in the case of a fixed undirected network with doubly stochastic adjacency matrix , unconstrained action sets (i.e., ), and by fixing , which results in the algorithm:


While this algorithm requires only one parameter to be set, the upper bound on provided in [18, Th. 1] decreases to zero as the number of agents grows unbounded (in contrast with the bound in our Theorem 1; see Remark 2).

IV Balanced directed graphs

In this section, we relax Assumption 1 to the following.

Assumption 3

For all , the communication graph is weight balanced: .

For weight-balanced digraphs, the in-degree and out-degree of each node coincide. Therefore, the matrix is itself the symmetric Laplacian of an undirected graph. Besides, such a graph is connected by Standing Assumption 3; hence has a simple eigenvalue in , and the others are positive, i.e., .

Assumption 4

There exist , such that and , for all .

Remark 3

Like Assumption 2, Assumption 4 always holds if the communication network switches among a finite family. However, , and are global parameters that could be difficult to compute in a distributed way; upper/lower bounds might be available for special classes of networks, e.g., unweighted graphs.

To seek a NE over switching balanced digraphs, we propose the iteration in Algorithm 2. In compact form, it reads as


where . Clearly, (13) is the same scheme as (11), just adapted to take the switching topology into account. In fact, the proof of convergence of Algorithm 2 is based on a restricted strong monotonicity property of the operator


that still holds for balanced directed graphs, as we show next.

Theorem 2

Let Assumptions 3-4 hold, and let


If , then and, for any , for any initial condition, Algorithm 2 converges to the point , where is the unique NE of the game in (1), with linear rate: for all ,

See Appendix B.

Initialization: for all , set , .
Iterate until convergence: for all ,

Algorithm 2 Fully distributed NE seeking
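Because the displayed update (13) did not survive extraction, here is a sketch of our reading of the forward-backward scheme: a single step combining a weighted Laplacian consensus term with each agent's partial gradient at its local estimate, followed by projection of the agent's own component. The game, consensus gain, and step size below are all hypothetical:

```python
import numpy as np

# Hypothetical 2-player quadratic game (illustrative, not the paper's example):
#   J_i(x) = x_i^2 + x_i*x_{-i} + c_i*x_i,  c = (1, -1); unique NE x* = (-1, 1).
c = np.array([1.0, -1.0])
L = np.array([[1.0, -1.0], [-1.0, 1.0]])  # Laplacian of the 2-node balanced graph
gamma, cgain = 0.1, 1.0                   # step size and consensus gain (illustrative)
X = np.zeros((2, 2))                      # row i = agent i's estimate of (x_1, x_2)
for _ in range(2000):
    # forward step: Laplacian consensus term plus each agent's
    # partial gradient evaluated at its own local estimate
    G = np.zeros((2, 2))
    for i in range(2):
        G[i, i] = 2 * X[i, i] + X[i, 1 - i] + c[i]
    X = X - gamma * (cgain * (L @ X) + G)
    # backward step: project each agent's own component onto Omega_i = [-5, 5]
    for i in range(2):
        X[i, i] = np.clip(X[i, i], -5.0, 5.0)
```

Unlike Algorithm 1, consensus and gradient corrections here enter the same update, so the consensus gain must be weighed against the step size, which is why the theoretical bounds in Theorem 2 involve both parameters.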

V Numerical example: A Nash-Cournot game

We consider the Nash-Cournot game in [17, §6]. firms produce a commodity that is sold to markets. Each firm is only allowed to participate in of the markets. The decision variables of each firm are the quantities of commodity to be delivered to these markets, bounded by the local constraints . Let , where is the matrix that expresses which markets firm participates in. Specifically, the -th column of has its -th element equal to if is the amount of product sent to the -th market by agent , for all ; all the other elements are . Therefore, is the vector of the quantities of total product delivered to each market. Firm aims at maximizing its profit, i.e., minimizing the cost function . Here, is firm 's production cost, with , , . Instead, associates to each market a price that depends on the amount of product delivered to that market. Specifically, the price for market , for , is , where , . We set , . The market structure is defined as in [17, Fig. 1], which defines which firms are allowed to participate in which markets. Therefore, and . We select randomly with uniform distribution in , diagonal with diagonal elements in , in , in , in , in , for all , . The resulting setup satisfies Standing Assumptions 1-2 [17, §VI]. The firms cannot access the production of all the competitors, but can communicate with some neighbors over a network.


Fig. 1: Distance from the NE for different pseudo-gradient NE seeking methods, with step sizes that guarantee convergence.
Fig. 2: Performance of Algorithm 1, with step size set to satisfy Theorem 1, and the method in [18, Alg. 1], with step size as the upper bound in [18, Th. 1]. Algorithm 1 converges much faster, thanks to the better bound on the step size. The scheme in [18, Alg. 1] still converges, if we set the step size (dashed line).
Fig. 3: Comparison of Algorithms 1 and 2, on a time-varying graph.
Fig. 4: Distance from the NE for Algorithm 2, on a time-varying digraph. Since the networks are sparse, Theorem 2 ensures convergence only for small step sizes (, ), and convergence is slow (solid line). However, the bounds are conservative: the iteration still converges with times larger than the theoretical value (dashed line).

We first consider the case of a fixed, undirected graph, under Assumption 1. Algorithm 2 in this case reduces to [16, Alg. 1] or, in the absence of coupling constraints, to [17, Alg. 1]. We compare Algorithms 1-2 with the inexact ADMM in [15] and the accelerated gradient method in [16], for step sizes that ensure convergence. Specifically, we set as in Theorem 1 for Algorithm 1. The convergence of all the other algorithms is based on the monotonicity of in (14); hence we set as in Theorem 2. Instead of using the conservative bounds in (15) for the strong monotonicity and Lipschitz constants of , and , we obtain a better result by computing the exact values numerically. is (non-restricted) strongly monotone for our parameters; hence the convergence result for [16, Alg. 2] also holds. Figure 1 shows that Algorithm 1 outperforms all the other methods (we also remark that the accelerated gradient in [16, Alg. 2] requires two projections and two communications per iteration). As a numerical example, we also compare Algorithm 1 with the scheme in (12) by removing the local constraints, in Figure 2.
For the case of doubly stochastic time-varying networks, we randomly generate connected graphs and for each iteration we pick one with uniform distribution. In Figure 3, we compare the performance of Algorithms 1-2, for step sizes set to their best theoretical values as in Theorems 1-2.
Finally, in Figure 4, we test Algorithm 2 when the communication topology switches between two balanced directed graphs: the unweighted directed ring, where each agent can send information to the agent (with the convention ), and a second graph, where agent is also allowed to transmit to agent , for all .

VI Conclusion

Nash equilibrium problems over time-varying graphs can be solved with linear rate via fixed-step pseudo-gradient algorithms, if the network is connected at every iteration and the game mapping is Lipschitz continuous and strongly monotone. In our numerical experience, our algorithm proved much faster than the existing gradient-based methods, even in the case of a fixed communication topology. The extension of our results to games with coupling constraints is left as future research. It would also be valuable to relax the connectivity conditions.

A Proof of Theorem 1

We define the estimate consensus subspace and its orthogonal complement . Thus, any vector can be written as , where , , and . Also, we use the shorthand notation and in place of and . We recast the iteration in (7) as


Let be the unique NE of the game in (1), and . We recall that by (2), and then . Moreover, ; hence is a fixed point for (16). Let and . Thus, it holds that


where the first inequality follows from the nonexpansiveness of the projection ([20, Prop. 4.16]), and to bound the addends in (17) we used, in this order:


  • 3rd term: , Lipschitz continuity of , and ;

  • 4th term: ;

  • 5th term: ;

  • 6th term: Lipschitz continuity of ;

  • 7th term: as above.

Besides, for every and for all , it holds that , where , by double stochasticity of , and by (4) and properties of the Kronecker product. Therefore, we can finally write, for all , for all ,


where the symmetric matrix is as in (8).

B Proof of Theorem 2

Let be the unique NE of the game in (1), and . We recall that the null space by Standing Assumption 3 and properties of the Kronecker product. Therefore, and is a fixed point of the iteration in (13) by (2). With as in (14), for all , for any , it holds that: