I Introduction
NE problems arise in several network systems, where multiple selfish decision-makers, or agents, aim at optimizing their individual, yet interdependent, objective functions. Engineering applications include communication networks [1], demand-side management in the smart grid [2], charging of electric vehicles [3] and demand response in competitive markets [4].
From a game-theoretic perspective, the challenge is to assign the agents behavioral rules that eventually ensure the attainment of a NE, a joint action from which no agent has an incentive to unilaterally deviate.
Literature review:
Typically, NE seeking algorithms are designed under the assumption that each agent can access the decisions of all the competitors [5], [6], [7]. Such a hypothesis, referred to as full-decision information, requires the presence of a coordinator that broadcasts the data to the network, and it is impractical for some applications [8], [9].
One example is the Nash-Cournot competition model described in [10], where the profit of each of a group of firms depends not only on its own production, but also on the whole amount of sales, a quantity not directly accessible by any of the firms.
Therefore, in recent years, there has been increasing attention to fully distributed algorithms that allow the agents to compute a NE relying on local information only.
One solution is offered by payoff-based schemes [11], [12], where the agents are not required to communicate with each other, but must be able to measure their own cost functions.
Instead, in this paper, we are interested in a different, model-based approach. Specifically, we consider the so-called partial-decision information scenario, where the agents agree on sharing their strategies with some neighbors over a network; based on the exchanged information, they can estimate and eventually reconstruct the actions of all the competitors. This setup has only been introduced very recently. In particular, most of the available results resort to (projected) gradient and consensus dynamics, both in continuous time
[13], [14], and in discrete time. For the discrete-time case, fixed-step algorithms were proposed in [15], [16], [17] (the latter for generalized games), all exploiting a certain restricted monotonicity property. Alternatively, the authors of [18] developed a gradient-play scheme by leveraging contractivity properties of doubly stochastic matrices. Nevertheless, in all these approaches, theoretical guarantees are provided only for step sizes that are typically very small, affecting the speed of convergence. Furthermore, all the methods cited are designed for the case of a time-invariant, undirected network. To the best of our knowledge, switching communication topologies have only been addressed with diminishing step sizes. For instance, the early work [10] considered aggregative games over time-varying undirected graphs. This result was extended by the authors of [19] to games with affine coupling constraints, based on dynamic tracking and on the forward-backward splitting [20, §26.5]. In [21], an asynchronous gossip algorithm was presented to seek a NE over directed graphs. The main drawback is that vanishing step sizes typically result in slow convergence.

Contribution: Motivated by the above, in this paper we present the first fixed-step NE seeking algorithms for strongly monotone games over time-varying communication networks. Our novel contributions are summarized as follows:


We propose a simple, fully distributed, projected gradient-play algorithm that is guaranteed to converge with linear rate when the network adjacency matrix is doubly stochastic. With respect to the formulation in [18], we consider a time-varying communication topology and we allow for constrained action sets. Moreover, differently from [18], we provide an upper bound on the step size that is independent of the number of agents (§III);

We show via numerical simulations that, even in the case of fixed networks, our algorithm outperforms the existing pseudogradient-based dynamics when the step sizes are set to their theoretical upper bounds (§V);
Basic notation: $\mathbb{N}$ is the set of natural numbers, including $0$. $\mathbb{R}$ ($\mathbb{R}_{\geq 0}$) denotes the set of (nonnegative) real numbers. $\mathbf{1}_n$ ($\mathbf{0}_n$) is the vector of dimension $n$ with all elements equal to $1$ ($0$); $I_n$ denotes the identity matrix of dimension $n$; the subscripts might be omitted when there is no ambiguity. For a matrix $A$, its transpose is $A^{\top}$, and $[A]_{ij}$ denotes the element on the $i$-th row and $j$-th column. $A \succ 0$ stands for a symmetric positive definite matrix. $\sigma(A)$ are the singular values of $A$; if $A$ is symmetric, $\lambda(A)$ denote its eigenvalues. $\otimes$ denotes the Kronecker product. $\operatorname{diag}(A_1, \dots, A_N)$ denotes the block diagonal matrix with $A_1, \dots, A_N$ on its diagonal. Given vectors $x_1, \dots, x_N$, $x = \operatorname{col}(x_1, \dots, x_N) = [x_1^{\top}, \dots, x_N^{\top}]^{\top}$. $\| \cdot \|$ denotes the Euclidean vector norm. For a differentiable function $f$, $\nabla f$ denotes its gradient. $\operatorname{proj}_{\Omega}$ denotes the Euclidean projection onto a closed convex set $\Omega$. An operator $F$ is ($\mu$-strongly) monotone if $\langle F(x) - F(y), x - y \rangle \geq 0$ ($\geq \mu \|x - y\|^2$), for all $x$, $y$.

II Mathematical setup
We consider a set of $N$ agents $\mathcal{I} = \{1, \dots, N\}$, where each agent $i \in \mathcal{I}$ shall choose its strategy (i.e., decision variable) $x_i$ from its local decision set $\Omega_i \subseteq \mathbb{R}^{n_i}$. Let $x = \operatorname{col}(x_1, \dots, x_N) \in \Omega$ denote the stacked vector of all the agents’ decisions, with $\Omega = \Omega_1 \times \dots \times \Omega_N \subseteq \mathbb{R}^{n}$ the overall action space and $n = \sum_{i=1}^{N} n_i$. The goal of each agent $i \in \mathcal{I}$ is to minimize its objective function $f_i(x_i, x_{-i})$, which depends on both the local variable $x_i$ and on the decision variables of the other agents $x_{-i} = \operatorname{col}(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_N)$. The game is then represented by the interdependent optimization problems:
$\forall i \in \mathcal{I}: \quad \min_{x_i \in \Omega_i} \; f_i(x_i, x_{-i}).$   (1)
The technical problem we consider in this paper is the computation of a NE, as defined next.
Definition 1
A Nash equilibrium is a set of strategies $x^{*} = \operatorname{col}(x_1^{*}, \dots, x_N^{*}) \in \Omega$ such that, for all $i \in \mathcal{I}$: $f_i(x_i^{*}, x_{-i}^{*}) \leq f_i(y, x_{-i}^{*})$, for all $y \in \Omega_i$.
The following regularity assumptions are common for NE problems, see, e.g., [17, Ass. 1], [16, Ass. 1].
Standing Assumption 1 (Regularity and convexity)
For each $i \in \mathcal{I}$, the set $\Omega_i$ is nonempty, closed and convex; $f_i$ is continuous and the function $f_i(\cdot, x_{-i})$ is convex and continuously differentiable for every $x_{-i}$.
Under Standing Assumption 1, a joint strategy $x^{*}$ is a NE of the game in (1) if and only if it solves the variational inequality VI$(\Omega, F)$ (for an operator $F$ and a set $\Omega$, the variational inequality VI$(\Omega, F)$ is the problem of finding a vector $x^{*} \in \Omega$ such that $\langle F(x^{*}), x - x^{*} \rangle \geq 0$, for all $x \in \Omega$ [22, Def. 1.1.1]) [22, Prop. 1.4.2], or, equivalently, if and only if, for any $\alpha > 0$ [22, Prop. 1.5.8],
$x^{*} = \operatorname{proj}_{\Omega}\big(x^{*} - \alpha F(x^{*})\big),$   (2)
where $F$ is the pseudogradient mapping of the game:
$F(x) \coloneqq \operatorname{col}\big(\nabla_{x_1} f_1(x_1, x_{-1}), \dots, \nabla_{x_N} f_N(x_N, x_{-N})\big).$   (3)
A sufficient condition for the existence of a unique NE is the strong monotonicity of the pseudogradient [22, Th. 2.3.3], as postulated next. This assumption is always used for (G)NE seeking under partial-decision information with fixed step sizes, e.g., in [16, Ass. 2], [17, Ass. 3] (while it is sometimes replaced by strict monotonicity and compactness of $\Omega$ when allowing for vanishing step sizes [10, Ass. 2]). It implies strong convexity of the functions $f_i(\cdot, x_{-i})$ for every $x_{-i}$, but not necessarily (strong) convexity of $f_i$ in the full argument.
Standing Assumption 2
The pseudogradient mapping $F$ in (3) is $\mu$-strongly monotone and $\ell$-Lipschitz continuous, for some $\mu, \ell > 0$: for any $x, y \in \mathbb{R}^{n}$, $\langle F(x) - F(y), x - y \rangle \geq \mu \|x - y\|^2$ and $\|F(x) - F(y)\| \leq \ell \|x - y\|$.
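For an affine pseudogradient $F(x) = Mx + b$ (a hypothetical example below; the assumption covers general nonlinear maps), the two constants can be read off the spectrum of $M$: a minimal sketch.

```python
import numpy as np

# Hypothetical affine pseudogradient F(x) = M x + b. For such maps the
# strong-monotonicity constant is the smallest eigenvalue of (M + M^T)/2
# and the Lipschitz constant is the largest singular value of M.
M = np.array([[4.0, 1.0, 0.0],
              [-1.0, 3.0, 0.5],
              [0.0, -0.5, 5.0]])

mu = np.linalg.eigvalsh((M + M.T) / 2).min()    # strong-monotonicity constant
ell = np.linalg.svd(M, compute_uv=False).max()  # Lipschitz constant
assert mu > 0  # the map is strongly monotone

# Spot-check the two inequalities on random differences d = x - y,
# for which F(x) - F(y) = M d.
rng = np.random.default_rng(0)
for _ in range(100):
    d = rng.standard_normal(3)
    assert (M @ d) @ d >= mu * (d @ d) - 1e-9
    assert np.linalg.norm(M @ d) <= ell * np.linalg.norm(d) + 1e-9
```

Sampling only spot-checks the inequalities; the eigenvalue and singular-value computations give the exact constants for the affine case.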
In our setup, each agent $i$ can only access its own cost function $f_i$ and feasible set $\Omega_i$. Moreover, agent $i$ does not have full knowledge of $x_{-i}$, and only relies on the information exchanged locally with neighbors over a time-varying directed communication network $\mathcal{G}_k = (\mathcal{I}, \mathcal{E}_k)$. The ordered pair $(i, j)$ belongs to the set of edges $\mathcal{E}_k$ if and only if agent $i$ can receive information from agent $j$ at time $k$. We denote by $A_k$ the weighted adjacency matrix of $\mathcal{G}_k$, with $[A_k]_{ij} > 0$ if $(i, j) \in \mathcal{E}_k$, $[A_k]_{ij} = 0$ otherwise; by $D_k$ and $L_k = D_k - A_k$ the in-degree and Laplacian matrices of $\mathcal{G}_k$; and by $\mathcal{N}_i^k$ the set of in-neighbors of agent $i$.

Standing Assumption 3
For each $k \in \mathbb{N}$, the graph $\mathcal{G}_k$ is strongly connected.
Assumption 1
For all $k \in \mathbb{N}$, the following hold:

Self-loops: $[A_k]_{ii} > 0$ for all $i \in \mathcal{I}$;

Double stochasticity: $A_k \mathbf{1}_N = \mathbf{1}_N$, $\mathbf{1}_N^{\top} A_k = \mathbf{1}_N^{\top}$.
Remark 1
Assumption 1(i) is intended just to ease the notation. Instead, Assumption 1(ii) is stronger. It is typically used for networked problems on undirected symmetric graphs, e.g., in [10, Ass. 6], [19, Ass. 3], [18, Ass. 3], justified by the fact that it can be satisfied by assigning the following Metropolis weights to the communication: $[A_k]_{ij} = 1 / \big(1 + \max\{d_i^k, d_j^k\}\big)$ if $j \in \mathcal{N}_i^k$, $j \neq i$; $[A_k]_{ii} = 1 - \sum_{j \neq i} [A_k]_{ij}$; $[A_k]_{ij} = 0$ otherwise, where $d_i^k$ denotes the degree of agent $i$ at time $k$.
In practice, in the case of symmetric communication, to satisfy Assumption 1(ii), even in the case of a time-varying topology, it suffices for the agents to exchange their in-degree with their neighbors at every time step. Therefore, Standing Assumption 3 and Assumption 1 can be easily fulfilled for undirected graphs that are connected at each time step. For directed graphs, given any strongly connected topology, weights can be assigned such that the resulting adjacency matrix (with self-loops) is doubly stochastic, via an iterative distributed process [23]. However, this can be impractical, especially if the network is time-varying.
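The Metropolis construction mentioned above is easy to sketch. Below is a minimal version (assuming an undirected graph given as an edge list; the formula $a_{ij} = 1/(1 + \max\{d_i, d_j\})$ is the common variant from the consensus literature):

```python
import numpy as np

# Metropolis weights for an undirected graph: a_ij = 1/(1 + max(d_i, d_j))
# for each edge {i, j}, a_ii = 1 - sum_j a_ij. Each agent only needs its
# neighbors' degrees, and the result is symmetric and doubly stochastic.
def metropolis_weights(n, edges):
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    A = np.zeros((n, n))
    for i, j in edges:
        w = 1.0 / (1.0 + max(deg[i], deg[j]))
        A[i, j] = A[j, i] = w
    for i in range(n):
        A[i, i] = 1.0 - A[i].sum()   # self-loop weight
    return A

# Path graph on 4 nodes: 0-1-2-3.
A = metropolis_weights(4, [(0, 1), (1, 2), (2, 3)])
assert np.allclose(A.sum(axis=0), 1) and np.allclose(A.sum(axis=1), 1)
```

Since the resulting matrix is symmetric with nonnegative entries and unit row sums, double stochasticity is automatic.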
Under Assumption 1, it holds that $\sigma_2(A_k) < 1$ for all $k \in \mathbb{N}$, where $\sigma_2(A_k)$ denotes the second largest singular value of $A_k$. Moreover, for any $\boldsymbol{x} = \operatorname{col}(x_1, \dots, x_N) \in \mathbb{R}^{Nn}$,
$\big\| (A_k \otimes I_n)\,\boldsymbol{x} - \mathbf{1}_N \otimes \bar{x} \big\| \leq \sigma_2(A_k)\, \big\| \boldsymbol{x} - \mathbf{1}_N \otimes \bar{x} \big\|,$   (4)
where $\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$ is the average of $x_1, \dots, x_N$. We will further assume that $\sigma_2(A_k)$ is bounded away from $1$; this automatically holds if the networks are chosen among a finite family.
Assumption 2
There exists $\bar{\sigma} \in (0, 1)$ such that $\sigma_2(A_k) \leq \bar{\sigma}$, for all $k \in \mathbb{N}$.
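The contraction property in (4) is easy to verify numerically. A minimal sketch with a hypothetical $3 \times 3$ doubly stochastic matrix (scalar actions, so the Kronecker factor is trivial):

```python
import numpy as np

# Multiplication by a doubly stochastic A preserves the average and shrinks
# the disagreement x - mean(x)*1 by at least the second largest singular
# value of A (the contraction in (4), here with n = 1).
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
sigma2 = np.sort(np.linalg.svd(A, compute_uv=False))[-2]

rng = np.random.default_rng(2)
for _ in range(50):
    x = rng.standard_normal(3)
    d = x - x.mean()                 # disagreement component
    d_next = A @ x - (A @ x).mean()  # average preserved: mean(Ax) = mean(x)
    assert np.linalg.norm(d_next) <= sigma2 * np.linalg.norm(d) + 1e-12
```

For this particular matrix ($A = \tfrac{1}{4}I + \tfrac{1}{4}\mathbf{1}\mathbf{1}^{\top}$), the disagreement contracts by exactly $\sigma_2 = 0.25$ at every step.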
III Distributed Nash equilibrium seeking
In this section, we present a pseudogradient algorithm to seek a NE of the game in (1) in a fully distributed way. To cope with partial-decision information, each agent keeps an estimate of all other agents’ actions. Let $\boldsymbol{x}_i = \operatorname{col}(x_{i,1}, \dots, x_{i,N}) \in \mathbb{R}^{n}$, where $x_{i,i} = x_i$ and $x_{i,j}$ is agent $i$’s estimate of agent $j$’s action, for all $j \neq i$; also, $x_{i,-i} = \operatorname{col}\big((x_{i,j})_{j \neq i}\big)$. The agents aim at asymptotically reconstructing the true value of the opponents’ actions, based on the data received from their neighbors. The procedure is summarized in Algorithm 1. Each agent updates its estimates according to consensus dynamics, then its strategy via a gradient step. We remark that each agent computes the partial gradient of its cost at its local estimate $\boldsymbol{x}_i$, not at the actual joint strategy $x$.
To write the algorithm in compact form, let $\boldsymbol{x} \coloneqq \operatorname{col}(\boldsymbol{x}_1, \dots, \boldsymbol{x}_N) \in \mathbb{R}^{Nn}$; as in [17, Eq. 13-14], let, for all $i \in \mathcal{I}$,
$\mathcal{R}_i \coloneqq \begin{bmatrix} \mathbf{0}_{n_i \times n_{<i}} & I_{n_i} & \mathbf{0}_{n_i \times n_{>i}} \end{bmatrix},$   (5)
where $n_{<i} \coloneqq \sum_{j < i} n_j$, $n_{>i} \coloneqq \sum_{j > i} n_j$; and $\mathcal{R} \coloneqq \operatorname{diag}(\mathcal{R}_1, \dots, \mathcal{R}_N)$. In simple terms, $\mathcal{R}_i$ selects the $i$-th ($n_i$-dimensional) component from an $n$-dimensional vector, so that $\mathcal{R}_i \boldsymbol{x}_i = x_{i,i} = x_i$ and $\mathcal{R}\boldsymbol{x} = \operatorname{col}(x_{1,1}, \dots, x_{N,N})$. We define the extended pseudogradient mapping as
$\boldsymbol{F}(\boldsymbol{x}) \coloneqq \operatorname{col}\big(\nabla_{x_1} f_1(x_{1,1}, x_{1,-1}), \dots, \nabla_{x_N} f_N(x_{N,N}, x_{N,-N})\big).$   (6)
Therefore, Algorithm 1 reads in compact form as:
$\boldsymbol{x}^{k+1} = \operatorname{proj}_{\boldsymbol{\Omega}}\big(\boldsymbol{W}_k\,\boldsymbol{x}^{k} - \alpha\,\mathcal{R}^{\top}\boldsymbol{F}(\boldsymbol{W}_k\,\boldsymbol{x}^{k})\big),$   (7)
where $\boldsymbol{W}_k \coloneqq A_k \otimes I_n$ and $\boldsymbol{\Omega} \coloneqq \{\boldsymbol{x} \in \mathbb{R}^{Nn} \mid x_{i,i} \in \Omega_i, \, \forall i \in \mathcal{I}\}$.
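The consensus-then-projected-gradient structure can be illustrated with a small numerical sketch (hedged: the 3-agent quadratic game, uniform mixing matrix, box constraints and step size below are illustrative stand-ins, not the paper's exact setup). Row $i$ of the matrix `X` plays the role of agent $i$'s estimate $\boldsymbol{x}_i$ of the full joint action:

```python
import numpy as np

# Illustrative strongly monotone game: the partial gradient of agent i's
# cost at joint action x is (M x + b)_i, with M symmetric positive definite.
N = 3
M = np.array([[3.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([-1.0, -2.0, -3.0])
lo, hi = 0.0, 10.0                  # box constraint on each action

A = np.full((N, N), 1.0 / N)        # doubly stochastic mixing matrix
rng = np.random.default_rng(1)
X = rng.standard_normal((N, N))     # initial estimates (row i = agent i)
alpha = 0.1                         # fixed step size (illustrative)

for _ in range(500):
    Xc = A @ X                      # consensus step on all estimates
    for i in range(N):
        g = M[i] @ Xc[i] + b[i]     # partial gradient at the LOCAL estimate
        Xc[i, i] = np.clip(Xc[i, i] - alpha * g, lo, hi)  # projected step
    X = Xc

x = np.diag(X).copy()               # each agent's own action
assert np.allclose(M @ x + b, 0, atol=1e-6)   # interior NE solves M x* + b = 0
```

Each agent touches only its own row of `X` and its own coordinate in the gradient step, so the scheme is fully distributed; the estimates and the actions converge together to the unique NE.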
We are now ready to prove the main result of this section.
Theorem 1
See Appendix A.
Remark 2
The step size can always be chosen such that the condition in (9) applies. In fact, it suffices to pick a value that satisfies the following inequalities:
(10a)  
(10b)  
(10c) 
The condition in (10a) implies that (by diagonal dominance and positivity of the diagonal elements). The inequalities in (10b)-(10c) are Sylvester’s criterion for the matrix: they impose that all the principal minors are positive, hence . Altogether, this implies . We remark that the polynomial inequality in (10c) always holds for some sufficiently small value, since the constant term is positive. While explicit solutions are known for cubic equations, we prefer the compact representation in (10c). The bounds in (10) are not tight, and in practice, better bounds on the step size can be obtained by simply checking the Euclidean norm of the matrix. Instead, the key observation is that the conditions in (10) do not depend on the number of agents: given the game and network parameters, a constant step size that ensures convergence can be found independently of $N$.
III-A Technical discussion
In Algorithm 1, the partial gradients are evaluated at the local estimates $\boldsymbol{x}_i$, not at the actual strategies $x$. Only when the estimates of all the agents coincide with the actual values, i.e., $\boldsymbol{x} = \mathbf{1}_N \otimes x$, do we have that $\boldsymbol{F}(\boldsymbol{x}) = F(x)$. As a consequence, the mapping $\boldsymbol{F}$ is not necessarily monotone, not even under the strong monotonicity of the game mapping in Standing Assumption 2. Indeed, this loss of monotonicity is the main technical difficulty arising from the partial-decision information scenario. Some works [14], [15], [16], [17], [24] deal with this issue by leveraging a restricted strong monotonicity property, which can be ensured, by suitably choosing a design parameter $c > 0$, for an augmented mapping combining $c\,\mathcal{R}^{\top}\boldsymbol{F}$ with a Laplacian consensus term, where the Laplacian is that of a fixed undirected connected network. Since the unique solution of the resulting VI is $\mathbf{1}_N \otimes x^{*}$, with $x^{*}$ the unique NE of the game in (1) [16, Prop. 1], one can design NE seeking algorithms via standard solution methods for variational inequalities (or the corresponding monotone inclusions, [17]). For instance, in [16], a forward-backward algorithm [22, §12.4.2] is proposed to solve the VI, resulting in the algorithm
$\boldsymbol{x}^{k+1} = \operatorname{proj}_{\boldsymbol{\Omega}}\big(\boldsymbol{x}^{k} - \alpha\,\big(c\,\mathcal{R}^{\top}\boldsymbol{F}(\boldsymbol{x}^{k}) + (L \otimes I_n)\,\boldsymbol{x}^{k}\big)\big).$   (11)
We also recover this iteration when considering [17, Alg. 1] in the absence of coupling constraints. The drawback is that exploiting the monotonicity of the augmented mapping results in conservative theoretical upper bounds on the parameters $\alpha$ and $c$, and consequently in slow convergence (see §IV-V). More recently, the authors of [18] studied the convergence of (11) based on contractivity properties of the iterates, in the case of a fixed undirected network with doubly stochastic adjacency matrix, unconstrained action sets, and a fixed choice of the design parameter, which results in the algorithm:
(12) 
While this algorithm requires only one parameter to be set, the upper bound on the step size provided in [18, Th. 1] decreases to zero as the number of agents grows unbounded (in contrast with the one in our Theorem 1; see Remark 2).
IV Balanced directed graphs
In this section, we relax Assumption 1 to the following.
Assumption 3
For all $k \in \mathbb{N}$, the communication graph $\mathcal{G}_k$ is weight balanced: $A_k \mathbf{1}_N = A_k^{\top} \mathbf{1}_N$.
For weight-balanced digraphs, the in-degree and out-degree of each node coincide. Therefore, the matrix $\frac{1}{2}(L_k + L_k^{\top})$ is itself the symmetric Laplacian of an undirected graph. Besides, such a graph is connected by Standing Assumption 3; hence $\frac{1}{2}(L_k + L_k^{\top})$ has a simple eigenvalue in $0$, and its other eigenvalues are positive, i.e., $0 = \lambda_1 < \lambda_2 \leq \dots \leq \lambda_N$.
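The spectral claim above can be checked on a small example (the unweighted directed ring on 4 nodes is used here as an illustrative weight-balanced, strongly connected digraph):

```python
import numpy as np

# For a weight-balanced, strongly connected digraph, (L + L^T)/2 is the
# Laplacian of a connected undirected graph: simple eigenvalue at 0, all
# other eigenvalues positive.
N = 4
A = np.zeros((N, N))
for i in range(N):
    A[(i + 1) % N, i] = 1.0        # edge i -> i+1; row = receiver
D = np.diag(A.sum(axis=1))          # in-degree matrix
L = D - A

assert np.allclose(A.sum(axis=0), A.sum(axis=1))   # weight balanced
Ls = (L + L.T) / 2                                  # symmetric Laplacian
eig = np.sort(np.linalg.eigvalsh(Ls))
assert abs(eig[0]) < 1e-9 and eig[1] > 0            # simple 0, rest positive
```

For the ring, the symmetrized Laplacian has eigenvalues $\{0, 1, 1, 2\}$, so the algebraic connectivity $\lambda_2 = 1$ is strictly positive, as required.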
Assumption 4
There exist $\underline{\lambda}, \bar{\lambda} > 0$ such that $\lambda_2\big(\tfrac{1}{2}(L_k + L_k^{\top})\big) \geq \underline{\lambda}$ and $\lambda_N\big(\tfrac{1}{2}(L_k + L_k^{\top})\big) \leq \bar{\lambda}$, for all $k \in \mathbb{N}$.
Remark 3
To seek a NE over switching balanced digraphs, we propose the iteration in Algorithm 2. In compact form, it reads as
$\boldsymbol{x}^{k+1} = \operatorname{proj}_{\boldsymbol{\Omega}}\big(\boldsymbol{x}^{k} - \alpha\,\big(c\,\mathcal{R}^{\top}\boldsymbol{F}(\boldsymbol{x}^{k}) + (L_k \otimes I_n)\,\boldsymbol{x}^{k}\big)\big),$   (13)
where $L_k$ is the Laplacian of $\mathcal{G}_k$. Clearly, (13) is the same scheme as (11), just adapted to take the switching topology into account. In fact, the proof of convergence of Algorithm 2 is based on a restricted strong monotonicity property of the operator
$\boldsymbol{F}_c^k(\boldsymbol{x}) \coloneqq c\,\mathcal{R}^{\top}\boldsymbol{F}(\boldsymbol{x}) + (L_k \otimes I_n)\,\boldsymbol{x},$   (14)
which still holds for balanced directed graphs, as we show next.
Theorem 2
See Appendix B.
Initialization: for all , set , .
Iterate until convergence: for all ,
V Numerical example: A Nash-Cournot game
We consider the Nash-Cournot game in [17, §6], in which a group of firms produce a commodity that is sold to a number of markets, with each firm allowed to participate in only a subset of the markets. The decision variables of each firm are the quantities of commodity to be delivered to these markets, bounded by local constraints. Each firm is characterized by a matrix that expresses which markets it participates in: the column associated with one of its markets has a single nonzero element, equal to the amount of product sent by the firm to that market, and all the other elements are zero. Summing these contributions over the firms yields the vector of the quantities of total product delivered to each market. Each firm aims at maximizing its profit, i.e., minimizing the difference between its production cost and its revenue, where the price of each market depends on the total amount of product delivered to that market. The market structure, which defines which firms are allowed to participate in which markets, is as in [17, Fig. 1]. The cost and price parameters are selected randomly with uniform distribution in fixed intervals. The resulting setup satisfies Standing Assumptions 1-2 [17, §VI]. The firms cannot access the production of all the competitors, but can communicate with some neighbors over a network.
We first consider the case of a fixed, undirected graph, under Assumption 1. Algorithm 2 in this case reduces to [16, Alg. 1] or, in the absence of coupling constraints, to [17, Alg. 1]. We compare Algorithms 1 and 2 with the inexact ADMM in [15] and the accelerated gradient method in [16], with the step sizes that ensure convergence. Specifically, we set the step size as in Theorem 1 for Algorithm 1. The convergence of all the other algorithms is based on the monotonicity of the operator in (14); hence we set the parameters as in Theorem 2. Instead of using the conservative bounds in (15) for the strong monotonicity and Lipschitz constants of this operator, we obtain a better result by computing the exact values numerically. The operator is (non-restricted) strongly monotone for our parameters; hence the convergence result for [16, Alg. 2] also holds. Figure 1 shows that Algorithm 1 outperforms all the other methods (we also remark that the accelerated gradient in [16, Alg. 2] requires two projections and two communications per iteration).
As a further numerical example, in Figure 2 we compare Algorithm 1 with the scheme in (12), obtained by removing the local constraints.
For the case of doubly stochastic time-varying networks, we randomly generate a family of connected graphs and at each iteration we pick one of them uniformly at random. In Figure 3, we compare the performance of Algorithms 1 and 2, for step sizes set to their best theoretical values as in Theorems 1 and 2.
Finally, in Figure 4, we test Algorithm 2 when the communication topology switches between two balanced directed graphs: the unweighted directed ring, where each agent can send information to the next agent (with the convention that the last agent transmits to the first), and a second balanced digraph, where each agent is also allowed to transmit to one additional agent.
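The switching pair of digraphs can be sketched as follows (hedged: $N = 4$, unit weights, and the second graph, here taken as the ring augmented with the reverse edges, are illustrative choices):

```python
import numpy as np

# Two balanced digraphs to switch between: the directed ring i -> i+1, and
# (hypothetically) the ring augmented with the reverse edges i+1 -> i.
# Both are weight balanced and strongly connected at every time step.
N = 4
ring = np.zeros((N, N))
for i in range(N):
    ring[(i + 1) % N, i] = 1.0     # row = receiver, column = sender
augmented = ring + ring.T          # hypothetical second balanced digraph

for A in (ring, augmented):
    # Weight balance: in-degree equals out-degree at every node.
    assert np.allclose(A.sum(axis=0), A.sum(axis=1))
    # Strong connectivity: (I + A)^(N-1) has no zero entries.
    P = np.linalg.matrix_power(np.eye(N) + A, N - 1)
    assert (P > 0).all()
```

The positivity of $(I + A)^{N-1}$ certifies that every node can reach every other node in at most $N - 1$ hops, i.e., strong connectivity.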
VI Conclusion
Nash equilibrium problems on time-varying graphs can be solved with linear rate via fixed-step pseudogradient algorithms, provided that the network is connected at every iteration and the game mapping is Lipschitz continuous and strongly monotone. In our numerical experience, our algorithm proved much faster than the existing gradient-based methods, even in the case of a fixed communication topology. The extension of our results to games with coupling constraints is left as future research. It would also be valuable to relax the connectivity conditions.
A Proof of Theorem 1
We define the estimate consensus subspace and its orthogonal complement . Thus, any vector can be written as , where , , and . Also, we use the shorthand notation and in place of and . We recast the iteration in (7) as
(16) 
Let be the unique NE of the game in (1), and . We recall that by (2), and then . Moreover, ; hence is a fixed point for (16). Let and . Thus, it holds that
(17)  
where the first inequality follows by nonexpansiveness of the projection ([20, Prop. 4.16]), and to bound the terms in (17) we used, in order:


3rd term: , Lipschitz continuity of , and ;

4th term: ;

5th term: ;

6th term: Lipschitz continuity of ;

7th term: as above.
Besides, for every and for all , it holds that , where , by double stochasticity of , and by (4) and properties of the Kronecker product. Therefore, we can finally write, for all , for all ,
(18) 
where the symmetric matrix is as in (8).