# Projected-gradient algorithms for generalized equilibrium seeking in Aggregative Games are preconditioned Forward-Backward methods

We show that projected-gradient methods for the distributed computation of generalized Nash equilibria in aggregative games are preconditioned forward-backward splitting methods applied to the KKT operator of the game. Specifically, we adopt the preconditioned forward-backward design, recently conceived by Yi and Pavel in the manuscript "A distributed primal-dual algorithm for computation of generalized Nash equilibria via operator splitting methods" for generalized Nash equilibrium seeking in aggregative games. Consequently, we notice that two projected-gradient methods recently proposed in the literature are preconditioned forward-backward methods. More generally, we provide a unifying operator-theoretic ground to design projected-gradient methods for generalized equilibrium seeking in aggregative games.


## I Introduction

Aggregative game theory [1] is a mathematical framework to model the interdependent optimal decision making problems for a set of noncooperative agents, whenever the decision of each agent is affected by some aggregate effect of all the agents. This feature emerges in several application areas, such as demand side management in the smart grid [2], e.g. for electric vehicles [3, 4] and thermostatically controlled loads [5, 6], demand response in competitive markets [7] and network congestion control [8].

Existence and uniqueness of Nash equilibria in (aggregative) noncooperative games is well established in the operations research literature [9], [10, §12], and in automatic control [11, 12]. For the computation of a game equilibrium, several algorithms are available, both distributed protocols [13, 14] and semi-decentralized schemes [15, 16, 17, 18]. Among these, an elegant approach is to characterize the desired equilibrium solutions as the zeros of an operator, possibly monotone, e.g. the concatenation of interdependent Karush–Kuhn–Tucker operators, and in turn formulate an equivalent fixed-point problem, which is solved via appropriate fixed-point iterations. Overall, the available methods can ensure global convergence to an equilibrium if the coupling among the cost functions of the agents is “well behaved”, e.g. if the problem data are convex and the so-called pseudo-gradient game mapping is (strictly, strongly) monotone [19].

A popular class of algorithms for Nash equilibrium seeking is that of projected-gradient algorithms [20, §12], [14, 16, 19]. Whenever the pseudo-gradient game mapping is strongly monotone, projected-gradient algorithms can ensure fast convergence to a Nash equilibrium, possibly via distributed computation and information exchange. It follows that projected-gradient methods have the potential to be fast, simple and scalable with respect to the population size. At the same time, in the context of Nash equilibrium seeking, the convergence analyses for the available projected-gradient methods are quite diverse in nature.

In this paper, we aim at a unifying convergence analysis for projected-gradient algorithms that are adopted for the computation of generalized Nash equilibria in aggregative games. Specifically, we adopt a general perspective based on monotone operator theory [21] to show that projected-gradient algorithms with sequential updates belong to the class of preconditioned forward-backward splitting methods, introduced in [22] for multi-agent network games.

The main technical contribution of the paper is to conceive a design procedure for the preconditioned forward-backward splitting method. The proposed design is based not only on the splitting of the monotone operator whose zeros are the game equilibria, but also on the choice of the so-called preconditioning matrix, which induces the quadratic norm adopted to show global convergence of the resulting algorithm. Since the convergence characterization of the forward-backward splitting method is well established, the advantage of the proposed design is that global convergence follows provided that some mild monotonicity assumptions on the problem data are satisfied.

Remarkably, we discover that two recent projected-gradient algorithms for Nash equilibrium seeking in aggregative games, [16] and [14], can be equivalently written as preconditioned forward-backward splitting methods with symmetric preconditioning matrix, even though their algorithmic formulations are “asymmetric”.

### Basic notation

$\mathbb{R}$ denotes the set of real numbers, and $\overline{\mathbb{R}} := \mathbb{R} \cup \{+\infty\}$ the set of extended real numbers. $\mathbf{0}$ ($\mathbf{1}$) denotes a matrix/vector with all elements equal to $0$ ($1$); to improve clarity, we may add the dimension of these matrices/vectors as subscript. $A \otimes B$ denotes the Kronecker product between the matrices $A$ and $B$; $\|A\|$ denotes the maximum singular value of $A$; $\operatorname{eig}(A)$ denotes the set of eigenvalues of $A$. Given $N$ vectors $x_1, \ldots, x_N$, $\operatorname{col}(x_1, \ldots, x_N) := [x_1^\top, \ldots, x_N^\top]^\top$.

### Operator theoretic definitions

$\operatorname{Id}$ denotes the identity operator. The mapping $\iota_C : \mathbb{R}^n \to \{0, \infty\}$ denotes the indicator function for the set $C \subseteq \mathbb{R}^n$, i.e., $\iota_C(x) = 0$ if $x \in C$, $\infty$ otherwise. For a closed set $C \subseteq \mathbb{R}^n$, the mapping $\operatorname{proj}_C : \mathbb{R}^n \to C$ denotes the projection onto $C$, i.e., $\operatorname{proj}_C(x) = \operatorname{argmin}_{y \in C} \|y - x\|$. The set-valued mapping $\mathrm{N}_C : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ denotes the normal cone operator for the set $C$, i.e., $\mathrm{N}_C(x) = \emptyset$ if $x \notin C$, $\{ v \in \mathbb{R}^n \mid \sup_{z \in C} v^\top (z - x) \le 0 \}$ otherwise. For a function $\psi : \mathbb{R}^n \to \overline{\mathbb{R}}$, $\operatorname{dom}(\psi) := \{ x \in \mathbb{R}^n \mid \psi(x) < \infty \}$; $\partial \psi : \operatorname{dom}(\psi) \rightrightarrows \mathbb{R}^n$ denotes its subdifferential set-valued mapping, defined as $\partial \psi(x) := \{ v \in \mathbb{R}^n \mid \psi(z) \ge \psi(x) + v^\top (z - x) \ \text{for all } z \in \operatorname{dom}(\psi) \}$. A set-valued mapping $\mathcal{F} : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ is $\ell$-Lipschitz continuous, with $\ell > 0$, if $\| u - v \| \le \ell \| x - y \|$ for all $x, y \in \mathbb{R}^n$, $u \in \mathcal{F}(x)$, $v \in \mathcal{F}(y)$; $\mathcal{F}$ is (strictly) monotone if $(u - v)^\top (x - y) \ge (>)\ 0$ for all $x \ne y$, $u \in \mathcal{F}(x)$, $v \in \mathcal{F}(y)$; $\mathcal{F}$ is $\eta$-strongly monotone, with $\eta > 0$, if $(u - v)^\top (x - y) \ge \eta \| x - y \|^2$ for all $x, y$, $u \in \mathcal{F}(x)$, $v \in \mathcal{F}(y)$; $\mathcal{F}$ is $\alpha$-averaged, with $\alpha \in (0, 1)$, if $\mathcal{F} = (1 - \alpha)\operatorname{Id} + \alpha \mathcal{R}$ for some nonexpansive mapping $\mathcal{R}$; $\mathcal{F}$ is $\beta$-cocoercive, with $\beta > 0$, if $\beta \mathcal{F}$ is $\tfrac12$-averaged. With $\mathrm{J}_{\mathcal{F}} := (\operatorname{Id} + \mathcal{F})^{-1}$, we denote the resolvent operator of $\mathcal{F}$, which is $\tfrac12$-averaged if and only if $\mathcal{F}$ is monotone; $\operatorname{fix}(\mathcal{F})$ and $\operatorname{zer}(\mathcal{F})$ denote the set of fixed points and of zeros, respectively.

## II Generalized aggregative games

### II-A Mathematical formulation

We consider a set $\mathcal{N} := \{1, \ldots, N\}$ of noncooperative agents, where each agent $i \in \mathcal{N}$ shall choose its decision variable (i.e., strategy) $x_i$ from the local decision set $\Omega_i \subseteq \mathbb{R}^n$ with the aim of minimizing its local cost function $J_i(x_i, x_{-i})$, which depends both on the local variable $x_i$ (first argument) and on the decision variables of the other agents, $x_{-i} := \operatorname{col}(\{x_j\}_{j \ne i})$ (second argument).

We focus on the class of aggregative games, where the cost function of each agent depends on the local decision variable and on the value of the aggregation function $\sigma : \mathbb{R}^{nN} \to \mathbb{R}^n$, with $x := \operatorname{col}(x_1, \ldots, x_N)$. In particular, we consider average aggregative games, where the aggregation function is the average function, i.e.,

$$\sigma(x) := M x = \tfrac{1}{N} \textstyle\sum_{i=1}^{N} x_i, \quad \text{hence } M := \tfrac{1}{N}\, \mathbf{1}_N^\top \otimes I_n. \tag{1}$$
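For concreteness, the averaging matrix in (1) can be formed and checked numerically; the dimensions below are illustrative assumptions:

```python
import numpy as np

# Build the averaging matrix M = (1/N) 1_N^T ⊗ I_n from (1) and verify that
# sigma(x) = M x equals the average of the agents' decision vectors.
# N (number of agents) and n (decision dimension) are illustrative values.
N, n = 4, 3
M = np.kron(np.ones((1, N)) / N, np.eye(n))   # shape (n, n*N)

rng = np.random.default_rng(0)
x = rng.standard_normal(N * n)                # stacked strategies col(x_1,...,x_N)
sigma = M @ x                                 # aggregate value sigma(x)

assert np.allclose(sigma, x.reshape(N, n).mean(axis=0))
```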

Thus, for each $i \in \mathcal{N}$, there is a function $f_i : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ such that the local cost function can be written as

$$J_i(x_i, x_{-i}) =: f_i(x_i, \sigma(x)). \tag{2}$$

Furthermore, we consider generalized games, where the coupling among the agents arises not only via the cost functions, but also via their feasible decision sets. In our setup, the coupling constraints are described by an affine function, $x \mapsto A x - b$, where $A \in \mathbb{R}^{m \times nN}$ and $b \in \mathbb{R}^m$. Thus, the collective feasible set, $\mathcal{X} \subseteq \mathbb{R}^{nN}$, reads as

$$\mathcal{X} = \Omega \cap \big\{ y \in \mathbb{R}^{nN} \mid A y - b \le \mathbf{0}_m \big\}; \tag{3}$$

while the feasible decision set of each agent $i \in \mathcal{N}$ is characterized by the set-valued mapping $\mathcal{X}_i$, defined as $\mathcal{X}_i(x_{-i}) := \{ y \in \Omega_i \mid A_i y \le b - \sum_{j \ne i} A_j x_j \}$, where $A =: [A_1, \ldots, A_N]$, with $A_i \in \mathbb{R}^{m \times n}$, and $\Omega := \Omega_1 \times \cdots \times \Omega_N$. The set $\Omega_i$ represents the local decision set for agent $i$, while the matrix $A_i$ defines how agent $i$ is involved in the coupling constraints. For instance, the shared constraints in (3) may contain a sparsity pattern that can be defined via a graph, where each agent has a set of “neighbors” with whom to share some constraints.

###### Remark 1 (Affine coupling constraint)

Affine coupling constraints as considered in this paper are very common in the literature of noncooperative games, see [16], [17], [22], [23]. Moreover, we recall that the more general case with separable convex coupling constraints can be reformulated as a game with affine coupling constraints [24, Remark 2].

Next, let us postulate standard convexity and compactness assumptions on the constraint sets, and convexity and differentiability assumptions on the local cost functions.

###### Standing Assumption 1 (Convex differentiable functions)

For each $i \in \mathcal{N}$ and $x_{-i} \in \mathbb{R}^{n(N-1)}$, the function $J_i(\cdot, x_{-i})$ is convex and continuously differentiable.

###### Standing Assumption 2 (Compact convex constraints)

For each $i \in \mathcal{N}$, the set $\Omega_i$ is nonempty, compact and convex. The set $\mathcal{X}$ satisfies Slater’s constraint qualification.

In summary, the aim of each agent $i \in \mathcal{N}$, given the decision variables of the other agents, $x_{-i}$, is to choose a strategy, $x_i$, that solves its local optimization problem according to the game setup previously described, i.e.,

$$\forall i \in \mathcal{N}: \quad \left\{ \begin{array}{cl} \min_{x_i \in \Omega_i} & J_i(x_i, x_{-i}) \\[2pt] \text{s.t.} & A_i x_i \le b - \sum_{j \ne i} A_j x_j. \end{array} \right. \tag{4}$$

From the game-theoretic perspective, we consider the problem to compute a Nash equilibrium, as formalized next.

###### Definition 1 (Generalized Nash equilibrium)

The collective strategy $x^*$ is a generalized Nash equilibrium (GNE) of the game in (4) if $x^* \in \mathcal{X}$ and, for all $i \in \mathcal{N}$, $J_i(x_i^*, x_{-i}^*) \le \inf \{ J_i(y, x_{-i}^*) \mid y \in \mathcal{X}_i(x_{-i}^*) \}$.

In other words, a set of strategies is a Nash equilibrium if no agent can improve its objective function by unilaterally changing its strategy to another feasible one.

Under Standing Assumptions 1 and 2, the existence of a GNE of the game in (4) follows from Brouwer’s fixed-point theorem [10, Proposition 12.7], while uniqueness does not hold in general.

### II-B Variational equilibria and pseudo-gradient of the game

Among all the possible Nash equilibria, we focus on an important subclass of equilibria, with some relevant structural properties, such as “larger social stability” and “economic fairness” [9, Theorem 4.8], that corresponds to the solution set of an appropriate variational inequality. Let us first formalize the notion of variational inequality problem.

###### Definition 2 (Generalized variational inequality)

Consider a closed convex set $S \subseteq \mathbb{R}^n$, a set-valued mapping $\mathcal{G} : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$, and a single-valued mapping $g : \mathbb{R}^n \to \mathbb{R}^n$. The generalized variational inequality problem GVI$(S, \mathcal{G})$ is the problem to find $x^* \in S$ and $g^* \in \mathcal{G}(x^*)$ such that

$$(x - x^*)^\top g^* \ge 0 \quad \text{for all } x \in S.$$

If $\mathcal{G}(x) = \{ g(x) \}$ for all $x \in S$, then GVI$(S, \mathcal{G})$ reduces to the variational inequality problem VI$(S, g)$.

A fundamental mapping in a noncooperative game is the so-called pseudo-gradient, $F : \mathbb{R}^{nN} \rightrightarrows \mathbb{R}^{nN}$, defined as

$$F(x) := \operatorname{col}\big( \{ \partial_{x_i} J_i(x_i, x_{-i}) \}_{i \in \mathcal{N}} \big). \tag{5}$$

Namely, the mapping $F$ is obtained by stacking together the subdifferentials of the agents’ objective functions with respect to their local decision variables.

Under Standing Assumptions 1 and 2, it follows by [10, Proposition 12.4] that any solution of GVI$(\mathcal{X}, F)$ is a Nash equilibrium of the game in (4). The inverse implication is not true in general; actually, in passing from the Nash equilibrium problem to the GVI problem, most solutions are lost [10, §12.2.2]; indeed, a game may have a Nash equilibrium while the corresponding GVI has no solution. Note that, since the cost functions are differentiable (by Standing Assumption 1), GVI$(\mathcal{X}, F)$ reduces to VI$(\mathcal{X}, F)$, which is commonly addressed in the context of game theory via projected-gradient algorithms [14, 16, 19], [20, §12].

Under the postulated standing assumptions, it is shown in [10, Proposition 12.11] that a sufficient condition for the existence (and uniqueness) of a variational GNE of the game in (4) is the (strict) monotonicity of the pseudo-gradient $F$ in (5). Thus, let us assume strong monotonicity of $F$.

###### Standing Assumption 3 (Strong monotonicity)

The pseudo-gradient $F$ in (5) is $\eta$-strongly monotone and $\ell_F$-Lipschitz continuous, for some constants $\eta, \ell_F > 0$.

## III Generalized Nash equilibrium as zero of the sum of two monotone operators

In this section, we exploit operator theory to recast the Nash equilibrium seeking problem into a monotone inclusion, namely, the problem of finding a zero of a set-valued monotone operator. As a first step, we characterize a GNE of the game in terms of the KKT conditions of the coupled optimization problems in (4). For each agent $i \in \mathcal{N}$, let us introduce the Lagrangian function $L_i$, defined as

$$L_i(x, \lambda_i) := J_i(x_i, x_{-i}) + \iota_{\Omega_i}(x_i) + \lambda_i^\top (A x - b),$$

where $\lambda_i \in \mathbb{R}^m_{\ge 0}$ is the Lagrange multiplier associated with the coupling constraints. It follows from [10, §12.2.3] that the set of strategies $x^*$ is a GNE of the game in (4) if and only if the following coupled KKT conditions are satisfied:

$$\forall i \in \mathcal{N}: \quad \begin{cases} 0 \in \partial_{x_i} J_i(x_i^*, x_{-i}^*) + \mathrm{N}_{\Omega_i}(x_i^*) + A_i^\top \lambda_i \\ 0 \le \lambda_i \perp -(A x^* - b) \ge 0. \end{cases} \tag{6}$$

The constraint qualification in Standing Assumption 2 is needed at this stage to ensure boundedness of the dual variables $\lambda_i$.

In a similar fashion, we characterize a variational GNE in terms of KKT conditions by exploiting the Lagrangian duality scheme for the corresponding VI problem, see [25, §3.2]. Specifically, $x^*$ is a solution of VI$(\mathcal{X}, F)$ if and only if $x^* \in \operatorname{argmin}_{y \in \mathcal{X}} \; y^\top F(x^*)$. Then, the associated KKT optimality conditions read as

$$\begin{cases} 0 \in \partial_{x_i} J_i(x_i^*, x_{-i}^*) + \mathrm{N}_{\Omega_i}(x_i^*) + A_i^\top \mu, \quad \forall i \in \mathcal{N} \\ 0 \le \mu \perp -(A x^* - b) \ge 0. \end{cases} \tag{7}$$

To cast (7) in compact form, we introduce the set-valued mapping $T : \mathbb{R}^{nN} \times \mathbb{R}^m \rightrightarrows \mathbb{R}^{nN} \times \mathbb{R}^m$, defined as

$$T : \begin{bmatrix} x \\ \mu \end{bmatrix} \mapsto \begin{bmatrix} \mathrm{N}_\Omega(x) + F(x) + A^\top \mu \\ \mathrm{N}_{\mathbb{R}^m_{\ge 0}}(\mu) - (A x - b) \end{bmatrix}. \tag{8}$$

Essentially, the role of the mapping $T$ is that its zeros correspond to the variational generalized Nash equilibria of the game in (4), as formalized in the next statement.

###### Proposition 1 ([18, Th. 1])

The collective strategy $x^*$ is a variational GNE of the game in (4) if and only if there exists $\mu^* \in \mathbb{R}^m_{\ge 0}$ such that $\operatorname{col}(x^*, \mu^*) \in \operatorname{zer}(T)$. Moreover, if $\operatorname{col}(x^*, \mu^*) \in \operatorname{zer}(T)$, then $x^*$ satisfies the KKT conditions in (6) with Lagrange multipliers $\lambda_i = \mu^*$ for all $i \in \mathcal{N}$.

To conclude this section, we note that the mapping $T$ can be written as the sum of two operators, $T = \mathcal{A} + \mathcal{B}$, defined as

$$\mathcal{A} : \begin{bmatrix} x \\ \mu \end{bmatrix} \mapsto \begin{bmatrix} F(x) \\ b \end{bmatrix}, \tag{9}$$

$$\mathcal{B} : \begin{bmatrix} x \\ \mu \end{bmatrix} \mapsto \begin{bmatrix} \mathrm{N}_\Omega(x) \\ \mathrm{N}_{\mathbb{R}^m_{\ge 0}}(\mu) \end{bmatrix} + \begin{bmatrix} 0 & A^\top \\ -A & 0 \end{bmatrix} \begin{bmatrix} x \\ \mu \end{bmatrix}. \tag{10}$$

The formulation $T = \mathcal{A} + \mathcal{B}$ is called a splitting of $T$, and will be exploited in different ways later on. We show next that the mappings $\mathcal{A}$ and $\mathcal{B}$ are both monotone, which paves the way for splitting algorithms.

###### Lemma 1

The mapping $\mathcal{B}$ in (10) is maximally monotone and $\mathcal{A}$ in (9) is $\tfrac{\eta}{\ell_F^2}$-cocoercive.

First, consider $\mathcal{B}$ in (10). The first term is maximally monotone, since normal cones of closed convex sets are maximally monotone and the concatenation preserves maximality [21, Prop. 20.23]; the second term is linear and skew-symmetric, i.e., $\left[\begin{smallmatrix} 0 & A^\top \\ -A & 0 \end{smallmatrix}\right]^\top = -\left[\begin{smallmatrix} 0 & A^\top \\ -A & 0 \end{smallmatrix}\right]$, thus maximally monotone [21, Ex. 20.30]. Then, the maximal monotonicity of $\mathcal{B}$ follows from [21, Cor. 24.4], since the second term has full domain. To prove that $\mathcal{A}$ is $\tfrac{\eta}{\ell_F^2}$-cocoercive, we note that for all $\omega_1 = \operatorname{col}(x_1, \mu_1)$, $\omega_2 = \operatorname{col}(x_2, \mu_2)$, it holds that

$$\begin{aligned} \langle \mathcal{A}(\omega_1) - \mathcal{A}(\omega_2), \omega_1 - \omega_2 \rangle &= \langle F(x_1) - F(x_2), x_1 - x_2 \rangle \\ &\ge \eta \, \| x_1 - x_2 \|^2 \;\ge\; \tfrac{\eta}{\ell_F^2} \, \| F(x_1) - F(x_2) \|^2 \\ &= \tfrac{\eta}{\ell_F^2} \, \| \mathcal{A}(\omega_1) - \mathcal{A}(\omega_2) \|^2. \end{aligned}$$

The first and second inequalities follow from the $\eta$-strong monotonicity and $\ell_F$-Lipschitz continuity, respectively, of the mapping $F$, postulated in Standing Assumption 3.
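The cocoercivity bound of Lemma 1 can be checked numerically for a linear, strongly monotone pseudo-gradient; the matrix $Q$ below is an illustrative assumption, not taken from the paper:

```python
import numpy as np

# Sanity check: for F(x) = Q x with Q symmetric positive definite, F is
# eta-strongly monotone and ell_F-Lipschitz with eta = lambda_min(Q) and
# ell_F = lambda_max(Q), and the (eta/ell_F^2)-cocoercivity bound holds.
rng = np.random.default_rng(3)
n = 5
Q0 = rng.standard_normal((n, n))
Q = Q0 @ Q0.T + n * np.eye(n)                 # symmetric positive definite
eta = np.linalg.eigvalsh(Q).min()             # strong monotonicity constant
ell = np.linalg.eigvalsh(Q).max()             # Lipschitz constant

F = lambda x: Q @ x
for _ in range(100):                          # random pairs of points
    x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
    lhs = (F(x1) - F(x2)) @ (x1 - x2)
    rhs = (eta / ell**2) * np.linalg.norm(F(x1) - F(x2))**2
    assert lhs >= rhs - 1e-9                  # cocoercivity inequality
```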

## IV Preconditioned Forward-Backward splitting

In light of Lemma 1, the forward-backward (FB) splitting [21, §25.3] guarantees convergence to a zero of $T = \mathcal{A} + \mathcal{B}$. In this section, we discuss a design procedure for FB algorithms, which is particularly useful when the resolvent of the operator $\mathcal{B}$ cannot be computed explicitly. Moreover, we show that two existing algorithms for GNE seeking in aggregative games belong to this class of algorithms.

### IV-A Preconditioned Forward-Backward: Design Procedure

The main idea of the FB splitting is that the zeros of the mapping $T$ in (8) correspond to the fixed points of a certain operator which depends on the chosen splitting (9)–(10), as formalized next.

###### Lemma 2

For any matrix $\Phi \succ 0$, the following equivalence holds:

$$\omega \in \operatorname{zer}(\mathcal{A} + \mathcal{B}) \;\Leftrightarrow\; \omega \in \operatorname{fix}(V_\Phi \circ U_\Phi), \tag{11}$$

where $U_\Phi := \operatorname{Id} - \Phi^{-1} \mathcal{A}$ and $V_\Phi := (\operatorname{Id} + \Phi^{-1} \mathcal{B})^{-1}$.

Consider a vector $\omega \in \mathbb{R}^{nN+m}$; then

$$\begin{aligned} 0 \in (\mathcal{A} + \mathcal{B})(\omega) &\Leftrightarrow 0 \in \Phi^{-1}(\mathcal{A} + \mathcal{B})(\omega) \\ &\Leftrightarrow (\operatorname{Id} - \Phi^{-1}\mathcal{A})(\omega) \in (\operatorname{Id} + \Phi^{-1}\mathcal{B})(\omega) \\ &\Leftrightarrow \omega = V_\Phi \circ U_\Phi(\omega), \end{aligned}$$

where the first equivalence holds since $\Phi$ is nonsingular.

The FB algorithm is the Banach–Picard iteration [21, (1.67)] applied to the mappings in (11), i.e.,

$$\omega^{k+1} = (\operatorname{Id} + \Phi^{-1}\mathcal{B})^{-1} \circ (\operatorname{Id} - \Phi^{-1}\mathcal{A})(\omega^k). \tag{12}$$

In numerical analysis, $U_\Phi = \operatorname{Id} - \Phi^{-1}\mathcal{A}$ represents a forward step, with size and direction defined by $\Phi^{-1}$, while $V_\Phi = (\operatorname{Id} + \Phi^{-1}\mathcal{B})^{-1}$ represents a backward step. Directly from the iteration in (12), we have that

$$(\operatorname{Id} - \Phi^{-1}\mathcal{A})(\omega^k) \in (\operatorname{Id} + \Phi^{-1}\mathcal{B})(\omega^{k+1}) \;\Leftrightarrow\; -\mathcal{A}(\omega^k) \in \mathcal{B}(\omega^{k+1}) + \Phi(\omega^{k+1} - \omega^k). \tag{13}$$

The choice of the preconditioning matrix in (13) plays a key role in the algorithm design. Next, we provide some general guidelines to design .

Design guidelines for the preconditioning matrix $\Phi$:

1. $\Phi \succ 0$ (necessary);

2. $\omega^{k+1}$ in (13) explicitly computable (necessary);

3. $\Phi = \Phi^\top$ (convenient convergence analysis);

4. iterations in (12) sequential (convenient implementation).

Without loss of generality, we denote $\omega = \operatorname{col}(x, \lambda)$ and partition $\Phi$ in conformable blocks, $\Phi = \left[\begin{smallmatrix} \Phi_{11} & \Phi_{12} \\ \Phi_{21} & \Phi_{22} \end{smallmatrix}\right]$; then the inclusion in (13) reads in expanded form as

$$\begin{cases} -F(x^k) \in \mathrm{N}_\Omega(x^{k+1}) + A^\top \lambda^{k+1} + \Phi_{11}(x^{k+1} - x^k) + \Phi_{12}(\lambda^{k+1} - \lambda^k) \\ -b \in \mathrm{N}_{\mathbb{R}^m_{\ge 0}}(\lambda^{k+1}) - A x^{k+1} + \Phi_{21}(x^{k+1} - x^k) + \Phi_{22}(\lambda^{k+1} - \lambda^k). \end{cases} \tag{14}$$

Consider a symmetric matrix $\Phi_s$, designed according to the guidelines above, i.e.,

$$\Phi_s := \begin{bmatrix} \alpha^{-1} & -A^\top \\ -A & \gamma^{-1} I_m \end{bmatrix}, \tag{15}$$

where $\alpha := \operatorname{diag}(\alpha_1 I_n, \ldots, \alpha_N I_n)$ and the coefficients (step sizes) $\alpha_1, \ldots, \alpha_N$ and $\gamma$ are chosen such that $\Phi_s$ has positive eigenvalues (guideline 1), as formalized in the next lemma.

###### Lemma 3

The matrix $\Phi_s$ in (15) is positive definite if

$$\gamma < \big( \|A\|^2 \, \alpha_{i,\max} \big)^{-1}, \qquad \gamma,\; \alpha_{i,\min} > 0, \tag{16}$$

where $\alpha_{i,\max} := \max_{i \in \mathcal{N}} \alpha_i$ and $\alpha_{i,\min} := \min_{i \in \mathcal{N}} \alpha_i$.

The conditions in (16) follow directly by applying the Schur complement to $\Phi_s$ in (15). Now, we present the preconditioned Forward-Backward (pFB) algorithm associated with $\Phi_s$, which is the Banach–Picard iteration in (12) for $\Phi = \Phi_s$.
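As a quick numerical sanity check of condition (16) before stating the algorithm, one can verify positive definiteness of $\Phi_s$ directly; the dimensions, $A$, and the step sizes below are illustrative assumptions:

```python
import numpy as np

# Lemma 3, numerically: with gamma strictly below (||A||^2 * alpha_max)^{-1},
# the symmetric matrix Phi_s in (15) is positive definite.
rng = np.random.default_rng(1)
nN, m = 6, 2
A = rng.standard_normal((m, nN))
alpha = rng.uniform(0.1, 0.5, size=nN)        # per-coordinate step sizes
norm_A = np.linalg.norm(A, 2)                 # maximum singular value ||A||

gamma = 0.9 / (norm_A**2 * alpha.max())       # strictly inside bound (16)

# Phi_s = [[diag(1/alpha), -A^T], [-A, (1/gamma) I_m]]
Phi_s = np.block([
    [np.diag(1.0 / alpha), -A.T],
    [-A, (1.0 / gamma) * np.eye(m)],
])
assert np.min(np.linalg.eigvalsh(Phi_s)) > 0  # positive definite
```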

Algorithm 1: Preconditioned Forward Backward (pFB)

###### Remark 2

The iterations of Algorithm 1 are sequential (guideline 4), namely, the multiplier update, $\lambda^{k+1}$, exploits the most recent value of the agents’ strategies, $x^{k+1}$.
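To make guideline 4 concrete, solving the inclusion (13) with $\Phi_s$ in (15) yields updates of the form sketched below; the toy game data (quadratic costs, box sets $\Omega_i = [0,1]$, a single budget-type coupling constraint) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sketch of the sequential pFB updates obtained from (13) and (15):
#   x^{k+1}   = proj_Omega[ x^k - alpha (F(x^k) + A^T lam^k) ]
#   lam^{k+1} = proj_{>=0}[ lam^k + gamma (A (2 x^{k+1} - x^k) - b) ]
# Toy game (assumed data): J_i(x) = 0.5 (x_i - d_i)^2 + x_i sigma(x),
# Omega_i = [0, 1], coupling constraint sum_i x_i <= 2 (A = 1_N^T, m = 1).
N = 5
rng = np.random.default_rng(2)
d = rng.uniform(0.5, 1.5, size=N)
A = np.ones((1, N))
b = np.array([2.0])

def F(x):                                     # pseudo-gradient (5)
    return (x - d) + x.mean() + x / N         # d J_i / d x_i, i = 1..N

alpha, gamma = 0.1, 0.1                       # small step sizes, cf. (26)
x, lam = np.zeros(N), np.zeros(1)
for _ in range(10000):
    x_new = np.clip(x - alpha * (F(x) + A.T @ lam), 0.0, 1.0)
    # dual update uses the most recent primal iterate x^{k+1} (Remark 2)
    lam = np.maximum(0.0, lam + gamma * (A @ (2 * x_new - x) - b))
    x = x_new

assert (A @ x <= b + 1e-4).all()              # coupling constraint met
res = x - np.clip(x - alpha * (F(x) + A.T @ lam), 0.0, 1.0)
assert np.linalg.norm(res) < 1e-4             # (approximate) fixed point
```

Note the sequential structure: the multiplier update already uses $x^{k+1}$, which is exactly the feature induced by the off-diagonal block $-A$ of $\Phi_s$.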

In the next statement, we show the convergence of Algorithm 1 to a variational generalized Nash equilibrium, under suitable choices of the step sizes.

###### Theorem 1 (Global convergence of pFB)

The sequence $(\omega^k)_{k \in \mathbb{N}}$ defined by Algorithm 1, with step sizes $\alpha_i < 2\eta / \ell_F^2$, for all $i \in \mathcal{N}$, and $\gamma < \tfrac{1}{\|A\|^2}\big( \tfrac{1}{\alpha_{i,\max}} - \tfrac{\ell_F^2}{2\eta} \big)$, with $\alpha_{i,\max} := \max_{i \in \mathcal{N}} \alpha_i$, globally converges to some $\omega^* \in \operatorname{zer}(T)$, with $T$ as in (8).

See Section V.

### IV-B The Asymmetric Projection Algorithm in [16, Alg. 1] is a preconditioned Forward-Backward splitting

We note that Algorithm 1 (with equal step sizes) corresponds to the “asymmetric” projection algorithm (APA) proposed in [16, Alg. 1]. Therein, the algorithm design and its convergence analysis rely on a variational inequality formulation of the Nash equilibrium problem. Specifically, the authors define the convex set $\mathcal{C} := \Omega \times \mathbb{R}^m_{\ge 0}$ and the monotone mapping

$$R : \begin{bmatrix} x \\ \lambda \end{bmatrix} \mapsto \begin{bmatrix} F(x) \\ b \end{bmatrix} + \begin{bmatrix} 0 & A^\top \\ -A & 0 \end{bmatrix} \begin{bmatrix} x \\ \lambda \end{bmatrix},$$

and characterize the GNE as solutions of VI$(\mathcal{C}, R)$. Then, to solve VI$(\mathcal{C}, R)$ in a semi-decentralized fashion, the authors propose an asymmetric implementation of the projection algorithm for variational inequalities [20, 12.5.1], in which each iteration is computed as

$$\omega^{k+1} = \text{solution to VI}(\mathcal{C}, R^k_D), \tag{17}$$

where $R^k_D := R(\omega^k) + D(\cdot - \omega^k)$ and

$$D := \begin{bmatrix} \tau^{-1} I & 0 \\ -2A & \tau^{-1} I \end{bmatrix}. \tag{18}$$

If the parameter $\tau$ in $D$ is chosen such that the symmetric part of $D$ is positive definite, then the unique solution to (17) is $\omega^{k+1} = \operatorname{proj}^D_{\mathcal{C}}\big( \omega^k - D^{-1} R(\omega^k) \big)$, where $\operatorname{proj}^D_{\mathcal{C}}$ is the projection operator characterized by the asymmetric matrix $D$ in (18). Thus, the iteration in (17) equivalently reads as

$$\omega^{k+1} = \big( \operatorname{Id} + D^{-1} \mathrm{N}_{\mathcal{C}} \big)^{-1} \circ \big( \operatorname{Id} - D^{-1} R \big)(\omega^k), \tag{19}$$

which is nothing but a pFB iteration associated with the splitting $T = R + \mathrm{N}_{\mathcal{C}}$ and the preconditioning matrix $\Phi = D$.

###### Remark 3

With (19), we showed that pFB algorithms based on different splittings and preconditioning matrices can lead to the same algorithm. From an operator-theoretic perspective, the convergence analysis is more convenient for the pFB with symmetric matrix $\Phi_s$ (guideline 3), since the properties that $\mathcal{A}$, $\mathcal{B}$ have with the standard inner product are preserved for $\Phi_s^{-1}\mathcal{A}$, $\Phi_s^{-1}\mathcal{B}$, with the inner product $\langle \cdot, \cdot \rangle_{\Phi_s}$, as shown in Section V.

### IV-C The distributed algorithm for aggregative games on graphs in [14, §3] is a preconditioned Forward-Backward

In this section, we show that the synchronous distributed algorithm for NE seeking in network aggregative games proposed in [14, §3] can be written as a pFB splitting.

In [14], the authors consider an aggregative game without coupling constraints, i.e., $\mathcal{X}_i(x_{-i}) = \Omega_i$ for all $i \in \mathcal{N}$, and wherein the agents have no access to the aggregate decision in (1), but build an estimate of it by communicating over an undirected network with their neighboring agents.

Specifically, let $\mathcal{E} \subseteq \mathcal{N} \times \mathcal{N}$ be the set of underlying undirected edges between agents; let $\mathcal{N}_i := \{ j \in \mathcal{N} \mid (i, j) \in \mathcal{E} \}$ denote the set of neighbors of agent $i$, with the convention that $(i, i) \in \mathcal{E}$ for all $i \in \mathcal{N}$; let $\Delta := \operatorname{diag}(|\mathcal{N}_1|, \ldots, |\mathcal{N}_N|)$ be the degree matrix; let $W = [w_{ij}]_{i,j \in \mathcal{N}}$ be the adjacency matrix, such that $w_{ij} > 0$ if $(i, j) \in \mathcal{E}$, $w_{ij} = 0$ otherwise; let $L := \Delta - W$ be the Laplacian matrix, with $L_n := L \otimes I_n$; and let $P$ be a symmetric matrix with the sparsity pattern of the graph, so that, given a vector $v = \operatorname{col}(v_1, \ldots, v_N)$, with $v_i \in \mathbb{R}^n$, the product $P v$ is computable via local exchanges between neighboring agents.

Let $F_\sigma$ be the extension of $F$ in (5) to the augmented space of actions and aggregate estimates, defined as

$$F_\sigma(x, z) := \operatorname{col}\big( \{ \partial_{x_i} f_i(x_i, z_i) \}_{i \in \mathcal{N}} \big). \tag{20}$$

Note that $F_\sigma(x, \mathbf{1}_N \otimes \sigma(x)) = F(x)$. Next, we present a static version of the algorithm in [14, §3], whose convergence to a NE is established in [14, Prop. 2].

Algorithm 2: Koshal–Nedić–Shanbhag algorithm

$$\begin{aligned} x^{k+1} &= \operatorname{proj}_\Omega\!\big[ x^k - \alpha\, F_\sigma(x^k, v^k) \big] \\ v^{k+1} &= \operatorname{proj}_{\mathbb{R}^{nN}}\!\big[ (W \otimes I_n)\, v^k + x^{k+1} - x^k \big] \end{aligned}$$
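A minimal sketch of the two updates above, assuming scalar decisions, a doubly stochastic mixing matrix $W$ on a ring graph with self-loops, and illustrative quadratic costs (all of these are assumptions, for illustration only):

```python
import numpy as np

# Each agent descends on its own cost using a local estimate v_i of the
# aggregate, then mixes estimates over the network with a tracking correction.
# Assumed toy data: f_i(x_i, s) = 0.5 (x_i - d_i)^2 + x_i s, Omega_i = [0, 1].
N = 6
rng = np.random.default_rng(4)
d = rng.uniform(0.5, 1.5, size=N)

# Doubly stochastic mixing matrix for a ring graph with self-loops (assumed).
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

def F_sigma(x, v):
    # Gradient of f_i(x_i, sigma) w.r.t. x_i, with the local estimate v_i in
    # place of sigma(x) and the x_i/N term from x_i's share of the average.
    return (x - d) + v + x / N

alpha = 0.05
x = np.zeros(N)
v = x.copy()                                  # initialize estimates at x^0
for _ in range(3000):
    x_new = np.clip(x - alpha * F_sigma(x, v), 0.0, 1.0)
    v = W @ v + x_new - x                     # consensus + tracking step
    x = x_new

# Dynamic-tracking invariant: mean(v^k) = mean(x^k), since W is doubly
# stochastic and v^0 = x^0.
assert np.allclose(v.mean(), x.mean(), atol=1e-8)
```

The tracking invariant holds by construction: row-stochasticity of $W$ keeps the estimates local averages, and column-stochasticity preserves the network-wide mean across iterations.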

In the following statement, we show that Algorithm 2 is a pFB splitting with symmetric preconditioning matrix.

###### Proposition 2

Let the mappings $\mathcal{A}$, $\mathcal{B}$ and the preconditioning matrix $\Phi$ be defined as

$$\mathcal{A} : \begin{bmatrix} x \\ v \end{bmatrix} \mapsto \begin{bmatrix} F_\sigma(x, v) \\ 0 \end{bmatrix} + \frac12 \begin{bmatrix} 0 & -P \\ P & 0 \end{bmatrix} \begin{bmatrix} x \\ v \end{bmatrix}, \tag{21}$$

$$\mathcal{B} : \begin{bmatrix} x \\ v \end{bmatrix} \mapsto \begin{bmatrix} \mathrm{N}_\Omega(x) \\ L_n v \end{bmatrix} + \frac12 \begin{bmatrix} 0 & P \\ -P & 0 \end{bmatrix} \begin{bmatrix} x \\ v \end{bmatrix}, \tag{22}$$

$$\Phi := \begin{bmatrix} \alpha^{-1} I & -\tfrac12 P \\ -\tfrac12 P & P \end{bmatrix}, \tag{23}$$

where $\alpha > 0$. Then, the sequence generated by the pFB in (12), with $\mathcal{A}$, $\mathcal{B}$, $\Phi$ as in (21)–(23), corresponds to the sequence generated by Algorithm 2.

The iteration in Alg. 2 can be derived by solving the inclusion (13), with $\mathcal{A}$, $\mathcal{B}$, $\Phi$ as in (21)–(23), for $x^{k+1}$ and $v^{k+1}$, and noticing that $\operatorname{proj}_{\mathbb{R}^{nN}} = \operatorname{Id}$.

## V Convergence Analysis

First, we show that the properties the mappings $\mathcal{A}$, $\mathcal{B}$ have with the standard inner product are preserved for $\Phi_s^{-1}\mathcal{A}$, $\Phi_s^{-1}\mathcal{B}$, with the inner product $\langle \cdot, \cdot \rangle_{\Phi_s}$.

###### Lemma 4

The mappings $\Phi_s^{-1}\mathcal{A}$ and $\Phi_s^{-1}\mathcal{B}$ satisfy the following properties in the $\Phi_s$-induced norm:

1. $\Phi_s^{-1}\mathcal{A}$ is $\beta$-cocoercive and $\operatorname{Id} - \Phi_s^{-1}\mathcal{A}$ is $\tfrac{1}{2\beta}$-averaged, where $\beta := \tfrac{\eta}{\ell_F^2}\, \lambda_{\min}\big( \alpha^{-1} - \gamma A^\top A \big)$.

2. $\Phi_s^{-1}\mathcal{B}$ is maximally monotone and $(\operatorname{Id} + \Phi_s^{-1}\mathcal{B})^{-1}$ is $\tfrac12$-averaged.

(i): We need to show that for all $\omega_1, \omega_2 \in \mathbb{R}^{nN+m}$ the following condition holds:

$$\big\langle \Phi_s^{-1}\mathcal{A}(\omega_1) - \Phi_s^{-1}\mathcal{A}(\omega_2),\, \omega_1 - \omega_2 \big\rangle_{\Phi_s} \ge \beta \, \big\| \Phi_s^{-1}\mathcal{A}(\omega_1) - \Phi_s^{-1}\mathcal{A}(\omega_2) \big\|^2_{\Phi_s}. \tag{24}$$

We first provide an upper bound for the right-hand side of (24). Let us denote $\omega_j = \operatorname{col}(x_j, \lambda_j)$, for $j \in \{1, 2\}$; then

$$\begin{aligned} \big\| \Phi_s^{-1}\mathcal{A}(\omega_1) - \Phi_s^{-1}\mathcal{A}(\omega_2) \big\|^2_{\Phi_s} &= \big\langle \Phi_s \Phi_s^{-1} (\mathcal{A}(\omega_1) - \mathcal{A}(\omega_2)),\, \Phi_s^{-1} (\mathcal{A}(\omega_1) - \mathcal{A}(\omega_2)) \big\rangle \\ &= \begin{bmatrix} F(x_1) - F(x_2) \\ 0 \end{bmatrix}^\top \begin{bmatrix} [\Phi_s^{-1}]_{11} & [\Phi_s^{-1}]_{12} \\ [\Phi_s^{-1}]_{21} & [\Phi_s^{-1}]_{22} \end{bmatrix} \begin{bmatrix} F(x_1) - F(x_2) \\ 0 \end{bmatrix} \\ &= \| F(x_1) - F(x_2) \|^2_{[\Phi_s^{-1}]_{11}} \le \big\| [\Phi_s^{-1}]_{11} \big\| \, \| F(x_1) - F(x_2) \|^2 \\ &= \big\| [\Phi_s^{-1}]_{11} \big\| \, \| \mathcal{A}(\omega_1) - \mathcal{A}(\omega_2) \|^2, \end{aligned} \tag{25}$$

where $[\Phi_s^{-1}]_{11} = (\alpha^{-1} - \gamma A^\top A)^{-1}$ is symmetric and positive definite if the step sizes $\alpha_1, \ldots, \alpha_N$, $\gamma$ are chosen as in Lemma 3. Moreover, it holds that $\big\| [\Phi_s^{-1}]_{11} \big\| = \lambda_{\min}\big( [\Phi_s^{-1}]_{11}^{-1} \big)^{-1}$, where $\lambda_{\min}(\cdot)$ denotes the smallest eigenvalue of its matrix argument. Now, we exploit the $\tfrac{\eta}{\ell_F^2}$-cocoercivity of $\mathcal{A}$ and the upper bound in (25) to define the cocoercivity constant $\beta$ in (24).

$$\begin{aligned} \big\langle \Phi_s^{-1}\mathcal{A}(\omega_1) - \Phi_s^{-1}\mathcal{A}(\omega_2),\, \omega_1 - \omega_2 \big\rangle_{\Phi_s} &= \langle \mathcal{A}(\omega_1) - \mathcal{A}(\omega_2),\, \omega_1 - \omega_2 \rangle \\ &\ge \tfrac{\eta}{\ell_F^2} \, \| \mathcal{A}(\omega_1) - \mathcal{A}(\omega_2) \|^2 \\ &\ge \tfrac{\eta}{\ell_F^2} \, \lambda_{\min}\big( [\Phi_s^{-1}]_{11}^{-1} \big) \, \big\| \Phi_s^{-1}\mathcal{A}(\omega_1) - \Phi_s^{-1}\mathcal{A}(\omega_2) \big\|^2_{\Phi_s}. \end{aligned}$$

Thus, the mapping $\Phi_s^{-1}\mathcal{A}$ is cocoercive with constant $\beta = \tfrac{\eta}{\ell_F^2}\, \lambda_{\min}\big( [\Phi_s^{-1}]_{11}^{-1} \big)$ w.r.t. the $\Phi_s$-induced norm. Since $\Phi_s^{-1}\mathcal{A}$ is $\beta$-cocoercive, it follows from [21, Prop. 4.33] that $\operatorname{Id} - \Phi_s^{-1}\mathcal{A}$ is $\tfrac{1}{2\beta}$-averaged. (ii): Since $\mathcal{B}$ is maximally monotone by Lemma 1 and $\Phi_s$ is positive definite if the step sizes are chosen as in Lemma 3, the maximal monotonicity of $\Phi_s^{-1}\mathcal{B}$ follows from [26, Lemma 3.7].

Next, we show that the FB operator $V_{\Phi_s} \circ U_{\Phi_s}$ is averaged if the step sizes are chosen small enough.

###### Lemma 5

The FB operator $V_{\Phi_s} \circ U_{\Phi_s}$ in (11), with $\Phi = \Phi_s$ as in (15), is $\theta$-averaged, with $\theta \in (0, 1)$, if

$$\gamma < \frac{1}{\|A\|^2} \left( \frac{1}{\alpha_{i,\max}} - \frac{\ell_F^2}{2\eta} \right), \qquad \alpha_{i,\max} < \frac{2\eta}{\ell_F^2}. \tag{26}$$

Moreover, if $\alpha_i = \gamma$ for all $i \in \mathcal{N}$, then (26) reads as

$$\gamma < \frac{-1 + \sqrt{1 + \|A\|^2 \big( 4\eta/\ell_F^2 \big)^2}}{\|A\|^2 \big( 4\eta/\ell_F^2 \big)}. \tag{27}$$

By Lemma 4, $\operatorname{Id} - \Phi_s^{-1}\mathcal{A}$ and $(\operatorname{Id} + \Phi_s^{-1}\mathcal{B})^{-1}$ are averaged with constants $\tfrac{1}{2\beta}$ and $\tfrac12$, respectively. If $\beta > \tfrac12$, then $V_{\Phi_s} \circ U_{\Phi_s}$ is $\theta$-averaged with $\theta = \tfrac{2\beta}{4\beta - 1} \in (0, 1)$, by [27, Proposition 2.4]. To conclude, we note that the following condition implies that $\beta > \tfrac12$:

$$\beta = \frac{\eta}{\ell_F^2} \, \lambda_{\min}\big( \alpha^{-1} - \gamma A^\top A \big) \ge \frac{\eta}{\ell_F^2} \left( \frac{1}{\alpha_{i,\max}} - \gamma \|A\|^2 \right) > \frac12, \tag{28}$$

where the second inequality in (28) holds for step sizes chosen as in (26). Moreover, if the step sizes are equal, i.e., $\alpha_i = \gamma$ for all $i \in \mathcal{N}$, then $\beta > \tfrac12$ holds for $\gamma$ as in (27). We can now prove the convergence of Algorithm 1.

Proof of Theorem 1: The iterations in Alg. 1 are obtained explicitly by substituting $\Phi = \Phi_s$ into (14) and solving for $x^{k+1}$, $\lambda^{k+1}$. Thus, Alg. 1 is the Banach–Picard iteration of the mapping $V_{\Phi_s} \circ U_{\Phi_s}$, which is $\theta$-averaged, with $\theta \in (0, 1)$, by Lemma 5, if the step sizes satisfy (26). The convergence of the sequence generated by the Banach–Picard iteration of $V_{\Phi_s} \circ U_{\Phi_s}$ to some $\omega^* \in \operatorname{fix}(V_{\Phi_s} \circ U_{\Phi_s}) = \operatorname{zer}(T)$ follows by [21, Prop. 15.5].

###### Remark 4

The upper bounds in Lemma 5 are increasing functions of the cocoercivity constant $\eta/\ell_F^2$ of the pseudo-gradient $F$ in (5). In particular, the upper bound for the case with equal step sizes in (27) is tighter than that obtained in [16, Theorem 2].
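As a quick consistency check, the equal-step-size bound (27) is the positive root of the quadratic obtained by setting $\alpha_{i,\max} = \gamma$ in (26); a sketch with illustrative constants ($\eta$, $\ell_F$, $\|A\|$ values are assumptions):

```python
import numpy as np

# Verify that the right-hand side of (27) saturates condition (26) when
# alpha_i = gamma: plugging it into (26) yields equality.
eta, ell_F, normA = 1.0, 2.0, 3.0             # illustrative constants
c = 4 * eta / ell_F**2                        # shorthand for 4*eta/ell_F^2
gamma_star = (-1 + np.sqrt(1 + normA**2 * c**2)) / (normA**2 * c)

# (26) with alpha_i,max = gamma: gamma < (1/||A||^2)(1/gamma - ell_F^2/(2 eta))
rhs = (1 / normA**2) * (1 / gamma_star - ell_F**2 / (2 * eta))
assert np.isclose(gamma_star, rhs)            # equality at the bound
```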

## VI Conclusion

By monotone operator theory, projected-gradient methods for generalized Nash equilibrium seeking in aggregative games are preconditioned forward-backward splitting methods, whose convergence has been established for problems with strongly monotone pseudo-gradient mapping.

## References

• [1] N. S. Kukushkin, “Best response dynamics in finite games with additive aggregation,” Games and Economic Behavior, vol. 48, no. 1, pp. 94–110, 2004.
• [2] W. Saad, Z. Han, H. Poor, and T. Başar, “Game theoretic methods for the smart grid,” IEEE Signal Processing Magazine, pp. 86–105, 2012.
• [3] F. Parise, M. Colombino, S. Grammatico, and J. Lygeros, “Mean field constrained charging policy for large populations of plug-in electric vehicles,” in Proc. of the IEEE Conference on Decision and Control, Los Angeles, California, USA, 2014, pp. 5101–5106.
• [4] Z. Ma, S. Zou, L. Ran, X. Shi, and I. Hiskens, “Efficient decentralized coordination of large-scale plug-in electric vehicle charging,” Automatica, vol. 69, pp. 35–47, 2016.
• [5] S. Grammatico, B. Gentile, F. Parise, and J. Lygeros, “A mean field control approach for demand side management of large populations of thermostatically controlled loads,” in Proc. of the IEEE European Control Conference, Linz, Austria, 2015.
• [6] S. Li, W. Zhang, J. Lian, and K. Kalsi, “Market-based coordination of thermostatically controlled loads - Part I: A mechanism design formulation,” IEEE Trans. on Power Systems, vol. 31, no. 2, pp. 1170–1178, 2016.
• [7] N. Li, L. Chen, and M. A. Dahleh, “Demand response using linear supply function bidding,” IEEE Transactions on Smart Grid, vol. 6, no. 4, pp. 1827–1838, 2015.
• [8] J. Barrera and A. Garcia, “Dynamic incentives for congestion control,” IEEE Trans. on Automatic Control, vol. 60, no. 2, pp. 299–310, 2015.
• [9] F. Facchinei and C. Kanzow, “Generalized Nash equilibrium problems,” Annals of Operations Research, vol. 175, no. 1, pp. 177–211, 2010.
• [10] D. P. Palomar and Y. C. Eldar, Convex optimization in signal processing and communications.   Cambridge University Press, 2010.
• [11] L. Pavel, “An extension of duality to a game-theoretic framework,” Automatica, vol. 43, pp. 226–237, 2007.
• [12] A. A. Kulkarni and U. Shanbhag, “On the variational equilibrium as a refinement of the generalized Nash equilibrium,” Automatica, vol. 48, pp. 45–55, 2012.
• [13] F. Salehisadaghiani and L. Pavel, “Distributed Nash equilibrium seeking: A gossip-based algorithm,” Automatica, vol. 72, pp. 209–216, 2016.
• [14] J. Koshal, A. Nedić, and U. Shanbhag, “Distributed algorithms for aggregative games on graphs,” Operations Research, vol. 64, no. 3, pp. 680–704, 2016.
• [15] S. Grammatico, F. Parise, M. Colombino, and J. Lygeros, “Decentralized convergence to Nash equilibria in constrained deterministic mean field control,” IEEE Trans. on Automatic Control, vol. 61, no. 11, pp. 3315–3329, 2016.
• [16] D. Paccagnan, B. Gentile, F. Parise, M. Kamgarpour, and J. Lygeros, “Distributed computation of generalized Nash equilibria in quadratic aggregative games with affine coupling constraints,” in Proc. of the IEEE Conf. on Decision and Control, Las Vegas, USA, 2016.
• [17] S. Grammatico, “Dynamic control of agents playing aggregative games with coupling constraints,” IEEE Trans. on Automatic Control, vol. 62, no. 9, pp. 4537–4548, 2017.
• [18] G. Belgioioso and S. Grammatico, “Semi-decentralized Nash equilibrium seeking in aggregative games with coupling constraints and non-differentiable cost functions,” IEEE Control Systems Letters, vol. 1, no. 2, pp. 400–405, 2017.
• [19] ——, “On convexity and monotonicity in generalized aggregative games,” IFAC-PapersOnLine, vol. 50, no. 1, pp. 14338–14343, 2017.
• [20] F. Facchinei and J. Pang, Finite-dimensional variational inequalities and complementarity problems.   Springer Verlag, 2003.
• [21] H. H. Bauschke and P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces.   Springer, 2011.
• [22] P. Yi and L. Pavel, “A distributed primal-dual algorithm for computation of generalized Nash equilibria via operator splitting methods,” in Proc. of the 56th IEEE Conference on Decision and Control, Melbourne, Australia, 2017, pp. 3841–3846.
• [23] S. Liang, P. Yi, and Y. Hong, “Distributed Nash equilibrium seeking for aggregative games with coupled constraints,” Automatica, vol. 85, pp. 179–185, 2017.
• [24] S. Grammatico, “Proximal dynamics in multi-agent network games,” IEEE Transactions on Control of Network Systems, 2017.
• [25] A. Auslender and M. Teboulle, “Lagrangian duality and related multiplier methods for variational inequality problems,” SIAM Journal on Optimization, vol. 10, no. 4, pp. 1097–1115, 2000.
• [26] P. L. Combettes and B. C. Vũ, “Variable metric forward–backward splitting with applications to monotone inclusions in duality,” Optimization, vol. 63, no. 9, pp. 1289–1318, 2014.
• [27] P. L. Combettes and I. Yamada, “Compositions and convex combinations of averaged nonexpansive operators,” Journal of Mathematical Analysis and Applications, pp. 55–70, 2015.