# On the convergence of discrete-time linear systems: A linear time-varying Mann iteration converges iff the operator is strictly pseudocontractive

We adopt an operator-theoretic perspective to study convergence of linear fixed-point iterations and discrete-time linear systems. We mainly focus on the so-called Krasnoselskij–Mann iteration x(k+1) = (1 − α(k)) x(k) + α(k) A x(k), which is relevant for distributed computation in optimization and game theory, when A is not available in a centralized way. We show that convergence to a vector in the kernel of (I−A) is equivalent to strict pseudocontractiveness of the linear operator x ↦ Ax. We also characterize some relevant operator-theoretic properties of linear operators via eigenvalue location and linear matrix inequalities. We apply the convergence conditions to multi-agent linear systems with vanishing step sizes, in particular, to linear consensus dynamics and equilibrium seeking in monotone linear-quadratic games.


## I Introduction

State convergence is the quintessential problem in multi-agent systems. In fact, multi-agent consensus and cooperation, distributed optimization and multi-player game theory revolve around the convergence of the state variables to an equilibrium, typically unknown a priori. In distributed consensus problems, agents interact with their neighboring peers to collectively achieve global agreement on some value [1]. In distributed optimization, decision makers cooperate locally to agree on primal-dual variables that solve a global optimization problem [2]. Similarly, in multi-player games, selfish decision makers exchange local or semi-global information to achieve an equilibrium for their inter-dependent optimization problems [3]. Applications of multi-agent systems with guaranteed convergence are indeed vast and include, e.g., power systems [4, 5], demand side management [6], network congestion control [7, 8], social networks [9, 10], robotic and sensor networks [11, 12].

From a general mathematical perspective, the convergence problem is a fixed-point problem [13], or equivalently, a zero-finding problem [14]. For example, consensus in multi-agent systems is equivalent to finding a collective state in the kernel of the Laplacian matrix, i.e., in operator-theoretic terms, to finding a zero of the Laplacian, seen as a linear operator.

Fixed-point theory and monotone operator theory are then key to study convergence to multi-agent equilibria [15]. For instance, Krasnoselskij–Mann fixed-point iterations have been adopted in aggregative game theory [16, 17], monotone operator splitting methods in distributed convex optimization [18] and monotone game theory [19, 3, 20]. The main feature of the available results is that sufficient conditions on the problem data are typically proposed to ensure global convergence of fixed-point iterations applied on nonlinear mappings, e.g. compositions of proximal or projection operators and linear averaging operators.

Differently from the literature, in this paper, we are interested in necessary and sufficient conditions for convergence, hence we focus on the three most popular fixed-point iterations applied on linear operators, that essentially are linear time-varying systems with special structure. The motivation is twofold. First, there are still several classes of multi-agent linear systems where convergence is the primary challenge, e.g. in distributed linear time-varying consensus dynamics with unknown graph connectivity (see Section VI-A). Second, fixed-point iterations applied on linear operators can provide non-convergence certificates for multi-agent dynamics that arise from distributed convex optimization and monotone game theory (Section VI-B).

Our main contribution is to show that the Krasnoselskij–Mann fixed-point iterations, possibly time-varying, applied on linear operators converge if and only if the associated matrix has certain spectral properties (Section III). To motivate and achieve our main result, we adopt an operator-theoretic perspective and characterize some regularity properties of linear mappings via eigenvalue location and properties, and linear matrix inequalities (Section IV). In Section VII, we conclude the paper and indicate one future research direction.

Notation: ℝ, ℝ≥0 and ℂ denote the set of real, non-negative real and complex numbers, respectively. D1 denotes the closed unit disk of the complex plane, so that, e.g., ℓD1 denotes the disk of radius ℓ centered at the origin; see Fig. 1 for some graphical examples. H denotes a finite-dimensional Hilbert space with norm ∥·∥. S^n≻0 is the set of positive definite symmetric matrices and, for P ∈ S^n≻0, H_P denotes the space ℝn equipped with the norm ∥x∥P := √(x⊤Px). Id denotes the identity operator. R(θ) denotes the rotation operator of angle θ. Given a mapping T, fix(T) denotes the set of fixed points, and zer(T) the set of zeros. Given a matrix A ∈ ℝn×n, ker(A) denotes its kernel; Λ(A) and ρ(A) denote the spectrum and the spectral radius of A, respectively. 0n and 1n denote vectors with elements all equal to 0 and 1, respectively.

## II Mathematical definitions

### II-A Discrete-time linear systems

In this paper, we consider discrete-time linear time-invariant systems,

 x(k+1)=Ax(k), (1)

and linear time-varying systems with special structure, i.e.,

 x(k+1)=(1−αk)x(k)+αkAx(k), (2)

for some positive sequence (αk)k∈ℕ. Note that for αk = 1 for all k ∈ ℕ, the system in (2) reduces to that in (1).
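For concreteness, the two systems can be sketched in a few lines of Python; the matrix A and the step-size sequence αk = 1/(k+1) below are illustrative choices of ours, not taken from the paper.

```python
# Minimal sketch of the iterations (1) and (2) for a 2x2 example.

def mat_vec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def lti_step(A, x):
    """One step of the time-invariant system (1): x+ = A x."""
    return mat_vec(A, x)

def km_step(A, x, alpha):
    """One step of the time-varying system (2): x+ = (1 - alpha) x + alpha A x."""
    Ax = mat_vec(A, x)
    return [(1 - alpha) * xi + alpha * axi for xi, axi in zip(x, Ax)]

A = [[0.5, 0.25], [0.0, 0.5]]   # illustrative choice with spectrum inside the unit disk
x = [1.0, 1.0]
for k in range(200):
    x = km_step(A, x, 1.0 / (k + 1))
```

Setting alpha = 1 in `km_step` recovers one step of `lti_step`, mirroring the remark above that (2) reduces to (1).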

### II-B System-theoretic definitions

We are interested in the following notion of global convergence, i.e., convergence of the state solution, independently of the initial condition, to some vector.

###### Definition 1 (Convergence)

The system in (2) is convergent if, for all x(0) ∈ ℝn, its solution converges to some x̄ ∈ ℝn, i.e., limk→∞ x(k) = x̄.

Note that in Definition 1, the vector x̄ can depend on the initial condition x(0). In the linear time-invariant case, (1), it is known that convergence holds if and only if all eigenvalues of the matrix A lie strictly inside the unit disk, except possibly the eigenvalue 1, which must be semi-simple, as formalized next.

###### Definition 2 ((Semi-) Simple eigenvalue)

An eigenvalue is semi-simple if it has equal algebraic and geometric multiplicity. An eigenvalue is simple if it has algebraic and geometric multiplicities both equal to 1.

###### Lemma 1

The following statements are equivalent:

1. The system in (1) is convergent;

2. ρ(A) ≤ 1 and the only possible eigenvalue of A on the boundary of the unit disk is 1, which is semi-simple.
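Lemma 1 can be checked numerically; the sketch below (our own implementation, using NumPy) tests that no eigenvalue exceeds the unit disk, that no eigenvalue other than 1 lies on the unit circle, and that the eigenvalue 1, if present, is semi-simple, i.e., that dim ker(A − I) equals its algebraic multiplicity.

```python
import numpy as np

def is_convergent_lti(A, tol=1e-9):
    """Check the conditions of Lemma 1 for x(k+1) = A x(k): all eigenvalues
    in the closed unit disk, none on the unit circle except possibly 1,
    and the eigenvalue 1 (if present) semi-simple."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals = np.linalg.eigvals(A)
    for lam in eigvals:
        if abs(lam) > 1 + tol:
            return False                    # eigenvalue outside the unit disk
        if abs(abs(lam) - 1) <= tol and abs(lam - 1) > tol:
            return False                    # on the unit circle, but not at 1
    alg = int(np.sum(np.abs(eigvals - 1) <= 1e-6))   # algebraic multiplicity of 1
    if alg > 0:
        geo = n - np.linalg.matrix_rank(A - np.eye(n), tol=1e-6)
        return geo == alg                   # semi-simplicity of the eigenvalue 1
    return True

# Eigenvalue 1 semi-simple: convergent.
assert is_convergent_lti([[1, 0], [0, 0.5]])
# Jordan block at 1: defective eigenvalue, not convergent.
assert not is_convergent_lti([[1, 1], [0, 1]])
```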

### II-C Operator-theoretic definitions

With the aim of studying convergence of the dynamics in (1), (2), in this subsection, we introduce some key notions from operator theory in Hilbert spaces.

###### Definition 3 (Lipschitz continuity)

A mapping T : ℝn → ℝn is ℓ-Lipschitz continuous in H_P, with ℓ ≥ 0, if, for all x, y ∈ ℝn, ∥T(x)−T(y)∥P ≤ ℓ∥x−y∥P.

###### Definition 4

In H_P, an ℓ-Lipschitz continuous mapping T is

• ℓ-Contractive (ℓ-CON) if ℓ ∈ (0,1);

• NonExpansive (NE) if ℓ = 1;

• η-Averaged (η-AVG), with η ∈ (0,1), if

 ∥T(x)−T(y)∥2≤∥x−y∥2−1−ηη∥(Id−T)(x)−(Id−T)(y)∥2, (3)

or, equivalently, if there exist a nonexpansive mapping B and η ∈ (0,1) such that

 T=(1−η)Id+ηB.
• κ-strictly Pseudo-Contractive (κ-sPC), with κ ∈ [0,1), if

 ∥T(x)−T(y)∥2≤∥x−y∥2+κ∥(Id−T)(x)−(Id−T)(y)∥2. (4)

###### Definition 5

A mapping T is:

• Contractive (CON) if there exist ℓ ∈ (0,1) and P ∈ S^n≻0 such that it is ℓ-CON in H_P;

• Averaged (AVG) if there exist η ∈ (0,1) and P ∈ S^n≻0 such that it is η-AVG in H_P;

• strictly Pseudo-Contractive (sPC) if there exist κ ∈ [0,1) and P ∈ S^n≻0 such that it is κ-sPC in H_P.

## III Main results: Fixed-point iterations on linear mappings

In this section, we provide necessary and sufficient conditions for the convergence of some well-known fixed-point iterations applied on linear operators, i.e.,

 A:x↦Ax,with A∈Rn×n. (5)

First, we consider the Banach–Picard iteration [14, (1.69)] on a generic mapping T : ℝn → ℝn, i.e., for all k ∈ ℕ,

 x(k+1)=T(x(k)), (6)

whose convergence is guaranteed if T is averaged, see [14, Prop. 5.16]. The next statement shows that averagedness is also a necessary condition when the mapping is linear.

###### Proposition 1 (Banach–Picard iteration)

The following statements are equivalent:

1. A in (5) is averaged;

2. the solution to the system

 x(k+1)=Ax(k) (7)

converges to some x̄ ∈ fix(A).

If the mapping T is merely nonexpansive, then the sequence generated by the Banach–Picard iteration in (6) may fail to produce a fixed point of T; for instance, this is the case for A = −Id. In these cases, a relaxed iteration can be used, e.g. the Krasnoselskij–Mann iteration [14, Eq. (5.15)]. Specifically, let us distinguish the case with time-invariant step sizes, known as Krasnoselskij iteration [13, Chap. 3], and the case with time-varying, vanishing step sizes, known as Mann iteration [13, Chap. 4]. The former is defined by

 x(k+1)=(1−α)x(k)+αT(x(k)), (8)

for all k ∈ ℕ, where α ∈ (0,1) is a constant step size.

The convergence of the discrete-time system in (8) to a fixed point of the mapping T is guaranteed, for any arbitrary α ∈ (0,1), if T is nonexpansive [14, Th. 5.15], or if T, defined from a compact, convex set to itself, is strictly pseudo-contractive and α is sufficiently small [13, Theorem 3.5]. In the next statement, we show that if the mapping T is linear, and α is chosen small enough, then strict pseudo-contractiveness is necessary and sufficient for convergence.

###### Theorem 1 (Krasnoselskij iteration)

Let κ ∈ [0,1) and α ∈ (0, 1−κ). The following statements are equivalent:

1. A in (5) is κ-strictly pseudo-contractive;

2. the solution to the system

 x(k+1)=(1−α)x(k)+αAx(k) (9)

converges to some x̄ ∈ fix(A).
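A minimal numerical illustration of the role of the relaxation: A = −Id is nonexpansive and strictly pseudocontractive, but not averaged, so the Banach–Picard iteration oscillates while the Krasnoselskij iteration with a constant step size α ∈ (0,1) converges to fix(A) = {0}. The scalar setting and the numerical values below are our own choices.

```python
def picard(a, x0, iters):
    """Banach-Picard iteration x(k+1) = a * x(k) for a scalar linear map."""
    x = x0
    for _ in range(iters):
        x = a * x
    return x

def krasnoselskij(a, x0, alpha, iters):
    """Krasnoselskij iteration x(k+1) = (1 - alpha) x(k) + alpha * a * x(k)."""
    x = x0
    for _ in range(iters):
        x = (1 - alpha) * x + alpha * a * x
    return x

# On a = -1 (i.e., A = -Id), Picard oscillates between x0 and -x0.
x_pic = picard(-1.0, 1.0, 101)                 # odd number of steps: x = -x0
# The relaxed iteration contracts by (1 - 2*alpha) per step and converges to 0.
x_kra = krasnoselskij(-1.0, 1.0, 0.25, 100)
```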

In Theorem 1, the admissible step sizes for the Krasnoselskij iteration depend on the parameter κ that quantifies the strict pseudo-contractiveness of the mapping A. When the parameter κ is unknown, or hard to quantify, one can adopt time-varying step sizes, e.g. the Mann iteration:

 x(k+1)=(1−αk)x(k)+αkT(x(k)), (10)

for all k ∈ ℕ, where the step sizes (αk)k∈ℕ shall be chosen as follows.

###### Assumption 1 (Mann sequence)

The sequence (αk)k∈ℕ is such that αk ∈ (0, ᾱ] for all k ∈ ℕ, for some ᾱ < 1, limk→∞ αk = 0 and ∑k∈ℕ αk = ∞.

The convergence of (10) to a fixed point of the mapping T is guaranteed if T, defined from a compact, convex set to itself, is strictly pseudo-contractive [13, Theorem 3.5]. In the next statement, we show that if the mapping T is linear, then strict pseudo-contractiveness is necessary and sufficient for convergence.

###### Theorem 2 (Mann iteration)

Let (αk)k∈ℕ be a Mann sequence as in Assumption 1. The following statements are equivalent:

1. A in (5) is strictly pseudocontractive;

2. the solution to

 x(k+1)=(1−αk)x(k)+αkAx(k) (11)

converges to some x̄ ∈ fix(A).
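The dichotomy of Theorem 2 can be observed numerically. In the sketch below (matrices and Mann sequence αk = 1/(k+2) are our own illustrative choices), the first matrix has eigenvalues ±2i, with real part strictly less than 1, hence it is sPC and the Mann iteration converges even though the matrix is expansive; the second has eigenvalues 1±i, hence it is not sPC and the iterate keeps rotating.

```python
import math

def mann(A, x0, steps):
    """Mann iteration x(k+1) = (1 - a_k) x(k) + a_k A x(k), with a_k = 1/(k+2)."""
    x = list(x0)
    for k in range(steps):
        a = 1.0 / (k + 2)
        Ax = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
        x = [(1 - a) * xi + a * axi for xi, axi in zip(x, Ax)]
    return x

# Eigenvalues +/- 2i: Re < 1, hence sPC, although ||A|| = 2 (expansive);
# the Mann iteration still converges to fix(A) = {0}.
x = mann([[0.0, -2.0], [2.0, 0.0]], [1.0, 1.0], 2000)

# Eigenvalues 1 +/- i: Re = 1 with nonzero imaginary part, hence not sPC;
# the norm never decreases and the iterate keeps rotating (case 3 of the proof).
y = mann([[1.0, -1.0], [1.0, 1.0]], [1.0, 0.0], 2000)
```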

## IV Operator-theoretic characterization of linear mappings

In this section, we characterize the operator-theoretic properties of linear mappings via necessary and sufficient linear matrix inequalities and conditions on the spectrum of the corresponding matrices. We exploit these technical results in Section V, to prove convergence of the fixed-point iterations presented in Section III.

###### Lemma 2 (Lipschitz continuous linear mapping)

Let P ∈ S^n≻0 and ℓ ≥ 0. The following statements are equivalent:

1. A in (5) is ℓ-Lipschitz continuous in H_P;

2. A⊤PA ≼ ℓ²P.

It directly follows from Definition 3.

###### Lemma 3 (Linear contractive/nonexpansive mapping)

Let ℓ ∈ (0,1). The following statements are equivalent:

1. A in (5) is an ℓ-contraction;

2. ∃ P ∈ S^n≻0 such that A⊤PA ≼ ℓ²P;

3. the spectrum of A is such that

 Λ(A)⊂ℓD1, and ∀λ∈Λ(A)∩bdr(ℓD1): λ semi-simple. (12)

If ℓ = 1, the previous equivalent statements hold if and only if A in (5) is nonexpansive.

The equivalence between (i) and (ii) follows from Lemma 2. By the Lyapunov theorem, (ii) holds if and only if the discrete-time linear system x(k+1) = (1/ℓ)Ax(k) is (at least marginally) stable, i.e., Λ(A) ⊂ ℓD1 and the eigenvalues of A on the boundary of the disk, bdr(ℓD1), are semi-simple, which is (iii). The last statement follows by noticing that a 1-contractive mapping is, by Definition 4, nonexpansive.

###### Lemma 4 (Linear averaged mapping)

Let η ∈ (0,1). The following statements are equivalent:

1. A in (5) is η-averaged;

2. ∃ P ∈ S^n≻0 such that

 A⊤PA≼(2η−1)P+(1−η)(A⊤P+PA);
3. Aη := (1−1/η)Id + (1/η)A is nonexpansive;

4. the spectrum of A is such that

 Λ(A)⊂Dη, and ∀λ∈Λ(A)∩bdr(Dη): λ semi-simple, (13)

where Dη := (1−η){1}+ηD1 is the disk of radius η centered at 1−η.

The equivalence (i) ⇔ (ii) follows directly by inequality (3) in Definition 4. By [14, Prop. 4.35], A is η-AVG if and only if the linear mapping Aη = (1−1/η)Id + (1/η)A is NE, which proves (i) ⇔ (iii). To conclude, we show that (iii) ⇔ (iv). By Lemma 3, the linear mapping Aη is NE if and only if

 Λ(Aη)⊂D1, ∀λ∈Λ(Aη)∩bdr(D1): λ semi-simple (14)
 ⇔ Λ(A)⊂(1−η){1}+ηD1=Dη, ∀λ∈Λ(A)∩bdr(Dη): λ semi-simple (15)

where the equivalence (14) ⇔ (15) holds because Λ(A) = (1−η){1} + ηΛ(Aη), and because the linear combination with the identity matrix does not alter the geometric multiplicity of the eigenvalues.

###### Lemma 5 (Linear strict pseudocontractive mapping)

Let κ ∈ [0,1). The following statements are equivalent:

1. A in (5) is κ-strictly pseudocontractive;

2. ∃ P ∈ S^n≻0 such that

 (1−κ)A⊤PA≼(1+κ)P−κ(A⊤P+PA); (16)
3. Asκ := κId + (1−κ)A is nonexpansive;

4. the spectrum of A is such that

 Λ(A)⊂{−κ/(1−κ)}+(1/(1−κ))D1, and ∀λ∈Λ(A)∩bdr({−κ/(1−κ)}+(1/(1−κ))D1): λ semi-simple; (17)
5. (1−α)Id+αA is (α/(1−κ))-averaged, for all α ∈ (0, 1−κ).

The equivalence (i) ⇔ (ii) follows directly by inequality (4) in Definition 4. To prove that (ii) ⇔ (iii), we note that the LMI in (16) can be recast as

 (κI+(1−κ)A)⊤P(κI+(1−κ)A)≼P, (18)

which, by Lemma 3, holds true if and only if the mapping Asκ : x ↦ (κI+(1−κ)A)x is NE.
(iii) ⇔ (iv): By Lemma 3, Asκ is NE if and only if

 Λ(Asκ)⊂D1, ∀λ∈Λ(Asκ)∩bdr(D1): λ semi-simple (19)
 ⇔ Λ(A)⊂{−κ/(1−κ)}+(1/(1−κ))D1, ∀λ∈Λ(A)∩bdr({−κ/(1−κ)}+(1/(1−κ))D1): λ semi-simple (20)

where the equivalence (19) ⇔ (20) holds because Λ(Asκ) = κ{1} + (1−κ)Λ(A), and because the linear combination with the identity matrix does not alter the geometric multiplicity of the eigenvalues. (iii) ⇔ (v): By Definition 4 and [14, Prop. 4.35], Asκ is NE if and only if (1−η)Id + ηAsκ is η-AVG, for all η ∈ (0,1). Since (1−η)Id + ηAsκ = (1−η(1−κ))Id + η(1−κ)A, setting α := η(1−κ) ∈ (0, 1−κ) concludes the proof.
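Since the disks in (17) exhaust the half-plane {λ : Re(λ) < 1} as κ → 1, Lemma 5 yields a simple spectral test, sketched below with NumPy (our own implementation): the linear mapping is sPC for some κ if and only if every eigenvalue has real part strictly less than 1, except possibly a semi-simple eigenvalue at 1.

```python
import numpy as np

def is_strictly_pseudocontractive(A, tol=1e-9):
    """Spectral test distilled from Lemma 5 / Theorem 2: x -> A x is sPC
    (in some norm) iff every eigenvalue has real part < 1, except possibly
    a semi-simple eigenvalue at 1."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals = np.linalg.eigvals(A)
    for lam in eigvals:
        if lam.real > 1 + tol:
            return False          # case 2: real part greater than 1
        if abs(lam.real - 1) <= tol and abs(lam.imag) > tol:
            return False          # case 3: Re = 1 with nonzero imaginary part
    alg_one = int(np.sum(np.abs(eigvals - 1) <= 1e-6))
    if alg_one > 0:               # case 1: the eigenvalue 1 must be semi-simple
        geo_one = n - np.linalg.matrix_rank(A - np.eye(n), tol=1e-6)
        return geo_one == alg_one
    return True

assert is_strictly_pseudocontractive([[0, -2], [2, 0]])      # Re(lam) = 0 < 1
assert not is_strictly_pseudocontractive([[1, 1], [0, 1]])   # defective eigenvalue 1
assert not is_strictly_pseudocontractive([[1, -1], [1, 1]])  # Re(lam) = 1, Im != 0
```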

## V Proofs of the main results

### Proof of Proposition 1 (Banach–Picard iteration)

We recall that, by Lemma 4, A is AVG if and only if there exists η ∈ (0,1) such that Λ(A) ⊂ Dη and every λ ∈ Λ(A) ∩ bdr(Dη) is semi-simple, and we notice that, for all η ∈ (0,1), Dη ⊂ D1 and bdr(Dη) ∩ bdr(D1) = {1}. Hence A is averaged if and only if the eigenvalues of A are strictly inside the unit disk, except for the eigenvalue in 1 which, if present, is semi-simple. The latter is a necessary and sufficient condition for the convergence of (7), by Lemma 1.

### Proof of Theorem 1 (Krasnoselskij iteration)

(i) ⇔ (ii): By Lemma 5, A is κ-sPC if and only if Tα := (1−α)Id + αA is (α/(1−κ))-AVG, for all α ∈ (0, 1−κ); therefore, if and only if Tα is AVG with η = α/(1−κ) ∈ (0,1). By Proposition 1, the latter is equivalent to the global convergence of the Banach–Picard iteration applied on Tα, which corresponds to the Krasnoselskij iteration on A, with fix(Tα) = fix(A).

### Proof of Theorem 2 (Mann iteration)

Proof that (i) ⇒ (ii): Define the bounded sequence λk := αk/η̄, for some η̄ ∈ (0,1) to be chosen. Thus, x(k+1) = (1−λk)x(k) + λkB(x(k)), with B := (1−η̄)Id + η̄A. Since A is sPC, we can choose η̄ small enough such that B is NE; specifically, we shall choose η̄ ∈ (0, 1−κ]. Note that fix(B) = fix(A). Since limk→∞ αk = 0, we have that limk→∞ λk = 0, hence there exists k̄ ∈ ℕ such that λk ∈ (0,1) for all k ≥ k̄. Moreover, since αk ≤ ᾱ for all k ∈ ℕ, we have that the solution x(k̄) is finite. Therefore, we can define x̃(k) := x(k+k̄) for all k ∈ ℕ, and λ̃k := λk+k̄ for all k ∈ ℕ, where ∑k λ̃k(1−λ̃k) = ∞ since ∑k αk = ∞ and λ̃k → 0. The proof then follows by applying [14, Th. 5.14 (iii)] to the Mann iteration x̃(k+1) = (1−λ̃k)x̃(k) + λ̃kB(x̃(k)).

Proof that (ii) ⇒ (i): For the sake of contradiction, suppose that A is not sPC, i.e., at least one of the following facts must hold: 1) A has an eigenvalue in 1 that is not semi-simple; 2) A has a real eigenvalue greater than 1; 3) A has a pair of complex eigenvalues 1+ε±iω, with ε ≥ 0 and ω > 0. We show next that each of these three facts implies non-convergence of (11). Without loss of generality (i.e., up to a linear transformation), we can assume that A is in Jordan normal form.

1) A has an eigenvalue in 1 that is not semi-simple. Due to (the bottom part of) the associated Jordan block, the vector dynamics in (11) contain the two-dimensional linear time-varying dynamics

 y(k+1) = ((1−αk)[1 0; 0 1] + αk[1 1; 0 1]) y(k) = [1 αk; 0 1] y(k).

For y2(0) ≠ 0, we have that the solution is such that y2(k) = y2(0) and y1(k+1) = y1(k) + αk y2(0), which implies that |y1(k)| → ∞, since ∑k αk = ∞. Thus, y1 diverges and we have a contradiction.

2) A has a real eigenvalue equal to 1+ε, for some ε > 0. Again due to (the bottom part of) the associated Jordan block, the vector dynamics in (11) must contain the scalar dynamics y(k+1) = (1+εαk) y(k). The solution then reads as y(k+1) = (∏kh=0 (1+εαh)) y(0). Now, since ∑k αk = ∞, it holds that ∏kh=0 (1+εαh) → ∞, by Assumption 1. Therefore, y and hence x diverge, and we reach a contradiction.

3) A has a pair of complex eigenvalues 1+ε±iω, with ε ≥ 0 and ω > 0. Due to the structure of the associated Jordan block, the vector dynamics in (11) contain the two-dimensional dynamics

 z(k+1) = ((1−αk)[1 0; 0 1] + αk[1+ε −ω; ω 1+ε]) z(k) = [1+εαk −ωαk; ωαk 1+εαk] z(k).

Now, we define ρk := √((1+εαk)² + ω²αk²) ≥ 1, and the angle θk such that cos(θk) = (1+εαk)/ρk and sin(θk) = ωαk/ρk, i.e., θk = arctan(ωαk/(1+εαk)). Then, we have that [1+εαk −ωαk; ωαk 1+εαk] = ρk R(θk), hence, the solution reads as

 z(k+1)=(∏kh=0ρh)R(∑kh=0θh)z(0).

Since ρk ≥ 1 for all k ∈ ℕ, if the product ∏kh=0 ρh diverges, then ∥z(k)∥ and hence ∥x(k)∥ diverge as well. Thus, let us assume that the product converges. By the limit comparison test, the series ∑k θk converges (diverges) if and only if the series ∑k ωαk converges (diverges). The latter diverges since ∑k αk = ∞. It follows that ∑kh=0 θh diverges, hence z(k) keeps rotating indefinitely, which is a contradiction.
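The argument in case 3 can be checked numerically for ε = 0, ω = 1 and the illustrative Mann sequence αk = 1/(k+1) (our own choice): the log of the product of the moduli ρk = √(1+αk²) stays bounded, while the sum of the angles θk = arctan(αk) grows like the harmonic series.

```python
import math

# Case 3 with eps = 0, omega = 1, a_k = 1/(k+1):
# rho_k = sqrt(1 + a_k^2), theta_k = arctan(a_k).
N = 100000
log_prod_rho = sum(0.5 * math.log(1.0 + (1.0 / (k + 1)) ** 2) for k in range(N))
sum_theta = sum(math.atan(1.0 / (k + 1)) for k in range(N))
# The product of the moduli converges (bounded log), while the total
# rotation angle exceeds several full turns: the iterate cannot settle.
```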

## VI Application to multi-agent linear systems

### VI-A Consensus via time-varying Laplacian dynamics

We consider a connected graph of n nodes, associated with agents seeking consensus, with Laplacian matrix L ∈ ℝn×n. To solve the consensus problem, we study the following discrete-time linear time-varying dynamics:

 x(k+1) =x(k)−αkLx(k) (21a) =(1−αk)x(k)+αk(I−L)x(k), (21b)

where αk > 0 and, for simplicity, the state of each agent is a scalar variable, xi(k) ∈ ℝ.

Since the dynamics in (21) have the structure of a Mann iteration, in view of Theorem 2, we have the following result.

###### Corollary 1

Let (αk)k∈ℕ be a Mann sequence. The system in (21) asymptotically reaches consensus, i.e., the solution to (21) converges to x̄ = c 1n, for some c ∈ ℝ.

Since the graph is connected, L has one (simple) eigenvalue at 0, and n−1 eigenvalues with strictly-positive real part. Therefore, the matrix I−L in (21b) has one simple eigenvalue in 1, and n−1 eigenvalues with real part strictly less than 1. By Lemma 5, I−L is sPC and, by Theorem 2, x(k) globally converges to some x̄ ∈ fix(I−L) = ker(L). Since L is a Laplacian matrix, Lx̄ = 0 implies consensus, i.e., x̄ = c 1n, for some c ∈ ℝ.

We emphasize that via (21), consensus is reached without assuming that the agents know the algebraic connectivity of the graph, i.e., the strictly-positive Fiedler eigenvalue of L. We have only assumed that the agents agree on a sequence of vanishing, bounded step sizes, (αk)k∈ℕ. However, we envision that agent-dependent step sizes can be used as well, e.g. via matricial Mann iterations, see [13, §4.1].

Let us simulate the time-varying consensus dynamics in (21) for a graph with 3 nodes, with weighted adjacency matrix A such that a12 = a13 = 1/2 and a23 = a31 = 1, hence with Laplacian matrix

 L = Dout − A = [1 −1/2 −1/2; 0 1 −1; −1 0 1].

We note that L has a simple eigenvalue at 0, while its other eigenvalues have strictly-positive real part. Since we do not assume that the agents know the connectivity of the graph, we simulate (21) with step sizes that are initially larger than the maximum constant-step value for which convergence would hold. In Fig. 2, we compare the norm of the disagreement vectors obtained with two different Mann sequences. We observe that convergence within small tolerances is faster in the case with larger step sizes.
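A sketch of such a simulation in pure Python; the 3-node Laplacian (row sums zero, one simple zero eigenvalue) and the Mann sequence αk = 1/(k+1) below are our own illustrative choices, not necessarily those used for Fig. 2.

```python
# Consensus dynamics (21): x(k+1) = x(k) - a_k L x(k), with a_k = 1/(k+1).
L = [[1.0, -0.5, -0.5],
     [0.0,  1.0, -1.0],
     [-1.0, 0.0,  1.0]]
x = [3.0, -1.0, 2.0]
for k in range(20000):
    a = 1.0 / (k + 1)
    Lx = [sum(lij * xj for lij, xj in zip(row, x)) for row in L]
    x = [xi - a * lxi for xi, lxi in zip(x, Lx)]
# Disagreement: maximum deviation of the state from its average.
avg = sum(x) / len(x)
disagreement = max(abs(xi - avg) for xi in x)
```

The disagreement decays to (numerical) zero even though the first step sizes exceed any constant step for which the iteration would contract.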

### VI-B Two-player zero-sum linear-quadratic games: Non-convergence of projected pseudo-gradient dynamics

We consider two-player zero-sum games with linear-quadratic structure, i.e., we consider two agents, with cost functions f1(x1,x2) := x1⊤Cx2 and f2(x1,x2) := −x1⊤Cx2, respectively, for some square matrix C ∈ ℝn×n. In particular, we study discrete-time dynamics for solving the Nash equilibrium problem, that is, the problem to find a pair (x∗1, x∗2) such that:

 x∗1 ∈ argminx1∈ℝn f1(x1, x∗2),  x∗2 ∈ argminx2∈ℝn f2(x∗1, x2).

A classic solution approach is the pseudo-gradient method, namely the discrete-time dynamics

 x(k+1) =x(k)−αkFx(k) (22a) =(1−αk)x(k)+αk(I−F)x(k), (22b)

where F is the so-called pseudo-gradient mapping of the game, which in our case is linear:

 F(x) := [∇x1 f1(x1,x2); ∇x2 f2(x1,x2)] = [Cx2; −C⊤x1] = [0 C; −C⊤ 0] x =: Fx,

and (αk)k∈ℕ is a sequence of vanishing step sizes, e.g. a Mann sequence. In our case, x∗ is a Nash equilibrium if and only if Fx∗ = 0 [19, Th. 1].

By Theorem 2, convergence of the system in (22) holds if and only if the mapping Id − F is strictly pseudocontractive. In the next statement, we show that this is not the case.

###### Corollary 2

Let (αk)k∈ℕ be a Mann sequence and C ≠ 0. The system in (22) does not globally converge.

It follows by Lemma 5 that the mapping Id − F is strictly pseudocontractive if and only if the eigenvalues of F either have strictly-positive real part, or are semi-simple and equal to 0. Since F⊤ = −F, the eigenvalues of F lie on the imaginary axis and, since C ≠ 0, at least one pair is non-zero. Therefore, Id − F is not strictly pseudocontractive and the proof follows by Theorem 2.

Let us numerically simulate the discrete-time system in (22). Figure 3 shows persistent oscillations, due to the lack of strict pseudo-contractiveness of Id − F. In fact, the eigenvalues of Id − F have real part equal to 1 and non-zero imaginary part. The example provides a non-convergence result: pseudo-gradient methods do not ensure global convergence in convex games with (non-strictly) monotone pseudo-gradient mapping, not even with vanishing step sizes and linear-quadratic structure.
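The oscillation can be reproduced with a minimal sketch: for the scalar case C = 1 (a hypothetical choice of ours, not necessarily the parameters used for Fig. 3), F = [0 1; −1 0] has eigenvalues ±i, so Id − F is not sPC and the iterate of (22) rotates without approaching the unique equilibrium 0.

```python
import math

# Pseudo-gradient dynamics (22) with F = [[0, 1], [-1, 0]] (C = 1, hypothetical)
# and Mann steps a_k = 1/(k+1). Each step multiplies the norm by
# sqrt(1 + a_k^2) >= 1, so x cannot approach the equilibrium 0.
x = [1.0, 0.0]
for k in range(20000):
    a = 1.0 / (k + 1)
    Fx = [x[1], -x[0]]
    x = [xi - a * fxi for xi, fxi in zip(x, Fx)]
final_norm = math.hypot(x[0], x[1])
```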

## VII Conclusion and outlook

Convergence in discrete-time linear systems can be equivalently characterized via operator-theoretic notions. Remarkably, the time-varying Mann iteration applied on linear mappings converges if and only if the considered linear operator is strictly pseudocontractive. This result implies that Laplacian-driven linear time-varying consensus dynamics with Mann step sizes do converge. It also implies that projected pseudo-gradient dynamics for Nash equilibrium seeking in monotone games do not necessarily converge.

Future research will focus on studying convergence of other, more general, linear fixed-point iterations and of discrete-time linear systems with uncertainty, e.g. polytopic.

## References

• [1] R. Olfati-Saber, A. Fax, and R. Murray, “Consensus and cooperation in networked multi-agent systems,” Proc. of the IEEE, vol. 95, no. 1, pp. 215–233, 2007.
• [2] A. Nedić, A. Ozdaglar, and P. Parrilo, “Constrained consensus and optimization in multi-agent networks,” IEEE Trans. on Automatic Control, vol. 55, no. 4, pp. 922–938, 2010.
• [3] P. Yi and L. Pavel, “A distributed primal-dual algorithm for computation of generalized Nash equilibria via operator splitting methods,” Proc. of the IEEE Conf. on Decision and Control, pp. 3841–3846, 2017.
• [4] F. Dörfler, J. Simpson-Porco, and F. Bullo, “Breaking the hierarchy: Distributed control and economic optimality in microgrids,” IEEE Trans. on Control of Network Systems, vol. 3, no. 3, pp. 241–253, 2016.
• [5] F. Dörfler and S. Grammatico, “Gather-and-broadcast frequency control in power systems,” Automatica, vol. 79, pp. 296–305, 2017.
• [6] A.-H. Mohsenian-Rad, V. Wong, J. Jatskevich, R. Schober, and A. Leon-Garcia, “Autonomous demand-side management based on game-theoretic energy consumption scheduling for the future smart grid,” IEEE Trans. on Smart Grid, vol. 1, no. 3, pp. 320–331, 2010.
• [7] R. Jain and J. Walrand, “An efficient Nash-implementation mechanism for network resource allocation,” Automatica, vol. 46, pp. 1276–1283, 2010.
• [8] J. Barrera and A. Garcia, “Dynamic incentives for congestion control,” IEEE Trans. on Automatic Control, vol. 60, no. 2, pp. 299–310, 2015.
• [9] J. Ghaderi and R. Srikant, “Opinion dynamics in social networks with stubborn agents: Equilibrium and convergence rate,” Automatica, vol. 50, pp. 3209–3215, 2014.
• [10] S. R. Etesami and T. Başar, “Game-theoretic analysis of the Hegselmann–Krause model for opinion dynamics in finite dimensions,” IEEE Trans. on Automatic Control, vol. 60, no. 7, pp. 1886–1897, 2015.
• [11] S. Martínez, F. Bullo, J. Cortés, and E. Frazzoli, “On synchronous robotic networks – Part I: Models, tasks, and complexity,” IEEE Trans. on Automatic Control, vol. 52, pp. 2199–2213, 2007.
• [12] M. Stanković, K. Johansson, and D. Stipanović, “Distributed seeking of Nash equilibria with applications to mobile sensor networks,” IEEE Trans. on Automatic Control, vol. 57, no. 4, pp. 904–919, 2012.
• [13] V. Berinde, Iterative Approximation of Fixed Points.   Springer, 2007.
• [14] H. H. Bauschke and P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces.   Springer, 2011.
• [15] E. K. Ryu and S. Boyd, “A primer on monotone operator methods,” Appl. Comput. Math., vol. 15, no. 1, pp. 3–43, 2016.
• [16] S. Grammatico, F. Parise, M. Colombino, and J. Lygeros, “Decentralized convergence to Nash equilibria in constrained deterministic mean field control,” IEEE Trans. on Automatic Control, vol. 61, no. 11, pp. 3315–3329, 2016.
• [17] S. Grammatico, “Dynamic control of agents playing aggregative games with coupling constraints,” IEEE Trans. on Automatic Control, vol. 62, no. 9, pp. 4537–4548, 2017.
• [18] P. Giselsson and S. Boyd, “Linear convergence and metric selection for Douglas–Rachford splitting and ADMM,” IEEE Transactions on Automatic Control, vol. 62, no. 2, pp. 532–544, 2017.
• [19] G. Belgioioso and S. Grammatico, “Semi-decentralized Nash equilibrium seeking in aggregative games with coupling constraints and non-differentiable cost functions,” IEEE Control Systems Letters, vol. 1, no. 2, pp. 400–405, 2017.
• [20] S. Grammatico, “Proximal dynamics in multi-agent network games,” IEEE Trans. on Control of Network Systems, https://doi.org/10.1109/TCNS.2017.2754358, 2018.