I Introduction
State convergence is the quintessential problem in multi-agent systems. In fact, multi-agent consensus and cooperation, distributed optimization and multi-player game theory all revolve around the convergence of the state variables to an equilibrium, typically unknown a priori. In distributed consensus problems, agents interact with their neighboring peers to collectively achieve global agreement on some value [1]. In distributed optimization, decision makers cooperate locally to agree on primal–dual variables that solve a global optimization problem [2]. Similarly, in multi-player games, selfish decision makers exchange local or semi-global information to achieve an equilibrium for their interdependent optimization problems [3]. Applications of multi-agent systems with guaranteed convergence are indeed vast, and include power systems [4, 5], demand-side management [6], network congestion control [7, 8], social networks [9, 10], and robotic and sensor networks [11, 12].
From a general mathematical perspective, the convergence problem is a fixed-point problem [13], or equivalently, a zero-finding problem [14]. For example, consensus in multi-agent systems is equivalent to finding a collective state in the kernel of the Laplacian matrix, i.e., in operator-theoretic terms, to finding a zero of the Laplacian, seen as a linear operator.
Fixed-point theory and monotone operator theory are then key to studying convergence to multi-agent equilibria [15]. For instance, Krasnoselskij–Mann fixed-point iterations have been adopted in aggregative game theory [16, 17], and monotone operator splitting methods in distributed convex optimization [18] and monotone game theory [19, 3, 20]. The main feature of the available results is that sufficient conditions on the problem data are typically proposed to ensure global convergence of fixed-point iterations applied to nonlinear mappings, e.g., compositions of proximal or projection operators and linear averaging operators.
Differently from the literature, in this paper, we are interested in necessary and sufficient conditions for convergence, hence we focus on the three most popular fixed-point iterations applied to linear operators, which essentially are linear time-varying systems with special structure. The motivation is twofold. First, there are still several classes of multi-agent linear systems where convergence is the primary challenge, e.g., in distributed linear time-varying consensus dynamics with unknown graph connectivity (see Section VI-A). Second, fixed-point iterations applied to linear operators can provide non-convergence certificates for multi-agent dynamics that arise from distributed convex optimization and monotone game theory (Section VI-B).
Our main contribution is to show that the Krasnoselskij–Mann fixed-point iterations, possibly time-varying, applied to linear operators converge if and only if the associated matrix has certain spectral properties (Section III). To motivate and achieve our main result, we adopt an operator-theoretic perspective and characterize some regularity properties of linear mappings via eigenvalue location and properties, and linear matrix inequalities (Section IV). In Section VII, we conclude the paper and indicate one future research direction.
Notation: $\mathbb{R}$, $\mathbb{R}_{\geq 0}$ and $\mathbb{C}$ denote the set of real, nonnegative real and complex numbers, respectively. $\mathcal{B}(c, r) := \{ z \in \mathbb{C} \mid |z - c| \leq r \}$ denotes the disk of radius $r \geq 0$ centered in $c \in \mathbb{C}$; see Fig. 1 for some graphical examples. $\mathcal{H}_P$ denotes a finite-dimensional Hilbert space with norm $\|\cdot\|_P$. $\mathbb{S}^n_{\succ 0}$ is the set of positive definite symmetric $n \times n$ matrices and, for $P \in \mathbb{S}^n_{\succ 0}$, $\|x\|_P := \sqrt{x^\top P x}$. $\mathrm{Id}$ denotes the identity operator. $R_\theta$ denotes the rotation operator of angle $\theta$. Given a mapping $\mathcal{A} : \mathbb{R}^n \to \mathbb{R}^n$, $\operatorname{fix}(\mathcal{A})$ denotes the set of its fixed points, and $\operatorname{zer}(\mathcal{A})$ the set of its zeros. Given a matrix $A \in \mathbb{R}^{n \times n}$, $\ker(A)$ denotes its kernel; $\Lambda(A)$ and $\rho(A)$ denote the spectrum and the spectral radius of $A$, respectively. $\mathbf{0}_n$ and $\mathbf{1}_n$ denote vectors with elements all equal to $0$ and $1$, respectively.
II Mathematical definitions
II-A Discrete-time linear systems
Throughout the paper, we consider the linear time-invariant system
$x(k+1) = A\,x(k)$, (1)
with $A \in \mathbb{R}^{n \times n}$, and the linear time-varying system
$x(k+1) = A_k\,x(k)$, (2)
with $A_k \in \mathbb{R}^{n \times n}$ for all $k \in \mathbb{N}$.
II-B System-theoretic definitions
We are interested in the following notion of global convergence, i.e., convergence of the state solution, independently of the initial condition, to some vector.
Definition 1 (Convergence)
The system in (2) is convergent if, for all $x(0) \in \mathbb{R}^n$, its solution converges to some $\bar{x} \in \mathbb{R}^n$, i.e., $\lim_{k \to \infty} x(k) = \bar{x}$.
Note that in Definition 1, the vector $\bar{x}$ can depend on the initial condition $x(0)$. In the linear time-invariant case (1), it is known that convergence holds if and only if the eigenvalues of the matrix $A$ are strictly inside the unit disk, with the possible exception of the eigenvalue $1$ which, if present, must be semisimple, as formalized next.
Definition 2 ((Semi) Simple eigenvalue)
An eigenvalue is semisimple if it has equal algebraic and geometric multiplicity. An eigenvalue is simple if its algebraic and geometric multiplicities are both equal to $1$.
Lemma 1
The following statements are equivalent:
(i) the system in (1) is convergent;
(ii) $\Lambda(A) \subseteq \mathcal{B}(0,1)$ and the only eigenvalue of $A$ on the unit circle, if any, is $1$, which is semisimple.
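The spectral condition of Lemma 1 is straightforward to verify numerically. A minimal sketch (the matrices are hypothetical examples, not taken from the paper), where semisimplicity of the eigenvalue $1$ is tested by comparing algebraic multiplicity with $\dim \ker(A - I)$:

```python
import numpy as np

def is_convergent(A, tol=1e-9):
    """Lemma 1: x(k+1) = A x(k) converges for every x(0) iff all
    eigenvalues lie strictly inside the unit disk, except possibly
    the eigenvalue 1, which must be semisimple."""
    n = A.shape[0]
    eig = np.linalg.eigvals(A)
    at_one = np.abs(eig - 1) < tol
    if not np.all((np.abs(eig) < 1 - tol) | at_one):
        return False
    alg_mult = int(np.sum(at_one))                        # algebraic multiplicity of 1
    if alg_mult == 0:
        return True
    geo_mult = n - np.linalg.matrix_rank(A - np.eye(n))   # dim ker(A - I)
    return alg_mult == geo_mult

A1 = np.array([[1.0, 0.0], [0.0, 0.5]])   # eigenvalue 1 semisimple: convergent
A2 = np.array([[1.0, 1.0], [0.0, 1.0]])   # Jordan block at 1: not convergent
```

Simulating $x(k+1) = A_1 x(k)$ from any initial condition indeed settles at a fixed point of $A_1$, while the Jordan-block dynamics grow linearly in the first coordinate.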
II-C Operator-theoretic definitions
With the aim to study the convergence of the dynamics in (1), (2), in this subsection we introduce some key notions from operator theory in Hilbert spaces.
Definition 3 (Lipschitz continuity)
A mapping $\mathcal{A} : \mathbb{R}^n \to \mathbb{R}^n$ is $\ell$-Lipschitz continuous in $\mathcal{H}_P$, with $\ell \geq 0$, if, for all $x, y \in \mathbb{R}^n$, $\| \mathcal{A}(x) - \mathcal{A}(y) \|_P \leq \ell \, \| x - y \|_P$.
Definition 4
In $\mathcal{H}_P$, an $\ell$-Lipschitz continuous mapping $\mathcal{A}$ is:
(i) $\ell$-Contractive ($\ell$-CON) if $\ell \in [0,1)$;
(ii) NonExpansive (NE) if $\ell = 1$;
(iii) $\alpha$-Averaged ($\alpha$-AVG), with $\alpha \in (0,1)$, if, for all $x, y \in \mathbb{R}^n$,
$\| \mathcal{A}(x) - \mathcal{A}(y) \|_P^2 \leq \| x - y \|_P^2 - \tfrac{1-\alpha}{\alpha} \| (\mathrm{Id} - \mathcal{A})(x) - (\mathrm{Id} - \mathcal{A})(y) \|_P^2$, (3)
or, equivalently, if there exist a nonexpansive mapping $\mathcal{B}$ and $\alpha \in (0,1)$ such that $\mathcal{A} = (1-\alpha)\,\mathrm{Id} + \alpha\,\mathcal{B}$;
(iv) $\rho$-strictly PseudoContractive ($\rho$-sPC), with $\rho \in [0,1)$, if, for all $x, y \in \mathbb{R}^n$,
$\| \mathcal{A}(x) - \mathcal{A}(y) \|_P^2 \leq \| x - y \|_P^2 + \rho \, \| (\mathrm{Id} - \mathcal{A})(x) - (\mathrm{Id} - \mathcal{A})(y) \|_P^2$. (4)
Definition 5
A mapping $\mathcal{A}$ is:
(i) Contractive (CON) if there exist $\ell \in [0,1)$ and a norm $\|\cdot\|_P$ such that it is $\ell$-CON in $\mathcal{H}_P$;
(ii) Averaged (AVG) if there exist $\alpha \in (0,1)$ and a norm $\|\cdot\|_P$ such that it is $\alpha$-AVG in $\mathcal{H}_P$;
(iii) strictly PseudoContractive (sPC) if there exist $\rho \in [0,1)$ and a norm $\|\cdot\|_P$ such that it is $\rho$-sPC in $\mathcal{H}_P$.
III Main results: Fixed-point iterations on linear mappings
In this section, we provide necessary and sufficient conditions for the convergence of some well-known fixed-point iterations applied to linear operators, i.e., mappings
$\mathcal{A}(x) = A\,x$, with $A \in \mathbb{R}^{n \times n}$. (5)
First, we consider the Banach–Picard iteration [14, (1.69)] on a generic mapping $\mathcal{A}$, i.e., for all $k \in \mathbb{N}$,
$x(k+1) = \mathcal{A}(x(k))$, (6)
whose convergence is guaranteed if $\mathcal{A}$ is averaged, see [14, Prop. 5.16]. The next statement shows that averagedness is also a necessary condition when the mapping is linear.
Proposition 1 (Banach–Picard iteration)
The following statements are equivalent:
(i) $\mathcal{A}$ in (5) is averaged;
(ii) the solution to the system
$x(k+1) = A\,x(k)$ (7)
converges to some $\bar{x} \in \operatorname{fix}(\mathcal{A})$.
If the mapping $\mathcal{A}$ is merely nonexpansive, then the sequence generated by the Banach–Picard iteration in (6) may fail to produce a fixed point of $\mathcal{A}$. For instance, this is the case for $\mathcal{A} = -\mathrm{Id}$. In these cases, a relaxed iteration can be used, e.g., the Krasnoselskij–Mann iteration [14, Equ. (5.15)]. Specifically, let us distinguish the case with time-invariant step sizes, known as the Krasnoselskij iteration [13, Chap. 3], and the case with time-varying, vanishing step sizes, known as the Mann iteration [13, Chap. 4]. The former is defined by
$x(k+1) = (1-\alpha)\,x(k) + \alpha\,\mathcal{A}(x(k))$, (8)
for all $k \in \mathbb{N}$, where $\alpha \in (0,1)$ is a constant step size.
The convergence of the discrete-time system in (8) to a fixed point of the mapping $\mathcal{A}$ is guaranteed, for any arbitrary $\alpha \in (0,1)$, if $\mathcal{A}$ is nonexpansive [14, Th. 5.15], or, if $\mathcal{A}$, defined from a compact, convex set to itself, is strictly pseudocontractive and $\alpha$ is sufficiently small [13, Theorem 3.5]. In the next statement, we show that if the mapping $\mathcal{A}$ is linear, and $\alpha$ is chosen small enough, then strict pseudocontractiveness is necessary and sufficient for convergence.
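The gap between nonexpansiveness and averagedness can be seen numerically with the map $\mathcal{A} = -\mathrm{Id}$ (a sketch with hypothetical numbers): the Banach–Picard iterates oscillate forever, while the Krasnoselskij relaxation converges to the unique fixed point $\mathbf{0}$:

```python
import numpy as np

A = -np.eye(2)                      # nonexpansive (an isometry), not averaged
x0 = np.array([1.0, 2.0])

# Banach-Picard (6): x(k+1) = A x(k) bounces between x0 and -x0 forever.
x = x0.copy()
for _ in range(101):                # odd number of steps -> x = -x0
    x = A @ x

# Krasnoselskij (8): x(k+1) = (1-a) x(k) + a A x(k) = (1 - 2a) x(k).
a = 0.25
y = x0.copy()
for _ in range(100):
    y = (1 - a) * y + a * (A @ y)   # contracts by factor 0.5 per step
```

After 100 relaxed steps, `y` is numerically zero, the unique element of $\operatorname{fix}(-\mathrm{Id})$.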
Theorem 1 (Krasnoselskij iteration)
Let $\rho \in [0,1)$ and $\alpha \in (0, 1-\rho)$. The following statements are equivalent:
(i) $\mathcal{A}$ in (5) is $\rho$-strictly pseudocontractive;
(ii) the solution to the system
$x(k+1) = (1-\alpha)\,x(k) + \alpha\,A\,x(k)$ (9)
converges to some $\bar{x} \in \operatorname{fix}(\mathcal{A})$.
In Theorem 1, the admissible step sizes for the Krasnoselskij iteration depend on the parameter $\rho$ that quantifies the strict pseudocontractiveness of the mapping $\mathcal{A}$. When the parameter $\rho$ is unknown, or hard to quantify, one can adopt time-varying step sizes, e.g., the Mann iteration:
$x(k+1) = (1-\alpha_k)\,x(k) + \alpha_k\,\mathcal{A}(x(k))$, (10)
for all $k \in \mathbb{N}$, where the step sizes $(\alpha_k)_{k \in \mathbb{N}}$ shall be chosen as follows.
Assumption 1 (Mann sequence)
The sequence $(\alpha_k)_{k \in \mathbb{N}}$ is such that $\alpha_k \in (0, \bar{\alpha}]$ for all $k \in \mathbb{N}$, for some $\bar{\alpha} \in (0,1)$, $\lim_{k \to \infty} \alpha_k = 0$ and $\sum_{k=0}^{\infty} \alpha_k = \infty$.
The convergence of (10) to a fixed point of the mapping $\mathcal{A}$ is guaranteed if $\mathcal{A}$, defined from a compact, convex set to itself, is strictly pseudocontractive [13, Theorem 3.5]. In the next statement, we show that if the mapping $\mathcal{A}$ is linear, then strict pseudocontractiveness is necessary and sufficient for convergence.
Theorem 2 (Mann iteration)
Let Assumption 1 hold. The following statements are equivalent:
(i) $\mathcal{A}$ in (5) is strictly pseudocontractive;
(ii) the solution to the system in (10), i.e., $x(k+1) = (1-\alpha_k)\,x(k) + \alpha_k\,A\,x(k)$, converges to some $\bar{x} \in \operatorname{fix}(\mathcal{A})$.
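A numerical illustration of the Mann iteration on a linear sPC map (a hypothetical example): the diagonal matrix with eigenvalues $1$ (semisimple) and $-5$ has spectral radius $5$, so the Banach–Picard iteration diverges, yet the Mann iteration with vanishing, non-summable steps converges to a fixed point:

```python
import numpy as np

A = np.diag([1.0, -5.0])   # sPC: eigenvalue 1 semisimple, the other has Re < 1
x = np.array([3.0, 1.0])

for k in range(5000):
    a_k = 0.9 / (k + 1)    # Mann sequence: vanishing, sum diverges
    x = (1 - a_k) * x + a_k * (A @ x)
# x approaches (3, 0), a fixed point of A (the span of the eigenvector of 1).
```

Note that no knowledge of the pseudocontractiveness parameter is needed: the vanishing step sizes eventually become small enough on their own.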
IV Operator-theoretic characterization of linear mappings
In this section, we characterize the operator-theoretic properties of linear mappings via necessary and sufficient linear matrix inequalities and conditions on the spectrum of the corresponding matrices. We exploit these technical results in Section V to prove convergence of the fixed-point iterations presented in Section III.
Lemma 2 (Lipschitz continuous linear mapping)
Let $P \in \mathbb{S}^n_{\succ 0}$ and $\ell \geq 0$. The following statements are equivalent:
(i) $\mathcal{A}$ in (5) is $\ell$-Lipschitz continuous in $\mathcal{H}_P$;
(ii) $A^\top P A \preceq \ell^2\,P$.
It directly follows from Definition 3.
Lemma 3 (Linear contractive/nonexpansive mapping)
The following statements are equivalent:
(i) $\mathcal{A}$ in (5) is nonexpansive;
(ii) there exists $P \in \mathbb{S}^n_{\succ 0}$ such that $A^\top P A \preceq P$;
(iii) $\Lambda(A) \subseteq \mathcal{B}(0,1)$ and the eigenvalues of $A$ on the boundary of the disk are semisimple.
The equivalences also hold with "nonexpansive" replaced by "contractive", "$\preceq$" by "$\prec$", and $\mathcal{B}(0,1)$ by its interior.

The equivalence between (i) and (ii) follows from Lemma 2. By the Lyapunov theorem, (ii) holds if and only if the discrete-time linear system $x(k+1) = A\,x(k)$ is (at least marginally) stable, i.e., if and only if $\Lambda(A) \subseteq \mathcal{B}(0,1)$ and the eigenvalues of $A$ on the boundary of the disk, $|\lambda| = 1$, are semisimple, which is (iii). The last statement follows by noticing that an $\ell$-contractive mapping, $\ell \in [0,1)$, is nonexpansive.
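The Lyapunov construction behind this lemma can be made concrete: for any matrix with spectral radius below $1$, solving the discrete Lyapunov equation yields a weight $P$ certifying contractiveness in $\|\cdot\|_P$, even when the map is expansive in the Euclidean norm. A sketch (hypothetical matrix) using SciPy:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# rho(A) = 0.5 < 1, yet ||A||_2 > 1: not contractive in the Euclidean
# norm, but contractive in a suitable weighted norm ||x||_P.
A = np.array([[0.5, 1.0],
              [0.0, 0.5]])

# Solve A^T P A - P = -I for P (positive definite by the Lyapunov theorem).
P = solve_discrete_lyapunov(A.T, np.eye(2))
# Certificate: A^T P A - P = -I < 0, i.e., ||A x||_P < ||x||_P for x != 0.
```

The Jordan-like structure of `A` makes its $2$-norm exceed $1$, so the Euclidean norm alone cannot certify the contraction; the Lyapunov weight repairs this.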
Lemma 4 (Linear averaged mapping)
Let $\alpha \in (0,1)$. The following statements are equivalent:
(i) $\mathcal{A}$ in (5) is $\alpha$-averaged;
(ii) there exists $P \in \mathbb{S}^n_{\succ 0}$ such that
$A^\top P A - P \preceq -\tfrac{1-\alpha}{\alpha}\,(I - A)^\top P\,(I - A)$; (12)
(iii) the linear mapping $x \mapsto \tfrac{1}{\alpha}\left(A - (1-\alpha)\,I\right) x$ is nonexpansive;
(iv) the spectrum of $A$ is such that
$\Lambda(A) \subseteq \mathcal{B}(1-\alpha, \alpha)$, with semisimple eigenvalues on the boundary of the disk. (13)
The equivalence (i) ⟺ (ii) follows directly by inequality (3) in Definition 4. By [14, Prop. 4.35], $\mathcal{A}$ is $\alpha$-AVG if and only if the linear mapping $\tfrac{1}{\alpha}\left(\mathcal{A} - (1-\alpha)\,\mathrm{Id}\right)$ is NE, which proves (i) ⟺ (iii). To conclude, we show that (iii) ⟺ (iv). By Lemma 3, the linear mapping $x \mapsto \tfrac{1}{\alpha}\left(A - (1-\alpha)\,I\right) x$ is NE if and only if
$\Lambda\!\left(\tfrac{1}{\alpha}\left(A - (1-\alpha)\,I\right)\right) \subseteq \mathcal{B}(0,1)$, with semisimple eigenvalues on the boundary, (14)
if and only if
$\Lambda(A) \subseteq \mathcal{B}(1-\alpha, \alpha)$, with semisimple eigenvalues on the boundary, (15)
where the equivalence (14) ⟺ (15) holds because $\Lambda\!\left(\tfrac{1}{\alpha}\left(A - (1-\alpha)\,I\right)\right) = \tfrac{1}{\alpha}\left(\Lambda(A) - (1-\alpha)\right)$, and because the linear combination with the identity matrix does not alter the geometric multiplicity of the eigenvalues.
Lemma 5 (Linear strict pseudocontractive mapping)
Let $\rho \in [0,1)$. The following statements are equivalent:
(i) $\mathcal{A}$ in (5) is $\rho$-strictly pseudocontractive;
(ii) there exists $P \in \mathbb{S}^n_{\succ 0}$ such that
$A^\top P A - P \preceq \rho\,(I - A)^\top P\,(I - A)$; (16)
(iii) the linear mapping $x \mapsto \left(\rho\,I + (1-\rho)\,A\right) x$ is nonexpansive;
(iv) the spectrum of $A$ is such that
$\Lambda(A) \subseteq \mathcal{B}\!\left(-\tfrac{\rho}{1-\rho}, \tfrac{1}{1-\rho}\right)$, with semisimple eigenvalues on the boundary of the disk; (17)
(v) the mapping $(1-\alpha)\,\mathrm{Id} + \alpha\,\mathcal{A}$ is averaged, with $\alpha \in (0, 1-\rho)$.
The equivalence (i) ⟺ (ii) follows directly by inequality (4) in Definition 4. To prove that (ii) ⟺ (iii), we note that the LMI in (16) can be recast as
$\left(\rho\,I + (1-\rho)\,A\right)^\top P \left(\rho\,I + (1-\rho)\,A\right) \preceq P$, (18)
which, by Lemma 3, holds true if and only if the mapping $x \mapsto \left(\rho\,I + (1-\rho)\,A\right) x$ is NE.
(iii) ⟺ (iv): By Lemma 3, $\rho\,I + (1-\rho)\,A$ is NE if and only if
$\Lambda\!\left(\rho\,I + (1-\rho)\,A\right) \subseteq \mathcal{B}(0,1)$, with semisimple eigenvalues on the boundary, (19)
if and only if
$\Lambda(A) \subseteq \mathcal{B}\!\left(-\tfrac{\rho}{1-\rho}, \tfrac{1}{1-\rho}\right)$, with semisimple eigenvalues on the boundary, (20)
where the equivalence (19) ⟺ (20) holds because $\Lambda\!\left(\rho\,I + (1-\rho)\,A\right) = \rho + (1-\rho)\,\Lambda(A)$, and because the linear combination with the identity matrix does not alter the geometric multiplicity of the eigenvalues. (iii) ⟺ (v): By Definition 4 and [14, Prop. 4.35], $\rho\,\mathrm{Id} + (1-\rho)\,\mathcal{A}$ is NE if and only if $(1-\alpha)\,\mathrm{Id} + \alpha\,\mathcal{A}$ is AVG, for all $\alpha \in (0, 1-\rho)$, since $(1-\alpha)\,\mathrm{Id} + \alpha\,\mathcal{A} = \left(1 - \tfrac{\alpha}{1-\rho}\right)\mathrm{Id} + \tfrac{\alpha}{1-\rho}\left(\rho\,\mathrm{Id} + (1-\rho)\,\mathcal{A}\right)$, with $\tfrac{\alpha}{1-\rho} \in (0,1)$, which concludes the proof.
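Combining Lemma 5 with Definition 5 (existence of some $\rho$), a linear map is sPC precisely when every eigenvalue has real part strictly less than $1$, except possibly a semisimple eigenvalue at $1$; this is the trichotomy of failure cases used in the proof of Theorem 2. A numerical sketch (hypothetical matrices):

```python
import numpy as np

def is_spc(A, tol=1e-9):
    """Existence test: x -> A x is strictly pseudocontractive (for some
    rho in [0,1) and some weighted norm) iff every eigenvalue has real
    part < 1, except possibly the eigenvalue 1, which is semisimple."""
    n = A.shape[0]
    eig = np.linalg.eigvals(A)
    at_one = np.abs(eig - 1) < tol
    if not np.all((eig.real < 1 - tol) | at_one):
        return False
    alg = int(np.sum(at_one))
    if alg == 0:
        return True
    return alg == n - np.linalg.matrix_rank(A - np.eye(n))

cases = [
    (np.diag([1.0, -5.0]), True),                  # semisimple 1, Re < 1
    (np.array([[1.0, 1.0], [0.0, 1.0]]), False),   # 1 not semisimple
    (np.array([[1.0, -1.0], [1.0, 1.0]]), False),  # eigenvalues 1 +/- 1j
]
```

The three `False`/`True` labels match the three non-convergence cases 1)–3) in the proof of Theorem 2.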
V Proofs of the main results
Proof of Proposition 1 (Banach–Picard iteration)
We recall that, by Lemma 4, $\mathcal{A}$ is AVG if and only if there exists $\alpha \in (0,1)$ such that $\Lambda(A) \subseteq \mathcal{B}(1-\alpha, \alpha)$ and every eigenvalue of $A$ on the boundary of the disk is semisimple, and we notice that $\mathcal{B}(1-\alpha, \alpha) \subseteq \mathcal{B}(0,1)$ for all $\alpha \in (0,1)$, with $1$ being the only point of $\mathcal{B}(1-\alpha, \alpha)$ on the unit circle. Hence $\mathcal{A}$ is averaged if and only if the eigenvalues of $A$ are strictly contained in the unit disk, except for the eigenvalue in $1$ which, if present, is semisimple. The latter is a necessary and sufficient condition for the convergence of (7), by Lemma 1.
Proof of Theorem 1 (Krasnoselskij iteration)
The system in (9) is the Banach–Picard iteration (7) applied to the linear mapping $(1-\alpha)\,\mathrm{Id} + \alpha\,\mathcal{A}$, whose fixed points coincide with those of $\mathcal{A}$. By Proposition 1, its solution converges to some $\bar{x} \in \operatorname{fix}(\mathcal{A})$ if and only if $(1-\alpha)\,\mathrm{Id} + \alpha\,\mathcal{A}$ is averaged which, by Lemma 5 with $\alpha \in (0, 1-\rho)$, holds if and only if $\mathcal{A}$ is $\rho$-strictly pseudocontractive.
Proof of Theorem 2 (Mann iteration)
Proof that (i) ⟹ (ii): Define the sequence $\lambda_k := \alpha_k / \gamma$, for some $\gamma \in (0,1)$ to be chosen. Thus, $(1-\alpha_k)\,\mathrm{Id} + \alpha_k\,\mathcal{A} = (1-\lambda_k)\,\mathrm{Id} + \lambda_k\,\mathcal{B}$, with $\mathcal{B} := (1-\gamma)\,\mathrm{Id} + \gamma\,\mathcal{A}$. Since $\mathcal{A}$ is $\rho$-sPC, we can choose $\gamma$ small enough such that $\mathcal{B}$ is NE; specifically, by Lemma 5, we shall choose $\gamma \in (0, 1-\rho)$. Note that $\operatorname{fix}(\mathcal{B}) = \operatorname{fix}(\mathcal{A})$. Since $\lim_{k\to\infty} \alpha_k = 0$, we have that $\lim_{k\to\infty} \lambda_k = 0$, hence there exists $\bar{k} \in \mathbb{N}$ such that $\lambda_k \in (0,1)$ for all $k \geq \bar{k}$. Moreover, since $\alpha_k$ is bounded for all $k \in \mathbb{N}$, we have that the solution $x(\bar{k})$ is finite. Therefore, we can define $\tilde{x}(k) := x(k + \bar{k})$ for all $k \in \mathbb{N}$, and $\tilde{\lambda}_k := \lambda_{k + \bar{k}}$ for all $k \in \mathbb{N}$. The proof then follows by applying [14, Th. 5.14 (iii)] to the Mann iteration $\tilde{x}(k+1) = (1-\tilde{\lambda}_k)\,\tilde{x}(k) + \tilde{\lambda}_k\,\mathcal{B}(\tilde{x}(k))$.
Proof that (ii) ⟹ (i): For the sake of contradiction, suppose that $\mathcal{A}$ is not sPC, i.e., at least one of the following facts must hold: 1) $A$ has an eigenvalue in $1$ that is not semisimple; 2) $A$ has a real eigenvalue greater than $1$; 3) $A$ has a pair of complex eigenvalues $a \pm i\,b$, with $a \geq 1$ and $b \neq 0$. We show next that each of these three facts implies non-convergence of (10). Without loss of generality (i.e., up to a linear transformation), we can assume that $A$ is in Jordan normal form.
1) $A$ has an eigenvalue in $1$ that is not semisimple. Due to (the bottom part of) the associated Jordan block, the vector dynamics in (10) contain the two-dimensional linear time-varying dynamics
$x_1(k+1) = x_1(k) + \alpha_k\,x_2(k)$, $\quad x_2(k+1) = x_2(k)$.
For $x_2(0) \neq 0$, we have that the solution is such that $x_2(k) = x_2(0)$ and $x_1(k) = x_1(0) + x_2(0)\sum_{j=0}^{k-1}\alpha_j$, which implies that $\lim_{k\to\infty}|x_1(k)| = \infty$, since $\sum_k \alpha_k = \infty$ by Assumption 1. Thus, $x_1$ diverges and we have a contradiction.
2) $A$ has a real eigenvalue $a > 1$. Again due to (the bottom part of) the associated Jordan block, the vector dynamics in (10) must contain the scalar dynamics $x(k+1) = \left(1 + (a-1)\,\alpha_k\right) x(k)$. The solution then reads as $x(k) = \prod_{j=0}^{k-1}\left(1 + (a-1)\,\alpha_j\right) x(0)$. Now, since $a > 1$, it holds that $\sum_k (a-1)\,\alpha_k = \infty$, by Assumption 1. Therefore, $\prod_{j=0}^{k-1}\left(1 + (a-1)\,\alpha_j\right)$ and hence $|x(k)|$, for $x(0) \neq 0$, diverge, and we reach a contradiction.
3) $A$ has a pair of complex eigenvalues $a \pm i\,b$, with $a \geq 1$ and $b \neq 0$. Due to the structure of the associated Jordan block, the vector dynamics in (10) contain the two-dimensional dynamics
$x(k+1) = \begin{bmatrix} 1 + (a-1)\,\alpha_k & -b\,\alpha_k \\ b\,\alpha_k & 1 + (a-1)\,\alpha_k \end{bmatrix} x(k)$.
Now, we define $r_k := \sqrt{\left(1 + (a-1)\,\alpha_k\right)^2 + b^2\,\alpha_k^2}$, and the angle $\theta_k$ such that $\cos(\theta_k) = \left(1 + (a-1)\,\alpha_k\right)/r_k$ and $\sin(\theta_k) = b\,\alpha_k/r_k$, i.e., $\theta_k = \arctan\!\left(\tfrac{b\,\alpha_k}{1 + (a-1)\,\alpha_k}\right)$. Then, we have that $x(k+1) = r_k\,R_{\theta_k}\,x(k)$, hence, the solution reads as
$x(k) = \left(\textstyle\prod_{j=0}^{k-1} r_j\right) R_{\sum_{j=0}^{k-1}\theta_j}\,x(0)$.
Since $r_k \geq 1$ for all $k \in \mathbb{N}$, if the product $\prod_j r_j$ diverges, then $\|x(k)\|$, for $x(0) \neq 0$, diverges as well. Thus, let us assume that the product converges. By the limit comparison test, the series $\sum_k \theta_k$ converges (diverges) if and only if the series $\sum_k \alpha_k$ converges (diverges). The latter diverges, since $\sum_k \alpha_k = \infty$ by Assumption 1. It follows that $\sum_k \theta_k$ diverges, hence $x(k)$ keeps rotating indefinitely, which is a contradiction.
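Case 3 can be reproduced numerically (a hypothetical instance with eigenvalues $1 \pm i$): the iterate norm stays bounded because $\sum_k \alpha_k^2 < \infty$, but the accumulated rotation angle grows like $\sum_k \alpha_k$, so the iterates never settle:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  1.0]])              # eigenvalues 1 +/- 1j
x = np.array([1.0, 0.0])
snapshot = None

for k in range(20000):
    a_k = 1.0 / (k + 2)                  # Mann-type vanishing steps
    # (1 - a_k) I + a_k A = [[1, -a_k], [a_k, 1]]: a scaled rotation
    x = (1 - a_k) * x + a_k * (A @ x)
    if k == 9999:
        snapshot = x.copy()              # iterate at half time, for comparison
# ||x(k)|| is bounded, yet x(20000) is far from x(10000): no convergence.
```

Between iterations 10000 and 20000 the accumulated angle is roughly $\ln 2 \approx 0.69$ rad, so the two snapshots remain well separated on the (slowly inflating) circle.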
VI Application to multi-agent linear systems
VI-A Consensus via time-varying Laplacian dynamics
We consider a connected graph of $N$ nodes, associated with agents seeking consensus, with Laplacian matrix $L \in \mathbb{R}^{N \times N}$. To solve the consensus problem, we study the following discrete-time linear time-varying dynamics:
$x(k+1) = (1-\alpha_k)\,x(k) + \alpha_k\,A\,x(k)$, (21a)
$A := I - L$, (21b)
where $(\alpha_k)_{k\in\mathbb{N}}$ satisfies Assumption 1 and, for simplicity, the state of each agent is a scalar variable, $x_i(k) \in \mathbb{R}$.
Since the dynamics in (21) have the structure of a Mann iteration, in view of Theorem 2, we have the following result.
Corollary 1
Let Assumption 1 hold. Then, for all $x(0) \in \mathbb{R}^N$, the solution to the system in (21) converges to consensus, i.e., $\lim_{k\to\infty} x(k) = \bar{c}\,\mathbf{1}_N$, for some $\bar{c} \in \mathbb{R}$.

Since the graph is connected, $L$ has one (simple) eigenvalue at $0$, and $N-1$ eigenvalues with strictly-positive real part. Therefore, the matrix $A = I - L$ in (21b) has one simple eigenvalue in $1$ and $N-1$ eigenvalues with real part strictly less than $1$. By Lemma 5, $\mathcal{A} : x \mapsto A\,x$ is sPC and, by Theorem 2, $x(k)$ globally converges to some $\bar{x} \in \operatorname{fix}(\mathcal{A}) = \ker(L)$, i.e., $L\,\bar{x} = \mathbf{0}_N$. Since $L$ is a Laplacian matrix, $L\,\bar{x} = \mathbf{0}_N$ implies consensus, i.e., $\bar{x} = \bar{c}\,\mathbf{1}_N$, for some $\bar{c} \in \mathbb{R}$.
We emphasize that, via (21), consensus is reached without assuming that the agents know the algebraic connectivity of the graph, i.e., the strictly-positive Fiedler eigenvalue of $L$. We have only assumed that the agents agree on a sequence of vanishing, bounded step sizes, $(\alpha_k)_{k\in\mathbb{N}}$. However, we envision that agent-dependent step sizes can be used as well, e.g., via matricial Mann iterations, see [13, §4.1].
Let us simulate the time-varying consensus dynamics in (21) for a graph with $N$ nodes, with given adjacency matrix and hence Laplacian matrix $L$. Since we do not assume that the agents know the connectivity of the graph, we simulate with step sizes that are initially larger than the maximum constant-step value for which convergence would hold. In Fig. 2, we compare the norm of the disagreement vectors obtained with two different Mann sequences. We observe that convergence with small tolerances is faster in the latter case with larger step sizes.
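A minimal reproduction of this type of experiment (a hypothetical graph: an undirected path on 4 nodes, not the graph simulated in the paper):

```python
import numpy as np

# Undirected path graph on 4 nodes: L = D - Adj (hypothetical example).
Adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
L = np.diag(Adj.sum(axis=1)) - Adj

x = np.array([4.0, -1.0, 2.0, 7.0])   # initial states, average 3
for k in range(20000):
    a_k = 1.0 / (k + 1)               # Mann sequence, no connectivity knowledge
    x = x - a_k * (L @ x)             # (21): x+ = (1-a_k) x + a_k (I - L) x
# For a symmetric L the average is invariant, so consensus is at 3.
```

Note that the first step size $\alpha_0 = 1$ exceeds the largest admissible constant step (the spectral radius of $L$ here is above $2$), yet the vanishing sequence still drives the states to consensus.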
VI-B Two-player zero-sum linear-quadratic games: Non-convergence of projected pseudo-gradient dynamics
We consider two-player zero-sum games with linear-quadratic structure, i.e., we consider $2$ agents, with cost functions $f_1(x_1, x_2)$ and $f_2(x_1, x_2) = -f_1(x_1, x_2)$, respectively, with $f_1$ bilinear, i.e., $f_1(x_1, x_2) = x_1^\top C\,x_2$ for some square matrix $C$. In particular, we study discrete-time dynamics for solving the Nash equilibrium problem, that is, the problem of finding a pair $(x_1^*, x_2^*)$ such that:
$x_1^* \in \operatorname{argmin}_{x_1} f_1(x_1, x_2^*)$ and $x_2^* \in \operatorname{argmin}_{x_2} f_2(x_1^*, x_2)$.
A classic solution approach is the pseudo-gradient method, namely the discrete-time dynamics
$x_1(k+1) = x_1(k) - \alpha_k\,\nabla_{x_1} f_1(x_1(k), x_2(k))$, (22a)
$x_2(k+1) = x_2(k) - \alpha_k\,\nabla_{x_2} f_2(x_1(k), x_2(k))$, (22b)
where $F : x \mapsto \operatorname{col}\!\left(\nabla_{x_1} f_1(x_1, x_2),\, \nabla_{x_2} f_2(x_1, x_2)\right)$ is the so-called pseudo-gradient mapping of the game, which in our case is linear, i.e.,
$F(x) = \begin{bmatrix} 0 & C \\ -C^\top & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =: M\,x$,
and $(\alpha_k)_{k\in\mathbb{N}}$ is a sequence of vanishing step sizes, e.g., a Mann sequence. In our case, $x^* = (x_1^*, x_2^*)$ is a Nash equilibrium if and only if $F(x^*) = \mathbf{0}$ [19, Th. 1].
By Theorem 2, convergence of the system in (22), which is the Mann iteration applied to $\mathrm{Id} - F$, holds if and only if $\mathrm{Id} - F$ is strictly pseudocontractive. In the next statement, we show that this is not the case for $F$ in (22).
Corollary 2
Let $(\alpha_k)_{k\in\mathbb{N}}$ be a Mann sequence (Assumption 1) and $C \neq 0$. The system in (22) does not globally converge.
It follows by Lemma 5 that the mapping $\mathrm{Id} - F$ is strictly pseudocontractive if and only if the eigenvalues of $M$ either have strictly-positive real part, or are semisimple and equal to $0$. Since $\operatorname{tr}(M) = 0$ and $M \neq 0$, we have that the eigenvalues of $M$ are either with both positive and negative real part, or on the imaginary axis and not all equal to $0$, or all equal to $0$ but not semisimple. Therefore, $\mathrm{Id} - F$ is not strictly pseudocontractive and the proof follows by Theorem 2.
Let us numerically simulate the discrete-time system in (22), with vanishing step sizes $(\alpha_k)_{k\in\mathbb{N}}$. Figure 3 shows persistent oscillations, due to the lack of strict pseudocontractiveness of $\mathrm{Id} - F$; in fact, the eigenvalues of $M$ lie on the imaginary axis. The example provides a non-convergence result: pseudo-gradient methods do not ensure global convergence in convex games with (non-strictly) monotone pseudo-gradient mapping, not even with vanishing step sizes and linear-quadratic structure.
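The oscillation is easy to reproduce in a sketch (hypothetical scalar game with $C = 1$, i.e., $f_1(x_1, x_2) = x_1 x_2$): each pseudo-gradient step is a scaled rotation, so the distance to the unique Nash equilibrium $(0,0)$ never decreases:

```python
import numpy as np

# Scalar zero-sum game f1 = x1 * x2 = -f2, so F(x) = M x with M skew-symmetric.
M = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
x = np.array([1.0, 0.0])                 # unique Nash equilibrium: (0, 0)

norms = []
for k in range(5000):
    a_k = 1.0 / (k + 1)                  # vanishing (Mann) step sizes
    x = x - a_k * (M @ x)                # I - a_k M is a scaled rotation
    norms.append(np.linalg.norm(x))
# ||x(k)|| is nondecreasing: the iterates circle the equilibrium forever.
```

Each step multiplies the norm by $\sqrt{1+\alpha_k^2} \geq 1$; since $\sum_k \alpha_k^2 < \infty$ the norm stays bounded, while the accumulated rotation $\sum_k \arctan(\alpha_k)$ diverges.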
VII Conclusion and outlook
Convergence in discrete-time linear systems can be equivalently characterized via operator-theoretic notions. Remarkably, the time-varying Mann iteration applied to a linear mapping converges if and only if the corresponding linear operator is strictly pseudocontractive. This result implies that Laplacian-driven linear time-varying consensus dynamics with Mann step sizes do converge. It also implies that projected pseudo-gradient dynamics for Nash equilibrium seeking in monotone games do not necessarily converge.
Future research will focus on studying the convergence of other, more general, linear fixed-point iterations and of discrete-time linear systems with uncertainty, e.g., polytopic uncertainty.
References
 [1] R. Olfati-Saber, J. A. Fax, and R. M. Murray, “Consensus and cooperation in networked multi-agent systems,” Proc. of the IEEE, vol. 95, no. 1, pp. 215–233, 2007.
 [2] A. Nedić, A. Ozdaglar, and P. A. Parrilo, “Constrained consensus and optimization in multi-agent networks,” IEEE Trans. on Automatic Control, vol. 55, no. 4, pp. 922–938, 2010.
 [3] P. Yi and L. Pavel, “A distributed primaldual algorithm for computation of generalized Nash equilibria via operator splitting methods,” Proc. of the IEEE Conf. on Decision and Control, pp. 3841–3846, 2017.
 [4] F. Dörfler, J. Simpson-Porco, and F. Bullo, “Breaking the hierarchy: Distributed control and economic optimality in microgrids,” IEEE Trans. on Control of Network Systems, vol. 3, no. 3, pp. 241–253, 2016.
 [5] F. Dörfler and S. Grammatico, “Gatherandbroadcast frequency control in power systems,” Automatica, vol. 79, pp. 296–305, 2017.
 [6] A.-H. Mohsenian-Rad, V. Wong, J. Jatskevich, R. Schober, and A. Leon-Garcia, “Autonomous demand-side management based on game-theoretic energy consumption scheduling for the future smart grid,” IEEE Trans. on Smart Grid, vol. 1, no. 3, pp. 320–331, 2010.
 [7] R. Jain and J. Walrand, “An efficient Nash-implementation mechanism for network resource allocation,” Automatica, vol. 46, pp. 1276–1283, 2010.
 [8] J. Barrera and A. Garcia, “Dynamic incentives for congestion control,” IEEE Trans. on Automatic Control, vol. 60, no. 2, pp. 299–310, 2015.
 [9] J. Ghaderi and R. Srikant, “Opinion dynamics in social networks with stubborn agents: Equilibrium and convergence rate,” Automatica, vol. 50, pp. 3209–3215, 2014.
 [10] S. R. Etesami and T. Başar, “Game-theoretic analysis of the Hegselmann–Krause model for opinion dynamics in finite dimensions,” IEEE Trans. on Automatic Control, vol. 60, no. 7, pp. 1886–1897, 2015.
 [11] S. Martínez, F. Bullo, J. Cortés, and E. Frazzoli, “On synchronous robotic networks – Part I: Models, tasks, and complexity,” IEEE Trans. on Automatic Control, vol. 52, pp. 2199–2213, 2007.
 [12] M. Stanković, K. Johansson, and D. Stipanović, “Distributed seeking of Nash equilibria with applications to mobile sensor networks,” IEEE Trans. on Automatic Control, vol. 57, no. 4, pp. 904–919, 2012.
 [13] V. Berinde, Iterative Approximation of Fixed Points. Springer, 2007.
 [14] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.
 [15] E. K. Ryu and S. Boyd, “A primer on monotone operator methods,” Appl. Comput. Math., vol. 15, no. 1, pp. 3–43, 2016.
 [16] S. Grammatico, F. Parise, M. Colombino, and J. Lygeros, “Decentralized convergence to Nash equilibria in constrained deterministic mean field control,” IEEE Trans. on Automatic Control, vol. 61, no. 11, pp. 3315–3329, 2016.
 [17] S. Grammatico, “Dynamic control of agents playing aggregative games with coupling constraints,” IEEE Trans. on Automatic Control, vol. 62, no. 9, pp. 4537–4548, 2017.
 [18] P. Giselsson and S. Boyd, “Linear convergence and metric selection for Douglas–Rachford splitting and ADMM,” IEEE Transactions on Automatic Control, vol. 62, no. 2, pp. 532–544, 2017.
 [19] G. Belgioioso and S. Grammatico, “Semidecentralized Nash equilibrium seeking in aggregative games with coupling constraints and nondifferentiable cost functions,” IEEE Control Systems Letters, vol. 1, no. 2, pp. 400–405, 2017.
 [20] S. Grammatico, “Proximal dynamics in multiagent network games,” IEEE Trans. on Control of Network Systems, https://doi.org/10.1109/TCNS.2017.2754358, 2018.