I Introduction
This paper considers the optimization problem

(1) $\min_{x\in\mathbb{R}^p}\ \bar{f}(x) := \frac{1}{n}\sum_{i=1}^{n} f_i(x)$
over an $n$-agent network. Each function $f_i$ is known only by the corresponding agent $i$ and is assumed to be convex and differentiable. These agents form a connected network to solve the problem (1) cooperatively without knowing other agents' functions. The whole system is decentralized: each agent keeps an estimate of the global variable and can exchange this estimate only with its accessible neighbors at every iteration. We introduce

(2) $\min_{x_1,\ldots,x_n\in\mathbb{R}^p}\ \frac{1}{n}\sum_{i=1}^{n} f_i(x_i), \quad \text{s.t. } x_i = x_j,\ \forall (i,j)\in\mathcal{E},$
where each $x_i\in\mathbb{R}^p$ is a local estimate of the global variable and its $k$th iterate is $x_i^k$. There is a symmetric mixing matrix $W = [w_{ij}]\in\mathbb{R}^{n\times n}$ encoding the communication between the agents. The minimum condition for $W$ is that it has one eigenvalue $1$ and all other eigenvalues are smaller than $1$. In addition, the all-one vector $\mathbf{1}$ is an eigenvector of $W$ corresponding to the eigenvalue $1$ (this is satisfied when the sum of each row of $W$ is $1$).

Early decentralized methods based on decentralized gradient descent [1, 2, 3, 4, 5] have sublinear convergence for strongly convex objective functions because of the diminishing stepsize that is needed to obtain a consensual and optimal solution. This sublinear rate is much slower than that of centralized methods. The first decentralized algorithm with linear convergence [6] is based on the Alternating Direction Method of Multipliers (ADMM) [7, 8]. Note that this type of algorithm has an $O(1/k)$ rate for general convex functions [9, 10, 11]. After that, many linearly convergent algorithms were proposed; some examples are EXTRA [12], NIDS [13], DIGing [14, 15], ESOM [16], gradient tracking methods [17, 18, 19, 15, 14, 20, 21], exact diffusion [22, 23], and dual optimal methods [24, 25]. There are also works on composite functions, where each private function is the sum of a smooth function and a nonsmooth one [26, 13, 27, 28]. Another topic of interest is decentralized optimization over directed and dynamic graphs [29, 30, 31, 32, 14, 33, 34]. Interested readers can refer to [35] and the references therein for more algorithms.
This paper focuses on two linearly convergent algorithms, EXTRA and NIDS, and provides better theoretical convergence results for them. The EXact firsT-ordeR Algorithm (EXTRA) was proposed in [12], and its iteration is described in (5). There are conditions on the stepsize for its convergence. For the general convex case, where each $f_i$ is convex and smooth (i.e., has a Lipschitz continuous gradient), the stepsize condition in [12] involves the smallest eigenvalue of the mixing matrix; therefore, there is an implicit lower bound on the smallest eigenvalue of $W$. Later the stepsize condition was relaxed in [36], with a correspondingly weaker requirement on the smallest eigenvalue of $W$. In addition, this relaxed condition for the stepsize is shown to be optimal, i.e., EXTRA may diverge if the condition is not satisfied. Though we can always manipulate $W$ to change its smallest eigenvalue, the convergence speed of EXTRA depends on the mixing matrix. In the numerical experiments, we will see that it is beneficial to choose small eigenvalues for EXTRA in certain scenarios.
The linear convergence of EXTRA requires additional conditions on the functions. There are mainly three types of conditions used in the literature: the strong convexity of $\bar{f}$ (and some weaker variants) [12], the strong convexity of each $f_i$ (and some weaker variants) [36], and the strong convexity of one function $f_i$ [23]. Note that the condition on $\bar{f}$ is much weaker than the other two; there are cases where $\bar{f}$ is strongly convex but none of the $f_i$'s is, e.g., $f_i(x) = (\langle \mathbf{e}_i, x\rangle)^2$, where $\mathbf{e}_i$ is the vector whose $i$th component is $1$ and all other components are $0$. If $\bar{f}$ is (restricted) strongly convex, the linear convergence of EXTRA is shown in [12] under an upper bound for the stepsize that is very conservative, and better performance with a larger stepsize was shown numerically in [12] without proof. If each $f_i$ is strongly convex, linear convergence is shown under less restrictive stepsize conditions in [27] and [36], respectively. One contribution of this paper is to show the linear convergence of EXTRA under the (restricted) strong convexity of $\bar{f}$ and the relaxed stepsize condition of [36].
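The example above can be verified numerically; this is our own sketch (with $n = 4$): each $f_i(x) = (\langle \mathbf{e}_i, x\rangle)^2$ has a rank-one Hessian, so no single $f_i$ is strongly convex, while the average has a positive definite Hessian.

```python
import numpy as np

n = 4
# f_i(x) = (e_i^T x)^2 has constant Hessian 2 * e_i e_i^T (rank one):
# its smallest eigenvalue is 0, so f_i is NOT strongly convex.
hessians = []
for i in range(n):
    H = np.zeros((n, n))
    H[i, i] = 2.0
    hessians.append(H)
    assert np.isclose(np.linalg.eigvalsh(H)[0], 0.0)

# The average function has Hessian (2/n) * I, which is positive definite,
# so the average IS strongly convex with parameter 2/n.
H_bar = sum(hessians) / n
assert np.allclose(H_bar, (2.0 / n) * np.eye(n))
assert np.linalg.eigvalsh(H_bar)[0] > 0
```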
The algorithm NIDS (Network InDependent Stepsize) was proposed in [13]. Though it differs from EXTRA only slightly, NIDS can choose a stepsize that does not depend on the mixing matrices. The convergence of NIDS is shown when $\alpha < 2/L$, where $L$ is the Lipschitz constant of the gradients. The result for linear convergence requires the strong convexity of each $f_i$. Another contribution of this paper is the linear convergence of NIDS under the (restricted) strong convexity of $\bar{f}$ and relaxed conditions on the mixing matrices.
In sum, we provide new and stronger linear convergence results for both EXTRA and NIDS. More specifically,

We show the linear convergence of EXTRA with the strong convexity of $\bar{f}$ and a relaxed condition on the stepsize. The upper bound of the stepsize can be as large as the one shown to be optimal in [36] for general convex problems;

We show the linear convergence of NIDS with the same conditions on $\bar{f}$ and the mixing matrices as EXTRA. But the large network-independent stepsize is kept.
I-A Notation
Since agent $i$ has its own estimate $x_i$ of the global variable, we put them together and define

(3) $\mathbf{x} = [x_1, x_2, \ldots, x_n]^\top \in \mathbb{R}^{n\times p}.$

The gradient of $f(\mathbf{x}) := \sum_{i=1}^{n} f_i(x_i)$ is defined as

(4) $\nabla f(\mathbf{x}) = [\nabla f_1(x_1), \nabla f_2(x_2), \ldots, \nabla f_n(x_n)]^\top \in \mathbb{R}^{n\times p}.$

We say that $\mathbf{x}$ is consensual if all of its rows are identical, i.e., $\mathbf{x} = \mathbf{1}x^\top$, where $x\in\mathbb{R}^p$ and $\mathbf{1}\in\mathbb{R}^n$ is the all-one vector.
In this paper, we use $\|\cdot\|_{\mathrm{F}}$ and $\langle\cdot,\cdot\rangle$ to denote the Frobenius norm and the corresponding inner product, respectively. For a given matrix $A$ and any positive (semi)definite matrix $G$, which is denoted as $G \succ 0$ ($G \succeq 0$ for positive semidefinite), we define $\|A\|_G^2 = \langle A, GA\rangle$. The largest and the smallest eigenvalues of a matrix $M$ are denoted as $\lambda_{\max}(M)$ and $\lambda_{\min}(M)$. For a symmetric positive semidefinite matrix $M$, we let $\widetilde{\lambda}_{\min}(M)$ be the smallest nonzero eigenvalue. $M^\dagger$ is the pseudo inverse of $M$. For a matrix $M$, we say $\mathbf{x}$ is in $\operatorname{Null}(M)$ if $M\mathbf{x} = 0$, and $\mathbf{x}$ is in $\operatorname{Range}(M)$ if there exists $\mathbf{y}$ such that $\mathbf{x} = M\mathbf{y}$. For simplicity, we may drop the subscripts and write $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$ in the proofs.
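The weighted norm and the pseudoinverse notation can be checked numerically; the matrices below are arbitrary choices of ours, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A positive semidefinite G (rank-deficient on purpose) and a matrix A.
B = rng.standard_normal((4, 2))
G = B @ B.T                      # G is PSD with rank 2
A = rng.standard_normal((4, 3))

# ||A||_G^2 = <A, G A> with the Frobenius inner product <X, Y> = trace(X^T Y).
norm_G_sq = np.trace(A.T @ G @ A)
assert norm_G_sq >= 0            # nonnegative since G is PSD

# Smallest nonzero eigenvalue of a symmetric PSD matrix, and the pseudoinverse.
eigvals = np.linalg.eigvalsh(G)
smallest_nonzero = min(v for v in eigvals if v > 1e-10)
G_pinv = np.linalg.pinv(G)
# On Range(G), the pseudoinverse acts as a true inverse: G G^dagger G = G.
assert np.allclose(G @ G_pinv @ G, G)
assert smallest_nonzero > 0
```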
II Algorithms and prerequisites
One iteration of EXTRA can be expressed as

(5) $\mathbf{x}^{k+2} = (I + W)\mathbf{x}^{k+1} - \widetilde{W}\mathbf{x}^{k} - \alpha\left[\nabla f(\mathbf{x}^{k+1}) - \nabla f(\mathbf{x}^{k})\right].$

The stepsize $\alpha > 0$, and the symmetric matrices $W$ and $\widetilde{W}$ satisfy Assumption 1 below. The initial value $\mathbf{x}^0$ is chosen arbitrarily, and $\mathbf{x}^1 = W\mathbf{x}^0 - \alpha\nabla f(\mathbf{x}^0)$. In practice, we usually let $\widetilde{W} = \frac{I+W}{2}$.
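As a sanity check of the EXTRA iteration, the sketch below runs it on a toy instance of our own (not from the paper): scalar quadratics $f_i(x) = \frac{1}{2}(x - b_i)^2$ over a 3-agent network, assuming the common choice $\widetilde{W} = (I+W)/2$. The minimizer of the average is $\operatorname{mean}(b)$, and all local estimates should reach consensus on it.

```python
import numpy as np

# Toy instance (ours, for illustration): f_i(x) = 0.5 * (x - b_i)^2,
# so grad f_i(x) = x - b_i, and the global minimizer is mean(b).
b = np.array([1.0, 2.0, 6.0])
grad = lambda x: x - b                       # stacked gradient, one entry per agent

W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])           # symmetric, rows sum to 1
W_tilde = (np.eye(3) + W) / 2                # common choice for EXTRA
alpha = 0.5                                  # L = 1 here, well within range

# EXTRA: x^1 = W x^0 - alpha * grad(x^0), then
# x^{k+2} = (I+W) x^{k+1} - W_tilde x^k - alpha * (grad(x^{k+1}) - grad(x^k)).
x_prev = np.zeros(3)
x_curr = W @ x_prev - alpha * grad(x_prev)
for _ in range(500):
    x_next = (np.eye(3) + W) @ x_curr - W_tilde @ x_prev \
             - alpha * (grad(x_curr) - grad(x_prev))
    x_prev, x_curr = x_curr, x_next

assert np.allclose(x_curr, b.mean(), atol=1e-6)  # consensus at the minimizer
```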
One iteration of NIDS for solving (1) is

(6) $\mathbf{x}^{k+2} = \frac{I+W}{2}\left[2\mathbf{x}^{k+1} - \mathbf{x}^{k} - \alpha\left(\nabla f(\mathbf{x}^{k+1}) - \nabla f(\mathbf{x}^{k})\right)\right],$

where $\alpha > 0$ is the stepsize. The initial value $\mathbf{x}^0$ is chosen arbitrarily, and $\mathbf{x}^1 = \mathbf{x}^0 - \alpha\nabla f(\mathbf{x}^0)$.
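Under the same kind of toy setup (again our own illustrative instance, not the paper's), NIDS can be sketched as follows; note that the mixing matrix multiplies the whole bracket, including the gradient difference, which is exactly where NIDS differs from EXTRA.

```python
import numpy as np

# Toy instance (ours): f_i(x) = 0.5 * (x - b_i)^2 over 3 agents;
# grad f_i(x) = x - b_i, and the consensus minimizer is mean(b).
b = np.array([1.0, 2.0, 6.0])
grad = lambda x: x - b

W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
W_tilde = (np.eye(3) + W) / 2
alpha = 1.0   # network-independent: only needs alpha < 2/L, with L = 1 here

# NIDS: x^1 = x^0 - alpha * grad(x^0), then
# x^{k+2} = W_tilde @ (2 x^{k+1} - x^k - alpha * (grad(x^{k+1}) - grad(x^k))).
x_prev = np.zeros(3)
x_curr = x_prev - alpha * grad(x_prev)
for _ in range(500):
    x_next = W_tilde @ (2 * x_curr - x_prev
                        - alpha * (grad(x_curr) - grad(x_prev)))
    x_prev, x_curr = x_curr, x_next

assert np.allclose(x_curr, b.mean(), atol=1e-6)  # consensus at the minimizer
```

The stepsize here is set by the objective alone; changing $W$ to another valid mixing matrix would not require retuning it, which is the "network-independent stepsize" that gives NIDS its name.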
If we choose $\widetilde{W} = \frac{I+W}{2}$ in (5), the difference between EXTRA and NIDS lies only in the communicated data, i.e., whether the gradient information is exchanged or not. However, this small difference brings big changes in the convergence [13]. In order for both algorithms to converge, we have the following assumptions on $W$ and $\widetilde{W}$.
Assumption 1 (Mixing matrix)
The connected network $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\}$ consists of a set of nodes $\mathcal{V} = \{1, 2, \ldots, n\}$ and a set of undirected edges $\mathcal{E}$. An undirected edge $(i, j) \in \mathcal{E}$ means that there is a connection between agents $i$ and $j$ and both agents can exchange data. The mixing matrices $W = [w_{ij}]$ and $\widetilde{W} = [\widetilde{w}_{ij}]$ satisfy:

(Decentralized property) If $i \neq j$ and $(i, j) \notin \mathcal{E}$, then $w_{ij} = \widetilde{w}_{ij} = 0$.

(Symmetry) $W = W^\top$, $\widetilde{W} = \widetilde{W}^\top$.

(Null space property) $\operatorname{Null}\{W - \widetilde{W}\} = \operatorname{span}\{\mathbf{1}\}$, $\operatorname{Null}\{I - \widetilde{W}\} \supseteq \operatorname{span}\{\mathbf{1}\}$.

(Spectral property)
Remark 1
Remark 2
From [12, Proposition 2.2], we obtain the null space property, which is a critical assumption for both algorithms.
Before showing the theoretical results of EXTRA and NIDS, we reformulate both algorithms.
Reformulation of EXTRA: We reformulate EXTRA by introducing a variable $\mathbf{d}^k$ as

(7a) $\mathbf{x}^{k+1} = \widetilde{W}\mathbf{x}^{k} - \alpha\nabla f(\mathbf{x}^{k}) - \mathbf{d}^{k},$
(7b) $\mathbf{d}^{k+1} = \mathbf{d}^{k} + (\widetilde{W} - W)\mathbf{x}^{k+1},$

with $\mathbf{d}^0 = (\widetilde{W} - W)\mathbf{x}^0$.
Proposition 1 (Equivalence)
The iteration (7) is equivalent to EXTRA in (5).
Proof:
Remark 3
By (7b) and the assumption on $\mathbf{d}^0$, each $\mathbf{d}^k$ is in $\operatorname{Range}(\widetilde{W} - W)$. In addition, $\mathbf{d}^k = (\widetilde{W} - W)\mathbf{z}^k$ for some $\mathbf{z}^k$.
Reformulation of NIDS: We adopt the following reformulation of NIDS from [13]:
(8a)  
(8b) 
with $\widetilde{W} = \frac{I+W}{2}$. The equivalence to (6) is shown in [13].
To establish the linear convergence of EXTRA and NIDS, we need the following two assumptions.
Assumption 2 (Solution existence)
There is a unique solution $x^*$ for the consensus problem (1).
Assumption 3 (Lipschitz differentiability and (restricted) strong convexity)
Each component $f_i$ is a proper, closed, and convex function with a Lipschitz continuous gradient:

(9) $\|\nabla f_i(x) - \nabla f_i(y)\| \le L\|x - y\|, \quad \forall x, y \in \mathbb{R}^p,$

where $L$ is the Lipschitz constant. Furthermore, $\bar{f}$ is (restricted) strongly convex with respect to $x^*$:

(10) $\langle \nabla \bar{f}(x) - \nabla \bar{f}(x^*),\ x - x^*\rangle \ge \mu\|x - x^*\|^2, \quad \forall x \in \mathbb{R}^p,$

where $\mu > 0$.
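For intuition, both conditions can be checked numerically on quadratics; the instance below is our own sketch, with $f_i(x) = \frac{1}{2}x^\top Q_i x$, Lipschitz constant $L = \max_i \lambda_{\max}(Q_i)$, minimizer $x^* = 0$, and $\mu$ taken as the smallest eigenvalue of the averaged Hessian.

```python
import numpy as np

rng = np.random.default_rng(1)

# Our own quadratic example: f_i(x) = 0.5 * x^T Q_i x with Q_i PSD,
# so grad f_i(x) = Q_i x and the average is minimized at x* = 0.
n, p = 3, 4
Qs = []
for _ in range(n):
    B = rng.standard_normal((p, 2))
    Qs.append(B @ B.T)                           # PSD, possibly rank-deficient

L = max(np.linalg.eigvalsh(Q)[-1] for Q in Qs)   # Lipschitz constant in (9)
Q_bar = sum(Qs) / n
mu = np.linalg.eigvalsh(Q_bar)[0]                # strong convexity parameter in (10)

x_star = np.zeros(p)
for _ in range(100):
    x = rng.standard_normal(p)
    y = rng.standard_normal(p)
    for Q in Qs:                                 # Lipschitz gradients, condition (9)
        assert np.linalg.norm(Q @ x - Q @ y) <= L * np.linalg.norm(x - y) + 1e-9
    # (restricted) strong convexity of the average at x*, condition (10)
    gap = (Q_bar @ x - Q_bar @ x_star) @ (x - x_star)
    assert gap >= mu * np.linalg.norm(x - x_star) ** 2 - 1e-9
```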
Proposition 2 ([12, Appendix A])
The following two statements are equivalent:

$\bar{f}$ is (restricted) strongly convex with respect to $x^*$;

For any $c > 0$, the penalized function $f(\mathbf{x}) + \frac{c}{2}\|\mathbf{x}\|_{I-W}^2$ is (restricted) strongly convex with respect to $\mathbf{x}^* = \mathbf{1}(x^*)^\top$, and the strong convexity parameter can be given explicitly.
III New Linear Convergence Results for EXTRA and NIDS
III-A Linear Convergence of EXTRA
For simplicity, we introduce some notation. Because of part 4 of Assumption 1, given mixing matrices $W$ and $\widetilde{W}$, there is a constant such that
(13)  
(14)  
(15)  
(16) 
Based on (13), we have
(17) 
Let $(\mathbf{x}^*, \mathbf{d}^*)$ be a fixed point of (7). It is straightforward to show that it satisfies

(18) $\alpha\nabla f(\mathbf{x}^*) + \mathbf{d}^* = (\widetilde{W} - I)\mathbf{x}^*, \qquad (\widetilde{W} - W)\mathbf{x}^* = 0.$

Part 3 of Assumption 1 shows that $\mathbf{x}^*$ is consensual, i.e., $\mathbf{x}^* = \mathbf{1}(x^*)^\top$ for a certain $x^* \in \mathbb{R}^p$. The iteration in (7b) and the initialization of $\mathbf{d}^0$ show $\mathbf{d}^* \in \operatorname{Range}(\widetilde{W} - W)$, so $\mathbf{1}^\top\mathbf{d}^* = 0$. Then we have $\sum_{i=1}^{n}\nabla f_i(x^*) = 0$. Thus, $x^*$ is the optimal solution to the problem (1).
Lemma 1 (Norm over range space [13, Lemma 3])
For any symmetric positive (semi)definite matrix $M \in \mathbb{R}^{n\times n}$ with rank $r$ ($r \le n$), let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_r > 0$ be its nonzero eigenvalues. Then $\operatorname{Range}(M)$ is an $r$-dimensional subspace in $\mathbb{R}^n$ and has a norm defined by $\|\mathbf{x}\|_{M^\dagger} = \sqrt{\langle \mathbf{x}, M^\dagger\mathbf{x}\rangle}$, where $M^\dagger$ is the pseudo inverse of $M$. In addition, $\lambda_1^{-1}\|\mathbf{x}\|^2 \le \|\mathbf{x}\|_{M^\dagger}^2 \le \lambda_r^{-1}\|\mathbf{x}\|^2$ for all $\mathbf{x} \in \operatorname{Range}(M)$.
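A numerical check of this lemma (our own sketch, assuming the standard sandwich bounds by the extreme nonzero eigenvalues): build a rank-deficient PSD matrix $M$, take $\mathbf{x} \in \operatorname{Range}(M)$, and compare $\|\mathbf{x}\|_{M^\dagger}^2$ against $\|\mathbf{x}\|^2$.

```python
import numpy as np

rng = np.random.default_rng(2)

# PSD matrix of rank r = 2 in R^4 (our own test data).
B = rng.standard_normal((4, 2))
M = B @ B.T
eigvals = np.sort(np.linalg.eigvalsh(M))[::-1]   # lambda_1 >= ... >= lambda_4
lam1, lam_r = eigvals[0], eigvals[1]             # the two nonzero eigenvalues

M_pinv = np.linalg.pinv(M)
x = M @ rng.standard_normal(4)                   # x lies in Range(M)
norm_sq = x @ x
norm_pinv_sq = x @ M_pinv @ x                    # ||x||_{M^dagger}^2

# lambda_1^{-1} ||x||^2  <=  ||x||_{M^dagger}^2  <=  lambda_r^{-1} ||x||^2
assert norm_sq / lam1 - 1e-9 <= norm_pinv_sq <= norm_sq / lam_r + 1e-9
```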
For simplicity, in the proofs we drop the iteration superscript and write, e.g., $\mathbf{x}$ and $\mathbf{d}$ for $\mathbf{x}^k$ and $\mathbf{d}^k$, respectively. The same simplification applies to other iterates.
Lemma 2 (Norm equality)
Let $\{(\mathbf{x}^k, \mathbf{d}^k)\}_{k \ge 0}$ be the sequence generated by (7). Then it satisfies
(19) 
Proof:
Lemma 3 (A key inequality for EXTRA)
Let $\{(\mathbf{x}^k, \mathbf{d}^k)\}_{k \ge 0}$ be the sequence generated by (7). Then we have
(21) 
Proof:
In the following theorem, we assume $W \neq \widetilde{W}$ (i.e., $\widetilde{W} - W \neq 0$). It is easy to amend the proof to show the result for the special case $W = \widetilde{W}$.