This paper considers the optimization problem

$$\min_{x \in \mathbb{R}^p} \ f(x) := \frac{1}{n}\sum_{i=1}^{n} f_i(x) \qquad (1)$$

over an $n$-agent network. Each function $f_i$ is known only by the corresponding agent $i$ and is assumed to be convex and differentiable. These agents form a connected network to solve the problem (1) cooperatively without knowing other agents' functions. The whole system is decentralized: each agent keeps an estimate of the global variable $x$ and can only exchange this estimate with its accessible neighbors at every iteration. We introduce the local variables
$x_1, \dots, x_n \in \mathbb{R}^p$, where each $x_i$ is a local estimate of the global variable and its $k$th iterate is $x_i^k$. A symmetric mixing matrix $W \in \mathbb{R}^{n \times n}$ encodes the communication between the agents. The minimum condition for $W$ is that it has one eigenvalue equal to $1$ and all other eigenvalues are smaller than $1$. In addition, the all-one vector $\mathbf{1}$ is an eigenvector of $W$ corresponding to the eigenvalue $1$ (this is satisfied when the sum of each row of $W$ is $1$).
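As a concrete illustration (not from the paper), the following sketch builds a mixing matrix for a small path graph using Metropolis-Hastings weights — the graph and the weighting rule are assumptions for the example — and checks the spectral properties stated above.

```python
import numpy as np

# Illustrative example: Metropolis-Hastings weights on a 4-agent path graph
# 0-1-2-3 give a symmetric mixing matrix W whose rows sum to 1, with a single
# eigenvalue equal to 1 and all other eigenvalues strictly below 1.
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
deg = np.zeros(n)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / (1.0 + max(deg[i], deg[j]))
W += np.diag(1.0 - W.sum(axis=1))              # make each row sum to 1

eigvals = np.sort(np.linalg.eigvalsh(W))
assert np.allclose(W, W.T)                     # symmetry
assert np.allclose(W @ np.ones(n), np.ones(n)) # 1 is an eigenvector for eigenvalue 1
assert np.isclose(eigvals[-1], 1.0) and eigvals[-2] < 1.0
```

This construction is one common way to satisfy the eigenvalue conditions on a connected undirected graph; any symmetric matrix with the same spectral properties would do.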
Early decentralized methods based on decentralized gradient descent [1, 2, 3, 4, 5] have sublinear convergence even for strongly convex objective functions, because of the diminishing stepsize that is needed to obtain a consensual and optimal solution. This sublinear convergence rate is much slower than that of centralized methods. The first decentralized algorithms with linear convergence are based on the Alternating Direction Method of Multipliers (ADMM) [7, 8]. Note that this type of algorithm has a sublinear rate for general convex functions [9, 10, 11]. After that, many linearly convergent algorithms were proposed. Some examples are EXTRA [12], NIDS [13], DIGing [14, 15], ESOM [16], gradient tracking methods [17, 18, 19, 15, 14, 20, 21], exact diffusion [22, 23], and dual optimal methods [24, 25]. There are also works on composite functions, where each private function is the sum of a smooth and a nonsmooth function [26, 13, 27, 28]. Another topic of interest is decentralized optimization over directed and dynamic graphs [29, 30, 31, 32, 14, 33, 34]. Interested readers can refer to these works and the references therein for more algorithms.
This paper focuses on two linearly convergent algorithms, EXTRA and NIDS, and provides better theoretical convergence results for them. The EXact firsT-ordeR Algorithm (EXTRA) was proposed in [12], and its iteration is described in (5). There are conditions on the stepsize $\alpha$ for its convergence. For the general convex case, where each $f_i$ is convex and $L$-smooth (i.e., has an $L$-Lipschitz continuous gradient), the condition in [12] bounds $\alpha$ by a multiple of $\lambda_{\min}(\widetilde{W})/L$. Therefore, there is an implicit condition that the smallest eigenvalue of $\widetilde{W}$ is positive. Later the condition on $\alpha$ was relaxed in [13], and the corresponding requirement on the smallest eigenvalue of $W$ is weaker. In addition, this condition for the stepsize is shown to be optimal, i.e., EXTRA may diverge if the condition is not satisfied. Though we can always manipulate $W$ to change its smallest eigenvalue, the convergence speed of EXTRA depends on the matrix $W$. In the numerical experiments, we will see that it is beneficial to choose small eigenvalues for EXTRA in certain scenarios.
The linear convergence of EXTRA requires additional conditions on the functions. There are mainly three types of conditions used in the literature: the strong convexity of the average function $f$ (and some weaker variants), the strong convexity of each $f_i$ (and some weaker variants), and the strong convexity of one function $f_i$. Note that the condition on $f$ is much weaker than the other two; there are cases where $f$ is strongly convex but none of the $f_i$'s is. E.g., for $f_i(x) = \frac{n}{2}(e_i^{\top} x)^2$, where $e_i$ is the vector whose $i$th component is $1$ and all other components are $0$, the average $f(x) = \frac{1}{2}\|x\|^2$ is strongly convex while each $f_i$ is not. If $f$ is (restricted) strongly convex with parameter $\mu$, the linear convergence of EXTRA is shown in [12] under a stepsize upper bound on the order of $\mu/L^2$. This upper bound is very conservative, and better performance with a larger stepsize was shown numerically without proof. If each $f_i$ is strongly convex with parameter $\mu$, the linear convergence is shown under larger stepsize bounds in [13] and other works. One contribution of this paper is to show the linear convergence of EXTRA under the (restricted) strong convexity of $f$ together with a large stepsize.
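The gap between the three conditions can be checked numerically. The sketch below assumes the scaling $f_i(x) = \frac{n}{2}(e_i^{\top}x)^2$ (our choice for illustration) and verifies that each local Hessian is rank one while the average Hessian is the identity.

```python
import numpy as np

# Illustration with the assumed scaling f_i(x) = (n/2) * (e_i^T x)^2:
# each Hessian n * e_i e_i^T is rank one (f_i is not strongly convex), while
# the average Hessian (1/n) * sum_i n e_i e_i^T = I is that of (1/2)||x||^2.
n = 5
E = np.eye(n)
hessians = [n * np.outer(E[i], E[i]) for i in range(n)]

for H in hessians:
    assert np.linalg.matrix_rank(H) == 1       # no single f_i is strongly convex

H_avg = sum(hessians) / n
assert np.allclose(H_avg, np.eye(n))           # the average f is 1-strongly convex
```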
The algorithm NIDS (Network InDependent Step-size) was proposed in [13]. Though it differs from EXTRA only slightly, NIDS can choose a stepsize that does not depend on the mixing matrices. The convergence of NIDS is shown when $\alpha < 2/L$. The result for linear convergence requires the strong convexity of each $f_i$. Another contribution of this paper is the linear convergence of NIDS under the (restricted) strong convexity of $f$ and relaxed conditions on the mixing matrices, while keeping $\alpha < 2/L$.
In sum, we provide new and stronger linear convergence results for both EXTRA and NIDS. More specifically,
We show the linear convergence of EXTRA under the strong convexity of $f$ and a relaxed condition on the mixing matrices. The upper bound of the stepsize can be as large as the bound shown to be optimal in [13] for general convex problems;
We show the linear convergence of NIDS with the same conditions on $f$ and the mixing matrices as EXTRA, while the large network-independent stepsize $\alpha < 2/L$ is kept.
Since agent $i$ has its own estimate $x_i$ of the global variable, we put them together and define

$$\mathbf{x} = [x_1, x_2, \dots, x_n]^{\top} \in \mathbb{R}^{n \times p}.$$
The gradient of $\mathbf{f}(\mathbf{x}) := \sum_{i=1}^{n} f_i(x_i)$ is defined as

$$\nabla \mathbf{f}(\mathbf{x}) = [\nabla f_1(x_1), \nabla f_2(x_2), \dots, \nabla f_n(x_n)]^{\top} \in \mathbb{R}^{n \times p}.$$
We say that $\mathbf{x}$ is consensual if $x_1 = x_2 = \cdots = x_n$, i.e., $\mathbf{x} = \mathbf{1}x^{\top}$, where $x \in \mathbb{R}^p$ and $\mathbf{1} = [1, 1, \dots, 1]^{\top} \in \mathbb{R}^n$.
In this paper, we use $\|\cdot\|$ and $\langle \cdot, \cdot \rangle$ to denote the Frobenius norm and the corresponding inner product, respectively. For a given matrix $\mathbf{x}$ and any positive (semi)definite matrix $A$, which is denoted as $A \succ 0$ ($A \succeq 0$ for positive semidefinite), we define $\|\mathbf{x}\|_A^2 = \langle \mathbf{x}, A\mathbf{x} \rangle$. The largest and the smallest eigenvalues of a matrix $A$ are denoted as $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$. For a symmetric positive semidefinite matrix $A$, we let $\widetilde{\lambda}_{\min}(A)$ be the smallest nonzero eigenvalue, and $A^{\dagger}$ is the pseudoinverse of $A$. For a matrix $A$, we say a matrix $\mathbf{x}$ is in $\mathrm{Null}(A)$ if $A\mathbf{x} = \mathbf{0}$, and it is in $\mathrm{Range}(A)$ if there exists $\mathbf{y}$ such that $\mathbf{x} = A\mathbf{y}$. For simplicity, we may use shortened notations for these quantities in the proofs.
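A small numerical sketch of this notation — the weighted norm, the smallest nonzero eigenvalue, and the pseudoinverse. The concrete matrix below is an arbitrary illustration, not from the paper.

```python
import numpy as np

# Sketch of the notation: the A-weighted norm ||x||_A^2 = <x, A x> for a
# positive semidefinite A, its smallest nonzero eigenvalue, and its
# pseudoinverse, which inverts A only on its range.
A = np.array([[2.0, 0.0],
              [0.0, 0.0]])                     # PSD, rank 1
x = np.array([3.0, 4.0])

norm_A_sq = x @ A @ x                          # ||x||_A^2 = 2 * 3^2 = 18
assert np.isclose(norm_A_sq, 18.0)

eigs = np.linalg.eigvalsh(A)
smallest_nonzero = min(e for e in eigs if e > 1e-12)
assert np.isclose(smallest_nonzero, 2.0)

A_pinv = np.linalg.pinv(A)                     # pseudoinverse of A
assert np.allclose(A_pinv, [[0.5, 0.0], [0.0, 0.0]])
```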
II. Algorithms and Prerequisites
One iteration of EXTRA can be expressed as

$$\mathbf{x}^{k+2} = (I + W)\mathbf{x}^{k+1} - \widetilde{W}\mathbf{x}^{k} - \alpha\left[\nabla \mathbf{f}(\mathbf{x}^{k+1}) - \nabla \mathbf{f}(\mathbf{x}^{k})\right]. \qquad (5)$$
The stepsize $\alpha > 0$, and the symmetric matrices $W$ and $\widetilde{W}$ satisfy Assumption 1 below. The initial value $\mathbf{x}^0$ is chosen arbitrarily, and $\mathbf{x}^1 = W\mathbf{x}^0 - \alpha \nabla \mathbf{f}(\mathbf{x}^0)$. In practice, we usually let $\widetilde{W} = \frac{I + W}{2}$.
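To make the recursion concrete, here is a minimal sketch of EXTRA on a toy decentralized least-squares problem with scalar local variables, assuming the recursion stated above with $\widetilde{W} = (I+W)/2$; the problem data, the mixing matrix, and the stepsize are illustrative choices, not from the paper.

```python
import numpy as np

# Toy sketch of the EXTRA recursion: f_i(x) = 0.5*(a_i*x - b_i)^2 with scalar
# unknowns, so the stacked iterate x holds one scalar estimate per agent.
rng = np.random.default_rng(0)
n = 3
a = rng.standard_normal(n) + 2.0
b = rng.standard_normal(n)
x_opt = (a * b).sum() / (a * a).sum()          # minimizer of the average function

W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])                # symmetric, rows sum to 1
W_t = (np.eye(n) + W) / 2                      # common choice: W~ = (I + W)/2

grad = lambda x: a * (a * x - b)               # stacked local gradients
L = (a * a).max()                              # smoothness constant
alpha = 0.3 / L                                # conservative illustrative stepsize

x_prev = np.zeros(n)
x_curr = W @ x_prev - alpha * grad(x_prev)     # first step of EXTRA
for _ in range(5000):
    x_next = (np.eye(n) + W) @ x_curr - W_t @ x_prev \
             - alpha * (grad(x_curr) - grad(x_prev))
    x_prev, x_curr = x_curr, x_next

assert np.allclose(x_curr, x_opt, atol=1e-6)   # all agents agree on the minimizer
```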
One iteration of NIDS for solving (1) is

$$\mathbf{x}^{k+2} = \widetilde{W}\left[2\mathbf{x}^{k+1} - \mathbf{x}^{k} - \alpha\left(\nabla \mathbf{f}(\mathbf{x}^{k+1}) - \nabla \mathbf{f}(\mathbf{x}^{k})\right)\right],$$
where $\alpha > 0$ is the stepsize and $\widetilde{W} = \frac{I+W}{2}$. The initial value $\mathbf{x}^0$ is chosen arbitrarily, and $\mathbf{x}^1 = \mathbf{x}^0 - \alpha \nabla \mathbf{f}(\mathbf{x}^0)$.
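A matching sketch of NIDS on the same kind of toy problem; the only theoretical requirement on the stepsize is $\alpha < 2/L$, independent of the network. All data below are illustrative assumptions.

```python
import numpy as np

# Toy sketch of the NIDS recursion on decentralized scalar least squares,
# f_i(x) = 0.5*(a_i*x - b_i)^2, using a network-independent stepsize.
n = 3
a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 0.0, -1.0])
x_opt = (a * b).sum() / (a * a).sum()          # minimizer of the average: -1/7

W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
W_t = (np.eye(n) + W) / 2                      # NIDS uses W~ = (I + W)/2

grad = lambda x: a * (a * x - b)
L = (a * a).max()
alpha = 1.0 / L                                # any alpha < 2/L works in theory

x_prev = np.zeros(n)
x_curr = x_prev - alpha * grad(x_prev)         # first step of NIDS
for _ in range(3000):
    x_next = W_t @ (2 * x_curr - x_prev
                    - alpha * (grad(x_curr) - grad(x_prev)))
    x_prev, x_curr = x_curr, x_next

assert np.allclose(x_curr, x_opt, atol=1e-6)   # consensus on the minimizer
```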
If we choose $\widetilde{W} = \frac{I+W}{2}$ in (5), the difference between EXTRA and NIDS lies only in the communicated data, i.e., whether or not the gradient information is exchanged. However, this small difference brings big changes in the convergence. In order for both algorithms to converge, we have the following assumptions on $W$ and $\widetilde{W}$.
Assumption 1 (Mixing matrix)
The connected network $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ consists of a set of nodes $\mathcal{V} = \{1, 2, \dots, n\}$ and a set of undirected edges $\mathcal{E}$. An undirected edge $(i, j) \in \mathcal{E}$ means that there is a connection between agents $i$ and $j$ and both agents can exchange data. The mixing matrices $W$ and $\widetilde{W}$ satisfy:
(Decentralized property) If $i \neq j$ and $(i, j) \notin \mathcal{E}$, then $W_{ij} = \widetilde{W}_{ij} = 0$.
(Symmetry) $W = W^{\top}$, $\widetilde{W} = \widetilde{W}^{\top}$.
(Null space property) $\mathrm{Null}(W - \widetilde{W}) = \mathrm{span}\{\mathbf{1}\}$ and $\mathrm{Null}(I - \widetilde{W}) \supseteq \mathrm{span}\{\mathbf{1}\}$.
From [12, Proposition 2.2], these conditions guarantee that $\mathbf{x}$ is consensual if and only if $(\widetilde{W} - W)\mathbf{x} = \mathbf{0}$, which is a critical property for both algorithms.
Before showing the theoretical results of EXTRA and NIDS, we reformulate both algorithms.
Reformulation of EXTRA: We reformulate EXTRA by introducing a variable $\mathbf{q}^k$ as

$$\mathbf{x}^{k+1} = \widetilde{W}\mathbf{x}^{k} - \alpha \nabla \mathbf{f}(\mathbf{x}^{k}) - \mathbf{q}^{k}, \qquad (7a)$$
$$\mathbf{q}^{k+1} = \mathbf{q}^{k} + (\widetilde{W} - W)\mathbf{x}^{k+1}, \qquad (7b)$$

with $\mathbf{q}^0 = (\widetilde{W} - W)\mathbf{x}^0$. By (7b) and the assumption on $\mathbf{q}^0$, each $\mathbf{q}^k$ is in $\mathrm{Range}(\widetilde{W} - W)$. In addition, $\mathbf{q}^k = (\widetilde{W} - W)\mathbf{z}^k$ for some $\mathbf{z}^k$.
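As a sanity check, the following sketch verifies numerically that a primal-dual rewriting of EXTRA — $\mathbf{x}^{k+1} = \widetilde{W}\mathbf{x}^{k} - \alpha\nabla\mathbf{f}(\mathbf{x}^{k}) - \mathbf{q}^{k}$ with $\mathbf{q}^{k+1} = \mathbf{q}^{k} + (\widetilde{W} - W)\mathbf{x}^{k+1}$ and $\mathbf{q}^0 = (\widetilde{W} - W)\mathbf{x}^0$, our reading of the reformulation (7) — generates exactly the same iterates as the EXTRA recursion; all problem data are illustrative.

```python
import numpy as np

# Check: the primal-dual form with q^0 = (W~ - W) x^0 reproduces EXTRA exactly
# (subtracting consecutive primal updates recovers the two-step recursion).
n = 3
rng = np.random.default_rng(1)
a = rng.standard_normal(n) + 2.0
b = rng.standard_normal(n)
grad = lambda x: a * (a * x - b)               # toy stacked local gradients

W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
W_t = (np.eye(n) + W) / 2
alpha = 0.05

# EXTRA, two-step form, reaching x^{51}
x0 = rng.standard_normal(n)
xp, xc = x0, W @ x0 - alpha * grad(x0)
for _ in range(50):
    xp, xc = xc, (np.eye(n) + W) @ xc - W_t @ xp - alpha * (grad(xc) - grad(xp))

# Reformulated iteration, also reaching x^{51}
x, q = x0, (W_t - W) @ x0
for _ in range(51):
    x_new = W_t @ x - alpha * grad(x) - q
    q = q + (W_t - W) @ x_new
    x = x_new

assert np.allclose(x, xc)                      # identical trajectories
```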
Reformulation of NIDS: We adopt the following reformulation of NIDS from [13]:

$$\mathbf{x}^{k+1} = \mathbf{x}^{k} - \alpha \nabla \mathbf{f}(\mathbf{x}^{k}) - \mathbf{q}^{k},$$
$$\mathbf{q}^{k+1} = \mathbf{q}^{k} + (I - \widetilde{W})\left(\mathbf{x}^{k+1} - \alpha \nabla \mathbf{f}(\mathbf{x}^{k+1}) - \mathbf{q}^{k}\right),$$

with $\mathbf{q}^0 = \mathbf{0}$. The equivalence is shown in [13].
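The equivalence can also be checked numerically. The sketch below assumes the primal-dual form $\mathbf{x}^{k+1} = \mathbf{x}^{k} - \alpha\nabla\mathbf{f}(\mathbf{x}^{k}) - \mathbf{q}^{k}$, $\mathbf{q}^{k+1} = \mathbf{q}^{k} + (I - \widetilde{W})(\mathbf{x}^{k+1} - \alpha\nabla\mathbf{f}(\mathbf{x}^{k+1}) - \mathbf{q}^{k})$ with $\mathbf{q}^0 = \mathbf{0}$ (our reading of the reformulation) and compares it against the two-step NIDS recursion on toy data.

```python
import numpy as np

# Check: the primal-dual form with q^0 = 0 reproduces the NIDS recursion
# x^{k+2} = W~ (2 x^{k+1} - x^k - alpha*(grad^{k+1} - grad^k)) exactly.
n = 3
a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 0.0, -1.0])
grad = lambda x: a * (a * x - b)               # toy stacked local gradients

W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
W_t = (np.eye(n) + W) / 2
alpha = 0.1

# NIDS, two-step form, reaching x^{51}
x0 = np.array([1.0, -1.0, 0.5])
xp, xc = x0, x0 - alpha * grad(x0)             # first step of NIDS
for _ in range(50):
    xp, xc = xc, W_t @ (2 * xc - xp - alpha * (grad(xc) - grad(xp)))

# Reformulated iteration, also reaching x^{51}
x, q = x0, np.zeros(n)
for _ in range(51):
    x_new = x - alpha * grad(x) - q
    q = q + (np.eye(n) - W_t) @ (x_new - alpha * grad(x_new) - q)
    x = x_new

assert np.allclose(x, xc)                      # identical trajectories
```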
To establish the linear convergence of EXTRA and NIDS, we need the following two assumptions.
Assumption 2 (Solution existence)
There is a unique solution $x^*$ to the consensus problem (1).
Assumption 3 (Lipschitz differentiability and (restricted) strong convexity)
Each component function $f_i$ is a proper, closed, and convex function with a Lipschitz continuous gradient:

$$\|\nabla f_i(x) - \nabla f_i(y)\| \le L\|x - y\|, \quad \forall x, y \in \mathbb{R}^p,$$

where $L$ is the Lipschitz constant. Furthermore, $f$ is (restricted) strongly convex with respect to $x^*$:

$$\langle \nabla f(x) - \nabla f(x^*), x - x^* \rangle \ge \mu \|x - x^*\|^2, \quad \forall x \in \mathbb{R}^p.$$
Proposition 2 ([12, Appendix A])
The following two statements are equivalent:
1) $f$ is (restricted) strongly convex with respect to $x^*$;
2) the aggregate function built from the $f_i$'s is (restricted) strongly convex with respect to the consensual point $\mathbf{x}^* = \mathbf{1}(x^*)^{\top}$ for a suitable choice of the strong convexity parameter.
III. New Linear Convergence Results for EXTRA and NIDS
III-A. Linear Convergence of EXTRA
Based on (13), we have
Let $(\mathbf{x}^*, \mathbf{q}^*)$ be a fixed point of (7). It is straightforward to show that it satisfies
Lemma 1 (Norm over range space [13, Lemma 3])
For any symmetric positive (semi)definite matrix $A \in \mathbb{R}^{n \times n}$ with rank $r$ ($r \le n$), let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_r > 0$ be its nonzero eigenvalues. Then $\mathrm{Range}(A)$ is an $r$-dimensional subspace of $\mathbb{R}^{n}$ and has a norm defined by $\|\mathbf{x}\|_{A^{\dagger}} = \sqrt{\langle \mathbf{x}, A^{\dagger}\mathbf{x} \rangle}$, where $A^{\dagger}$ is the pseudoinverse of $A$. In addition, $\lambda_r \|\mathbf{x}\|_{A^{\dagger}}^2 \le \|\mathbf{x}\|^2 \le \lambda_1 \|\mathbf{x}\|_{A^{\dagger}}^2$ for all $\mathbf{x} \in \mathrm{Range}(A)$.
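The sandwich between $\|\cdot\|$ and the $A^{\dagger}$-weighted norm on $\mathrm{Range}(A)$ can be sanity-checked numerically; the PSD matrix below is an arbitrary illustration.

```python
import numpy as np

# Check on Range(A): lam_min_nonzero * ||x||_{A^+}^2 <= ||x||^2
#                    and ||x||^2 <= lam_max * ||x||_{A^+}^2.
rng = np.random.default_rng(2)
B = rng.standard_normal((4, 2))
A = B @ B.T                                    # PSD with rank 2
A_pinv = np.linalg.pinv(A)

eigs = np.sort(np.linalg.eigvalsh(A))
lam_max = eigs[-1]
lam_min_nz = min(e for e in eigs if e > 1e-10) # smallest nonzero eigenvalue

x = A @ rng.standard_normal(4)                 # a point in Range(A)
norm_sq = x @ x                                # ||x||^2
norm_pinv_sq = x @ A_pinv @ x                  # ||x||_{A^+}^2

assert lam_min_nz * norm_pinv_sq <= norm_sq + 1e-9
assert norm_sq <= lam_max * norm_pinv_sq + 1e-9
```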
For simplicity, we use abbreviated notations for these quantities in the proofs.
Lemma 2 (Norm equality)
Let $\{(\mathbf{x}^k, \mathbf{q}^k)\}$ be the sequence generated by (7). Then it satisfies
Lemma 3 (A key inequality for EXTRA)
Let $\{(\mathbf{x}^k, \mathbf{q}^k)\}$ be the sequence generated by (7). Then we have
In the following theorem, we exclude one special case for brevity; it is easy to amend the proof to show the result for that special case.
Theorem 1 (Q-linear convergence of EXTRA)
Then we find an upper bound of the convergence factor as