1 Introduction
In this work, we consider the following linearly constrained optimization problem:
(1) $\min_{x}\; f(x)$, s.t. $Ax = b$,
where $f$ is a smooth function (possibly nonconvex); $A$ is not full column rank;
$b$ is a known vector.
An important application of problem (1) is nonconvex distributed optimization and learning – a problem that has gained considerable attention recently and has found applications in training neural networks
[1], distributed information processing and machine learning
[2, 3], and distributed signal processing [4]. In distributed optimization and learning, the common setup is that a network of $N$ distributed agents collectively optimizes the following problem:
(2) $\min_{y}\; \sum_{i=1}^{N} f_i(y) + h(y)$,
where $f_i$ is a function local to agent $i$ (note, for notational simplicity we assume that $y$ is a scalar); $h$ represents a smooth regularization function known to all agents. Below we present two problem formulations based on different topologies and application scenarios.
Scenario 1: The Global Consensus. Suppose that all the agents are connected to a single central node. The distributed agents can communicate with the central controller, but they are not able to directly communicate among themselves. In this case problem (2) can be equivalently formulated as the following global consensus problem [5, 3]
(3) $\min\; \sum_{i=1}^{N} f_i(x_i) + h(x_0)$, s.t. $x_i = x_0,\ i = 1, \dots, N$.
The setting of the above global consensus problem is popular in applications such as parallel computing, in which a central controller can orchestrate the activity of all agents; see [6, 7]. To cast the problem into the form of (1), define
(4) 
where $I$ is the identity matrix and $\mathbf{1}$ is the all-one vector.
Scenario 2: Distributed Optimization Over Networks. Suppose that there is no central controller, and the agents are connected by a network defined by an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, with $N$ vertices and $E$ edges. Each agent can only communicate with its immediate neighbors, and it can access one component function $f_i$. This problem has wide applications, ranging from distributed communication networking [8] and distributed and parallel machine learning [2, 9, 10] to distributed signal processing [11].
Define the node-edge incidence matrix $A$ as follows: if edge $e \in \mathcal{E}$ connects vertices $i$ and $j$ with $i < j$, then $A[e, i] = 1$, $A[e, j] = -1$, and $A[e, k] = 0$ otherwise. Introduce local variables $x_i$, one per agent, and suppose the graph is connected. Then the following formulation is equivalent to the global consensus problem, and is precisely in the form of problem (1):
(5) 
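To make this construction concrete, the following sketch (our own illustration, using a hypothetical 4-node path graph) builds the signed node-edge incidence matrix and checks that, for a connected graph, its null space consists exactly of the consensus vectors:

```python
import numpy as np

def incidence_matrix(n_nodes, edges):
    """Signed node-edge incidence matrix: for edge e = (i, j) with i < j,
    row e has +1 in column i, -1 in column j, and 0 elsewhere."""
    B = np.zeros((len(edges), n_nodes))
    for e, (i, j) in enumerate(edges):
        B[e, i] = 1.0
        B[e, j] = -1.0
    return B

# Hypothetical example: a connected path graph on 4 nodes
edges = [(0, 1), (1, 2), (2, 3)]
B = incidence_matrix(4, edges)

# For a connected graph, B x = 0 exactly when all entries of x agree,
# so the linear constraint encodes consensus among the agents.
print(np.allclose(B @ (2.5 * np.ones(4)), 0))        # True: consensus vector
print(np.allclose(B @ np.array([1.0, 2, 3, 4]), 0))  # False: disagreement
```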
1.1 The objective of this work
The research question we attempt to address in this work is:
(Q) Can we design primal-dual algorithms capable of computing second-order stationary solutions for (1)?
Let us first analyze the first-order stationary (ss1) and second-order stationary (ss2) solutions for problem (1). For a general smooth nonlinear problem of the form
(6) $\min_x\; f(x)$, s.t. $h(x) = 0$,
the first-order necessary condition is given as
(7) $\nabla f(x^*) + \nabla h(x^*)\lambda^* = 0$.
The second-order necessary condition is given below [see Proposition 3.1.1 in [12]]. Suppose $x^*$ is regular; then
(8) $y^T\big(\nabla^2 f(x^*) + \sum_i \lambda_i^* \nabla^2 h_i(x^*)\big) y \ge 0$, for all $y$ with $\nabla h(x^*)^T y = 0$.
Applying the above result to our problem, we obtain the following first- and second-order necessary conditions for problem (1) (note that for linear constraints, no further regularity is needed for the existence of multipliers):
(9a) $\nabla f(x^*) + A^T \lambda^* = 0$, $Ax^* = b$,
(9b) $y^T \nabla^2 f(x^*)\, y \ge 0$, for all $y$ with $Ay = 0$.
In other words, the second-order necessary condition is equivalent to the condition that $\nabla^2 f(x^*)$ is positive semidefinite on the null space of $A$. Similarly, the sufficient condition for a strict local minimizer is given by
(10) $y^T \nabla^2 f(x^*)\, y > 0$, for all $y \ne 0$ with $Ay = 0$.
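As an illustration of how these conditions can be verified numerically (our own sketch, not part of the development above), one can restrict the Hessian to an orthonormal basis of the null space of the constraint matrix obtained from an SVD; the `is_ss2` helper and the consensus-type constraint below are hypothetical:

```python
import numpy as np

def is_ss2(hess, A, tol=1e-8):
    """Second-order necessary condition (9b): the Hessian must be positive
    semidefinite on the null space of the constraint matrix A."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    Z = Vt[rank:].T                    # orthonormal basis of null(A)
    if Z.shape[1] == 0:                # trivial null space: vacuously true
        return True
    return np.linalg.eigvalsh(Z.T @ hess @ Z).min() >= -tol

# Hypothetical instance: consensus constraints and an indefinite Hessian
A = np.array([[1.0, -1.0, 0.0],
              [0.0,  1.0, -1.0]])
H = np.diag([-1.0, 2.0, 2.0])
print(np.linalg.eigvalsh(H).min() < 0)  # True: H is indefinite on the full space
print(is_ss2(H, A))                     # True: yet H is PSD on null(A)
```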
To proceed, we need the following claim [see Lemma 3.2.1 in [12]].
Claim 1.1
Let $P$ and $Q$ be two symmetric matrices. Assume that $Q$ is positive semidefinite and $P$ is positive definite on the null space of $Q$, that is, $x^T P x > 0$ for all $x \ne 0$ with $x^T Q x = 0$. Then there exists a scalar $\bar{c}$ such that
(11) $P + c\,Q \succ 0$, for all $c \ge \bar{c}$.
Conversely, if there exists a scalar $c$ such that (11) is true, then we have $x^T P x > 0$ for all $x \ne 0$ with $x^T Q x = 0$.
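Claim 1.1 can be sanity-checked numerically. In the hypothetical 2-by-2 instance below, $P$ is indefinite on the full space but positive definite on the null space of the PSD matrix $Q$, and adding a sufficiently large multiple of $Q$ renders the sum positive definite:

```python
import numpy as np

# Hypothetical example of Claim 1.1: Q is PSD, and P is positive definite
# on the null space of Q (here, the span of the first coordinate axis).
P = np.array([[1.0, 2.0], [2.0, -1.0]])
Q = np.array([[0.0, 0.0], [0.0, 1.0]])

def min_eig(M):
    return np.linalg.eigvalsh(M).min()

print(min_eig(P) < 0)            # True: P itself is indefinite
print(min_eig(P + 6.0 * Q) > 0)  # True: P + cQ is PD once c is large enough
```

For this instance the threshold is $\bar{c} = 5$: for $c = 4$ the sum is still indefinite, while for $c = 6$ it is positive definite, matching the claim.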
It is worth mentioning that checking both of the above sufficient and necessary conditions can be done in polynomial time; however, when there are inequality constraints, checking second-order conditions can be NP-hard; see [13]. In the following, we will refer to a solution satisfying condition (9a) as an ss1 solution and one satisfying (9b) as an ss2 solution. Based on the above definitions, we define a strict saddle point to be a solution such that
(14) 
It is easy to verify using Claim 1.1 that the above condition implies that, for the same point, the following is true
(15)
where $\lambda_{\min}(\cdot)$ denotes the smallest eigenvalue of a matrix. Clearly, if an ss1 solution does not satisfy (14), i.e.,
(16)
then (9b) is true. In this work, we will develop primal-dual algorithms that avoid converging to the strict saddle points defined in (14).
1.2 Existing literature
Many recent works have focused on designing algorithms with convergence guarantees to local minimum points/ss2 for nonconvex unconstrained problems. These include second-order methods such as the trust region method [14], the cubic regularized Newton's method [15], and a hybrid of first-order and second-order methods [16]. When only gradient information is available, it has been shown that with random initialization, gradient descent (GD) converges to ss2 for unconstrained smooth problems with probability one [17]. Recently, a perturbed version of GD, which occasionally adds noise to the iterates, has been proposed [18]; such a method converges to ss2 at a faster rate than ordinary gradient descent with random initialization. When manifold constraints are present, it is shown in [19] that manifold gradient descent converges to ss2, provided that the iterates remain feasible at every step (ensured by performing a potentially expensive second-order retraction operation). However, there has been no work analyzing whether classical primal-dual gradient-type methods based on Lagrangian relaxation are also capable of computing ss2.
The consensus problems (2) and (5) have been studied extensively in the literature when the objective functions are all convex; see for example [20, 21, 22, 23]. Primal methods such as the distributed subgradient method [20] and the EXTRA method [22], as well as primal-dual methods such as the Alternating Direction Method of Multipliers (ADMM) [5, 24, 25], have been studied. In contrast, only recently have there been works addressing the more challenging problems without assuming convexity of the local functions; see recent developments in [26, 3, 4, 27]. In particular, reference [3] develops nonconvex ADMM-based methods (with a global sublinear convergence rate) for solving the global consensus problem (3). Reference [27] proposes a primal-dual method for unconstrained nonconvex distributed optimization over a connected network (without a central controller), and derives the first global convergence rate for distributed nonconvex optimization. In [4] the authors utilize a certain gradient tracking idea to solve a constrained nonsmooth distributed problem over possibly time-varying networks. It is worth noting that the distributed algorithms proposed in all these works converge to ss1. There have been no distributed schemes that provably converge to ss2 for smooth nonconvex problems of the form (2).
2 The Gradient Primal-Dual Algorithm
In this section, we introduce the gradient primal-dual algorithm (GPDA) for solving the nonconvex problem (1). Let us introduce the augmented Lagrangian (AL) as
(17) $L(x, \lambda) = f(x) + \langle \lambda, Ax - b \rangle + \frac{\rho}{2}\|Ax - b\|^2$,
where $\lambda$ is the dual variable and $\rho > 0$ is the penalty parameter. The steps of the GPDA algorithm are described in the table below.
Each iteration of the GPDA performs a gradient descent step on the AL (with a suitable primal stepsize), followed by one step of approximate dual gradient ascent (with stepsize $\rho$). The GPDA is closely related to the classical Uzawa primal-dual method [28], which has been utilized to solve convex saddle point problems and linearly constrained convex problems [29]. It is also related to the proximal method of multipliers (Prox-MM) first developed by Rockafellar in [30], in which a proximal term is added to the augmented Lagrangian in order to make it strongly convex in each iteration. The latter method has also been applied, for example, to solving certain large-scale linear programs; see [31]. However, the theoretical results derived for Prox-MM in [30, 31] only cover convex problems. Further, such an algorithm requires the proximal Lagrangian to be optimized with increasing accuracy as the algorithm progresses. Finally, we note that both steps (18a) and (18b) are decomposable over the variables, so they are easy to implement in a distributed manner (as will be explained shortly).
Algorithm 1. The gradient primal-dual algorithm. At iteration 0, initialize the primal and dual variables. At each iteration, update the variables by:
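To convey the flavor of the two steps, here is a minimal numerical sketch (with parameter values we picked for illustration; the precise update (18) and its stepsize conditions are those stated in the algorithm and analysis): a primal gradient step on the AL, followed by dual ascent, applied to a toy convex consensus instance:

```python
import numpy as np

def gpda(grad_f, A, b, x0, rho=1.0, beta=10.0, iters=5000):
    """Sketch of a gradient primal-dual iteration: one primal gradient
    step on the augmented Lagrangian, then one dual ascent step."""
    x, lam = x0.astype(float), np.zeros(A.shape[0])
    for _ in range(iters):
        grad_al = grad_f(x) + A.T @ lam + rho * A.T @ (A @ x - b)
        x = x - grad_al / beta            # primal step, stepsize 1/beta
        lam = lam + rho * (A @ x - b)     # dual ascent, stepsize rho
    return x, lam

# Toy instance: f(x) = 0.5 ||x - c||^2 under the consensus constraint A x = 0
c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, -1.0, 0.0],
              [0.0,  1.0, -1.0]])
x, lam = gpda(lambda x: x - c, A, np.zeros(2), np.zeros(3))
print(np.round(x, 3))   # all coordinates approach the average of c
```

On this convex instance the iterates settle at the consensus value, i.e., the mean of the entries of `c`, which is the constrained minimizer.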
2.1 Application to the distributed optimization problem
To see how the GPDA can be specialized to the problem of distributed optimization over the network (5), let us begin by writing the optimality condition of (18a). We have
(19) 
Subtracting from (19) its counterpart at the previous iteration, we obtain
where we have defined the corresponding difference term. Rearranging, and using the fact that the constraint matrix in (5) gives rise to the signed graph Laplacian, we obtain
(20) 
Consider problem (5); the above iteration can be implemented in a distributed manner, where each agent performs
where $\mathcal{N}_i$ is the set of neighbors of node $i$ and $d_i$ is the degree of node $i$. Clearly, to implement this iteration, each node only needs information from the past two iterations about its immediate neighbors.
2.2 Convergence to ss1 solutions
We first state our main assumptions.

[A1] The function $f$ is smooth and has Lipschitz continuous gradient, as well as Lipschitz continuous Hessian:
(21) (22)
[A2] The function $f$ is lower bounded. Without loss of generality, assume that its lower bound is zero.
[A3] The constraint $Ax = b$ is feasible. Further, $A$ is not full rank.
[A4] The function $f$ is coercive.
[A5] The function $f$ is proper and satisfies the Kurdyka-Łojasiewicz (KŁ) property. That is, at a point $\bar{x}$, there exist $\eta \in (0, \infty]$, a neighborhood $V$ of $\bar{x}$, and a continuous concave function $\phi : [0, \eta) \to \mathbb{R}_+$ such that: 1) $\phi(0) = 0$ and $\phi$ is continuously differentiable on $(0, \eta)$ with positive derivatives; 2) for all $x \in V$ satisfying $f(\bar{x}) < f(x) < f(\bar{x}) + \eta$, it holds that
(23) $\phi'(f(x) - f(\bar{x}))\, \mathrm{dist}(0, \partial f(x)) \ge 1$,
where $\partial f(x)$ denotes the limiting subdifferential of $f$ at $x$.
We comment that a wide class of functions enjoys the KŁ property; for example, any semialgebraic function is a KŁ function. For detailed discussions of the KŁ property we refer the readers to [32, 33].
Below we will use $\lambda_i(\cdot)$, $\lambda_{\max}(\cdot)$, $\lambda_{\min}(\cdot)$, and $\tilde{\lambda}_{\min}(\cdot)$ to denote the $i$th, the maximum, the minimum, and the smallest nonzero eigenvalue of a matrix, respectively.
The convergence of the GPDA to ss1 solutions is similar to Theorem 3.1 and Corollary 4.1 in [34]. Algorithmically, the main difference is that the algorithms analyzed in [34] do not linearize the quadratic penalty term, and they make use of the same penalty and proximal parameters. In this work, in order to show convergence to ss2, we need the freedom of tuning one of these parameters while fixing the other, so the two have to be chosen differently. However, in terms of analysis, there is no major difference between these versions. For completeness, we outline the key proof steps in the Appendix.
Claim 2.1
Suppose Assumptions [A1] – [A5] are satisfied. For appropriate choices of the parameters satisfying (67) given in the appendix, and starting from any feasible point, the GPDA converges to the set of ss1 solutions.
Further, if $f$ is a KŁ function, then the iterates converge globally to a unique point.
2.3 Convergence to ss2
One can view Claim 2.1 as a variation of known results. In contrast, in this section we present one of the main contributions of this work: we demonstrate that the GPDA can converge to solutions beyond ss1.
To this end, let us first rewrite the update step using its first-order optimality condition as follows
Therefore the iteration can be written as
A compact way to write the above iteration is
(24) 
where $I$ denotes the identity matrix and $0$ denotes the all-zero matrix (of appropriate sizes).
Next let us consider approximating the dynamics near a first-order stationary solution. Let us define
Claim 2.1 implies that when the parameters are chosen appropriately, the iterates converge to such a solution. Therefore, for any given accuracy there exists an iteration index such that the following holds
(25) 
Next let us approximate the gradients around the stationary solution:
(26) 
where in the last inequality we have defined
(27) 
From Assumption [A1] and (25) we have
Therefore we have
(28) 
Using the approximation (2.3), we obtain
(29) 
Plugging (29) into (2.3), the iteration (2.3) can be written as
(30) 
Then the above iteration can be compactly written as
(31) 
for some appropriately defined vectors and matrices which are given below
(32)  
(33) 
It is clear that the above sequence is bounded. As a direct result of Claim 2.1, we can show that every fixed point of the above iteration is an ss1 solution for problem (1).
Corollary 2.1
To proceed, we analyze the dynamics of the system (31). The following claim is a key result that characterizes the eigenvalues of the iteration matrix. We refer the readers to the appendix for the detailed proof.
Claim 2.2
Theorem 2.1
Suppose that Assumptions [A1]–[A5] hold true, and that the parameters are chosen as follows
(34) 
Suppose that the initial iterates are chosen randomly. Then, with probability one, the iterates generated by the GPDA converge to an ss2 solution (9).
Proof. We utilize the stable manifold theorem [35, 36]. We will verify the conditions given in Theorem 7 of [36] to show that the system (31) is not stable around strict saddle points.
Step 1. We show that the mapping defined in (2.1) is a diffeomorphism.
First, suppose there exist two points with the same image under the mapping. Using the definition of the mapping, and the fact that the matrix involved is invertible, we can equate the corresponding components. Combining these two results, we obtain
Then we have
This implies that
Suppose that the following is true
(35) 
Then we have , implying . This says that the mapping is injective.
To show that the mapping is surjective, we see that for a given tuple, the iterate is given by
where the right-hand side is some function of the given tuple. It is clear that this iterate is the unique solution to the following convex problem [with parameters satisfying (35)]:
Additionally, using the definition of the mapping in (2.1), we have that the Jacobian matrix for the mapping is given by
(36) 
Then it has been shown in Claim 2.2 that as long as the following is true
(37) 
the Jacobian matrix is invertible. By applying the inverse function theorem, we conclude that the inverse mapping is continuously differentiable.
Step 2. We show that at a strict saddle point, for the Jacobian matrix of the mapping evaluated at that point, the span of the eigenvectors corresponding to the eigenvalues of magnitude less than or equal to 1 is not the full space. This follows directly from Claim 2.2, which shows that the Jacobian has one eigenvalue that is strictly greater than 1.
Step 3. Combining the previous two steps, and utilizing Theorem 7 of [36], we conclude that with random initialization, the GPDA converges to second-order stationary solutions with probability one. Q.E.D.
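The escape mechanism used in Steps 2 and 3 can be illustrated on a toy strict saddle of our own (unrelated to the paper's iteration): for gradient descent on $f(x, y) = x^2 - y^2$, the linearized map at the saddle is $\mathrm{diag}(1 - 2a, 1 + 2a)$, whose second eigenvalue exceeds 1, so a randomly initialized trajectory leaves the saddle:

```python
import numpy as np

# Toy strict saddle f(x, y) = x^2 - y^2 with gradient step z <- z - a*grad f(z).
# The linearized map at the saddle is diag(1 - 2a, 1 + 2a): one eigenvalue
# exceeds 1, so the saddle's stable set is a measure-zero line.
rng = np.random.default_rng(0)
z = 0.1 * rng.standard_normal(2)   # random initialization near the saddle
a = 0.1
for _ in range(200):
    z = z - a * np.array([2.0 * z[0], -2.0 * z[1]])
print(abs(z[0]) < 1e-6)   # True: the stable direction contracts
print(abs(z[1]) > 1.0)    # True: the unstable direction expands (escape)
```

The stable manifold theorem formalizes this picture: the set of initializations attracted to the saddle has measure zero, which is exactly what Steps 1–3 establish for the GPDA dynamics.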
3 The Gradient ADMM Algorithm
In this section, we extend the argument of the previous section to an algorithm belonging to the class of methods called the alternating direction method of multipliers (ADMM). Although the main idea of the analysis extends that of the previous section, the presence of two blocks of primal variables instead of one significantly complicates the analysis.
Consider the following problem
(38) $\min_{x, y}\; f(x) + g(y)$, s.t. $Ax + By = b$,
where $f$ and $g$ are smooth functions. Clearly, the global consensus problem (3) can be formulated as the above two-block problem by identifying one block with the local variables and the other with the variable at the central controller.
For this problem, the first- and second-order necessary conditions are given by [cf. (9)]
(39)  
Similarly as before, we will refer to solutions satisfying the first line as ss1 solutions, and those that satisfy both as ss2 solutions. Accordingly, a strict saddle point is defined as a point that satisfies the following conditions
(40) 
Define the AL function as
The gradient ADMM (GADMM) algorithm that we propose is given below.
Algorithm 2. The gradient ADMM. At iteration 0, initialize the primal and dual variables. At each iteration, update the variables by:
We note that in the GADMM, the two primal steps perform gradient steps on the AL, instead of the exact minimization performed in the original convex version of ADMM [5, 37]. The reason is that direct minimization may not be possible: the nonconvexity of $f$ and $g$ makes the subproblems of minimizing the AL with respect to each primal block nonconvex as well. Note that gradient steps have been used in the primal updates of ADMM when dealing with convex problems, see [38], but those analyses do not extend to the nonconvex setting.
It is also worth noting that the key difference between Algorithms 2 and 1 is that, in the update step (41b) of Algorithm 2, the newly updated first block is used. If the previous iterate were used instead, then Algorithm 2 would be equivalent to Algorithm 1. There are also quite a few recent works applying ADMM-type methods to solve a number of nonconvex problems; see, e.g., [39, 40, 41] and the references therein. However, to the best of our knowledge, these algorithms do not take exactly the same form as Algorithm 2 described above, despite the fact that their analyses all appear to be quite similar (i.e., some potential function based on the AL is shown to decrease at each iteration). In particular, in [41], both primal subproblems are solved using a proximal point method; in [42], the first block is updated using a gradient step, while the second is updated via conventional exact minimization. Of course, none of these works analyzed the convergence of these methods to ss2 solutions.
3.1 Application to the global consensus problem
We discuss how Algorithm 2 can be applied to solve the global consensus problem (3). For this problem, the distributed nodes and the master node alternate their updates:
Clearly, for a fixed master variable, the distributed nodes are able to perform their computation completely in parallel.
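A minimal numerical sketch of this alternating pattern (with stepsizes we chose for illustration; the actual parameter conditions are those in (82)): the local block takes a gradient step on the AL, the master block then takes its own gradient step using the freshly updated local values, and the multiplier is updated last:

```python
import numpy as np

def gadmm(grad_f, grad_g, A, B, b, x0, y0, rho=1.0, beta=10.0, iters=5000):
    """Sketch of a two-block gradient ADMM: gradient steps on the AL in x
    and then y (using the new x), followed by a dual ascent step."""
    x, y, lam = x0.astype(float), y0.astype(float), np.zeros(A.shape[0])
    for _ in range(iters):
        r = A @ x + B @ y - b
        x = x - (grad_f(x) + A.T @ (lam + rho * r)) / beta
        r = A @ x + B @ y - b             # residual with the updated x
        y = y - (grad_g(y) + B.T @ (lam + rho * r)) / beta
        lam = lam + rho * (A @ x + B @ y - b)
    return x, y, lam

# Toy consensus instance (3): f(x) = 0.5||x - c||^2, g = 0, x_i = y for all i
c = np.array([1.0, 2.0, 3.0])
A, B = np.eye(3), -np.ones((3, 1))
x, y, lam = gadmm(lambda x: x - c, lambda y: np.zeros(1),
                  A, B, np.zeros(3), np.zeros(3), np.zeros(1))
print(np.allclose(x, y, atol=1e-3))   # True: the local copies agree with y
```

On this convex instance the local copies and the master variable all converge to the average of the entries of `c`, the consensus minimizer.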
3.2 Convergence to firstorder stationary solutions
First we make the following assumptions.

[B1] The functions $f$ and $g$ are smooth, and both have Lipschitz continuous gradients and Hessians, with the corresponding constants.
[B2] $f$ and $g$ are lower bounded. Without loss of generality, assume that their lower bounds are zero.
[B3] The constraint is feasible; the constraint matrix is not full rank.
[B4] The objective is a coercive function.
[B5] The objective satisfies the KŁ property given in [A5].
Based on the above assumptions, the convergence of Algorithm 2 to ss1 solutions can be shown following a similar line of argument as in [39, 40, 41, 42]. However, since the exact form of this algorithm has not appeared before, for completeness we provide the proof outline in the appendix.
Claim 3.1
Suppose Assumptions [B1] – [B5] are satisfied. For appropriate choices of the parameters [see (82) in the Appendix for the precise expression], and starting from any point, Algorithm 2 converges to the set of ss1 points. Further, if the objective is a KŁ function, then Algorithm 2 converges globally to a unique point.
3.3 Convergence to ss2 solutions
The optimality conditions for the primal updates are given as
These conditions, combined with the update rule of the dual variable, give the following compact form of the algorithm
To write the iterations compactly as a linear dynamical system, define
Next we approximate the iteration around a stationary solution. Then, similarly to the derivation of (2.3), we can write
where we have defined
(43a)  
(43b)  
(43c) 
with the following