Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solutions for Nonconvex Distributed Optimization

In this work, we study two first-order primal-dual based algorithms, the Gradient Primal-Dual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained non-convex optimization problems. We show that with random initialization of the primal and dual variables, both algorithms are able to compute second-order stationary solutions (ss2) with probability one. This is the first result showing that a primal-dual algorithm is capable of finding ss2 when using only first-order information; it also extends the existing results for first-order, but primal-only, algorithms. An important implication of our result is that it also gives rise to the first global convergence result to ss2 for two classes of unconstrained distributed non-convex learning problems over multi-agent networks.

1 Introduction

In this work, we consider the following linearly constrained optimization problem:

(1)   $\min_{x \in \mathbb{R}^N}\; f(x), \quad \text{s.t. } Ax = b,$

where $f: \mathbb{R}^N \to \mathbb{R}$ is a smooth function (possibly non-convex); $A \in \mathbb{R}^{M \times N}$ is not full column rank; $b \in \mathbb{R}^M$ is a known vector.

An important application of problem (1) is non-convex distributed optimization and learning – a problem that has gained considerable attention recently, and has found applications in training neural networks [1], distributed information processing and machine learning [2, 3], and distributed signal processing [4]. In distributed optimization and learning, the common setup is that a network consisting of $N$ distributed agents collectively optimizes the following problem

(2)   $\min_{y \in \mathbb{R}}\; f(y) := \sum_{i=1}^{N} f_i(y) + h(y),$

where $f_i$ is a function local to agent $i$ (note, for notational simplicity we assume that the variable $y$ is a scalar); $h$ represents some smooth regularization function known to all agents. Below we present two problem formulations based on different topologies and application scenarios.

Scenario 1: The Global Consensus. Suppose that all the agents are connected to a single central node. The distributed agents can communicate with this central controller, but they are not able to directly communicate among themselves. In this case problem (2) can be equivalently formulated into the following global consensus problem [5, 3]

(3)   $\min_{\{x_i\},\, y}\; \sum_{i=1}^{N} f_i(x_i) + h(y), \quad \text{s.t. } x_i = y, \; i = 1, \dots, N.$

The setting of the above global consensus problem is popular in applications such as parallel computing, in which the central controller can orchestrate the activity of all agents; see [6, 7]. To cast the problem into the form of (1), define

(4)   $A := [\, I_N, \; -\mathbf{1}_N \,],$

where $I_N$ is the $N \times N$ identity matrix and $\mathbf{1}_N$ is the all-one vector, so that the constraint in (3) becomes $A\,[x_1; \dots; x_N; y] = 0$.

Scenario 2: Distributed Optimization Over Networks. Suppose that there is no central controller, and the agents are connected by a network defined by an undirected graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\}$, with $N$ vertices and $E$ edges. Each agent can only communicate with its immediate neighbors, and it can access one component function $f_i$. This problem has wide applications ranging from distributed communication networking [8], distributed and parallel machine learning [2, 9, 10], to distributed signal processing [11].

Define the node-edge incidence matrix $A \in \mathbb{R}^{E \times N}$ as follows: if edge $e \in \mathcal{E}$ connects vertices $i$ and $j$ with $i > j$, then $A_{ei} = 1$, $A_{ej} = -1$, and $A_{ek} = 0$ for all $k \ne i, j$. Introduce local variables $x := [x_1, \dots, x_N]^T$, one for each agent. Then, as long as the graph is connected, the following formulation is equivalent to the global consensus problem, and it is precisely in the form of problem (1):

(5)   $\min_{x \in \mathbb{R}^N}\; \sum_{i=1}^{N} \Big( f_i(x_i) + \frac{1}{N} h(x_i) \Big), \quad \text{s.t. } Ax = 0.$
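
To make these two constructions concrete, below is a small NumPy sketch (our illustration, not code from the paper; the 4-node path graph and all function names are assumptions). It builds the consensus matrix from (4) and the node-edge incidence matrix used in (5), and checks that $A^T A$ is the signed graph Laplacian referenced later in Section 2.1.

```python
# Illustrative sketch (not from the paper): constraint matrices for (4) and (5).
import numpy as np

def consensus_matrix(N):
    """A = [I_N, -1_N] from (4): stacking x = [x_1; ...; x_N; y],
    the constraint Ax = 0 encodes x_i = y for all i."""
    return np.hstack([np.eye(N), -np.ones((N, 1))])

def incidence_matrix(N, edges):
    """Node-edge incidence matrix: for edge e = (i, j) with i > j,
    row e has +1 in column i, -1 in column j, and zeros elsewhere."""
    A = np.zeros((len(edges), N))
    for e, (i, j) in enumerate(edges):
        hi, lo = max(i, j), min(i, j)
        A[e, hi], A[e, lo] = 1.0, -1.0
    return A

# Example: a 4-node path graph 0-1-2-3 (an assumed toy topology).
A = incidence_matrix(4, [(0, 1), (1, 2), (2, 3)])
L = A.T @ A                            # the signed graph Laplacian
assert np.allclose(L.sum(axis=1), 0)   # Laplacian rows sum to zero
print(np.diag(L))                      # node degrees: [1. 2. 2. 1.]
```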

1.1 The objective of this work

The research question we attempt to address in this work is:


(Q)  Can we design primal-dual algorithms capable of computing second-order stationary solutions for (1)?

Let us first analyze the first-order stationary (ss1) and second-order stationary (ss2) solutions for problem (1). For a general smooth nonlinear problem of the following form

(6)   $\min_x\; f(x), \quad \text{s.t. } h_j(x) = 0, \; j = 1, \dots, M,$

the first-order necessary condition is given as

(7)   $\nabla f(x^*) + \sum_{j=1}^{M} \lambda_j^* \nabla h_j(x^*) = 0, \quad h_j(x^*) = 0, \; \forall\, j.$

The second-order necessary condition is given below [see Proposition 3.1.1 in [12]]. Suppose $x^*$ is regular; then

(8)   $y^T \Big( \nabla^2 f(x^*) + \sum_{j=1}^{M} \lambda_j^* \nabla^2 h_j(x^*) \Big) y \ge 0, \quad \forall\, y \text{ such that } \nabla h_j(x^*)^T y = 0, \; \forall\, j.$

Applying the above result to our problem, we obtain the following first- and second-order necessary conditions for problem (1). (Note that for linear constraints no further regularity is needed for the existence of multipliers.)

(9a)   $\nabla f(x^*) + A^T \lambda^* = 0, \quad Ax^* = b;$
(9b)   $y^T \nabla^2 f(x^*)\, y \ge 0, \quad \forall\, y \text{ such that } Ay = 0.$

In other words, the second-order necessary condition is equivalent to the condition that $\nabla^2 f(x^*)$ is positive semi-definite in the null space of $A$. Similarly, the sufficient condition for a strict local minimizer is given by

(10)   $\nabla f(x^*) + A^T \lambda^* = 0, \quad Ax^* = b, \quad y^T \nabla^2 f(x^*)\, y > 0, \; \forall\, y \ne 0 \text{ such that } Ay = 0.$

To proceed, we need the following claim [see Lemma 3.2.1 in [12]].

Claim 1.1

Let $P$ and $Q$ be two symmetric matrices. Assume that $Q$ is positive semidefinite and $P$ is positive definite on the null space of $Q$, that is, $y^T P y > 0$ for all $y \ne 0$ with $y^T Q y = 0$. Then there exists a scalar $\bar{\gamma} \ge 0$ such that

(11)   $P + \gamma Q \succ 0, \quad \forall\, \gamma \ge \bar{\gamma}.$

Conversely, if there exists a scalar $\gamma$ such that (11) is true, then $y^T P y > 0$ for all $y \ne 0$ with $y^T Q y = 0$.

By Claim 1.1, the sufficient condition (10) can be equivalently written as:

(12)   $\nabla f(x^*) + A^T \lambda^* = 0, \quad Ax^* = b;$
(13)   $\nabla^2 f(x^*) + \gamma A^T A \succ 0 \quad \text{for some } \gamma > 0.$
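
The following small numeric check (our example, not the paper's) illustrates Claim 1.1 and condition (13): a matrix $P$ that is indefinite overall but positive definite on the null space of a PSD matrix $Q$ becomes positive definite once a large enough multiple of $Q$ is added.

```python
# Toy illustration of Claim 1.1: P + gamma * Q becomes PD for large gamma.
import numpy as np

P = np.diag([-1.0, 2.0])   # indefinite, but y^T P y = 2 t^2 > 0 on null(Q)
Q = np.diag([ 1.0, 0.0])   # PSD; null(Q) is spanned by (0, 1)

for gamma in [0.0, 0.5, 1.0, 2.0]:
    lam_min = np.linalg.eigvalsh(P + gamma * Q).min()
    print(f"gamma = {gamma:3.1f}: lambda_min(P + gamma*Q) = {lam_min:+.2f}")
# lambda_min turns positive once gamma exceeds gamma_bar = 1, matching (11).
```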

It is worth mentioning that checking both of the above sufficient and necessary conditions can be done in polynomial time; however, when there are inequality constraints, checking second-order conditions can be NP-hard; see [13]. In the following, we will refer to a solution satisfying condition (9a) as an ss1 solution, and a solution satisfying both (9a) and (9b) as an ss2 solution. According to the above definition, we define a strict saddle point to be a solution $(x^*, \lambda^*)$ such that, for some $\delta > 0$,

(14)   $\nabla f(x^*) + A^T \lambda^* = 0, \quad Ax^* = b, \quad \min_{\|y\| = 1,\; Ay = 0}\; y^T \nabla^2 f(x^*)\, y \le -\delta.$

It is easy to verify using Claim 1.1 that the above condition implies that, for the same $\delta$, the following is true

(15)   $\lambda_{\min}\big( \nabla^2 f(x^*) + \gamma A^T A \big) \le -\delta, \quad \forall\, \gamma \ge 0,$

where $\lambda_{\min}(\cdot)$ denotes the smallest eigenvalue of a matrix. Clearly, if an ss1 solution $(x^*, \lambda^*)$ does not satisfy (14), i.e.,

(16)   $\min_{\|y\| = 1,\; Ay = 0}\; y^T \nabla^2 f(x^*)\, y \ge 0,$

then (9b) is true. In this work, we will develop primal-dual algorithms that avoid converging to the strict saddles defined in (14).

1.2 Existing literature

Many recent works have focused on designing algorithms with guaranteed convergence to local minima/ss2 for non-convex unconstrained problems. These include second-order methods such as the trust region method [14], the cubic regularized Newton's method [15], and a hybrid of first-order and second-order methods [16]. When only gradient information is available, it has been shown that with random initialization, gradient descent (GD) converges to ss2 for unconstrained smooth problems with probability one [17]. Recently, a perturbed version of GD which occasionally adds noise to the iterates has been proposed [18]; such a method converges to ss2 at a faster rate than ordinary gradient descent with random initialization. When manifold constraints are present, it is shown in [19] that manifold gradient descent converges to ss2, provided that the iterates remain feasible at every iteration (ensured by performing a potentially expensive second-order retraction operation). However, there has been no work analyzing whether classical primal-dual gradient-type methods based on Lagrangian relaxation are also capable of computing ss2.

The consensus problems (2) and (5) have been studied extensively in the literature when the objective functions are all convex; see for example [20, 21, 22, 23]. Primal methods such as the distributed subgradient method [20] and the EXTRA method [22], as well as primal-dual based methods such as the Alternating Direction Method of Multipliers (ADMM) [5, 24, 25], have been studied. On the contrary, only recently has there been work addressing the more challenging setting without assuming convexity of the $f_i$'s; see recent developments in [26, 3, 4, 27]. In particular, reference [3] develops non-convex ADMM based methods (with global sublinear convergence rate) for solving the global consensus problem (3). Reference [27] proposes a primal-dual based method for unconstrained non-convex distributed optimization over a connected network (without a central controller), and derives the first global convergence rate for distributed non-convex optimization. In [4] the authors utilize a certain gradient tracking idea to solve a constrained nonsmooth distributed problem over possibly time-varying networks. It is worth noting that the distributed algorithms proposed in all these works converge to ss1 only. There has been no distributed scheme that provably converges to ss2 for smooth non-convex problems of the form (2).

2 The Gradient Primal-Dual Algorithm

In this section, we introduce the gradient primal-dual algorithm (GPDA) for solving the non-convex problem (1). Let us introduce the augmented Lagrangian (AL) as

(17)   $L(x, \lambda) = f(x) + \langle \lambda,\; Ax - b \rangle + \frac{\rho}{2}\|Ax - b\|^2,$

where $\lambda \in \mathbb{R}^M$ is the dual variable and $\rho > 0$ is a penalty parameter. The steps of the GPDA are described in Algorithm 1 below.

Each iteration of the GPDA performs a gradient descent step on the AL (with stepsize $1/\beta$), followed by one step of approximate dual gradient ascent (with stepsize $\rho$). The GPDA is closely related to the classical Uzawa primal-dual method [28], which has been utilized to solve convex saddle point problems and linearly constrained convex problems [29]. It is also related to the proximal method of multipliers (Prox-MM) first developed by Rockafellar in [30], in which a proximal term is added to the augmented Lagrangian in order to make it strongly convex in each iteration. The latter method has also been applied, for example, to solving certain large-scale linear programs; see [31]. However, the theoretical results derived for Prox-MM in [30, 31] are developed only for convex problems. Further, such an algorithm requires that the proximal Lagrangian be optimized with increasing accuracy as the algorithm progresses. Finally, we note that both steps (18a) and (18b) are decomposable over the variables, and therefore they are easy to implement in a distributed manner (as will be explained shortly).

Algorithm 1. The gradient primal-dual algorithm. At iteration $0$, initialize $x^0$ and $\lambda^0$. At each iteration $r + 1$, update the variables by:

(18a)   $x^{r+1} = x^r - \frac{1}{\beta}\big( \nabla f(x^r) + A^T \lambda^r + \rho A^T (Ax^r - b) \big);$
(18b)   $\lambda^{r+1} = \lambda^r + \rho\, \big( Ax^{r+1} - b \big).$
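
For concreteness, here is a minimal NumPy sketch of the updates (18); the function signature, the fixed iteration count, and the parameter handling are our assumptions rather than the paper's specification.

```python
# A minimal sketch of Algorithm 1 (GPDA) under the update rules (18).
import numpy as np

def gpda(grad_f, A, b, x0, lam0, beta, rho, iters=1000):
    """Gradient step on the augmented Lagrangian (17) with stepsize 1/beta,
    followed by approximate dual ascent with stepsize rho."""
    x, lam = x0.copy(), lam0.copy()
    for _ in range(iters):
        # (18a): primal gradient step; the penalty term is linearized too
        x = x - (grad_f(x) + A.T @ lam + rho * A.T @ (A @ x - b)) / beta
        # (18b): dual ascent on the constraint residual
        lam = lam + rho * (A @ x - b)
    return x, lam
```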

2.1 Application to the distributed optimization problem

To see how the GPDA can be specialized to the problem of distributed optimization over a network (5), let us begin by writing the optimality condition of (18a). We have

(19)   $\nabla f(x^r) + A^T \lambda^r + \rho A^T (Ax^r - b) + \beta (x^{r+1} - x^r) = 0.$

Subtracting from (19) its counterpart at iteration $r$, we obtain

$\nabla f(x^r) - \nabla f(x^{r-1}) + A^T (\lambda^r - \lambda^{r-1}) + \rho A^T A\, (x^r - x^{r-1}) + \beta \big( x^{r+1} - 2x^r + x^{r-1} \big) = 0,$

where, by (18b), $\lambda^r - \lambda^{r-1} = \rho (Ax^r - b)$. Rearranging, and using the facts that $A^T A$ is the signed graph Laplacian matrix and that $b = 0$ in (5), we obtain

(20)   $x^{r+1} = 2x^r - x^{r-1} - \frac{1}{\beta} \Big( \nabla f(x^r) - \nabla f(x^{r-1}) + \rho A^T A\, \big( 2x^r - x^{r-1} \big) \Big).$

Consider problem (5) (for simplicity, assume that $h \equiv 0$). The above iteration can be implemented in a distributed manner, where each agent $i$ performs

$x_i^{r+1} = 2x_i^r - x_i^{r-1} - \frac{1}{\beta} \Big( \nabla f_i(x_i^r) - \nabla f_i(x_i^{r-1}) + \rho\, d_i \big( 2x_i^r - x_i^{r-1} \big) - \rho \sum_{j \in N_i} \big( 2x_j^r - x_j^{r-1} \big) \Big),$

where $N_i$ is the set of neighbors of node $i$ and $d_i$ is the degree of node $i$. Clearly, to implement this iteration each node only needs information from the past two iterations about its immediate neighbors.
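
A hedged sketch of this per-agent update (our rendering of the iteration above; the names `nbrs`, `deg`, and the gradient caches are illustrative assumptions) is as follows.

```python
# Per-agent form of iteration (20) for problem (5) with h = 0 (a sketch).
def local_step(i, x_now, x_prev, g_now, g_prev, nbrs, deg, beta, rho):
    """Update of agent i using only its own and its neighbors' last two
    iterates; g_now/g_prev cache grad f_j at the last two iterates."""
    z = [2 * xn - xp for xn, xp in zip(x_now, x_prev)]  # extrapolated iterates
    lap_i = deg[i] * z[i] - sum(z[j] for j in nbrs[i])  # i-th entry of A^T A z
    return z[i] - (g_now[i] - g_prev[i] + rho * lap_i) / beta
```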

2.2 Convergence to ss1 solutions

We first state our main assumptions.

  • [A1] The function $f$ is smooth and has Lipschitz continuous gradient, as well as Lipschitz continuous Hessian:

    (21)   $\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|, \quad \forall\, x, y \in \mathbb{R}^N;$
    (22)   $\|\nabla^2 f(x) - \nabla^2 f(y)\| \le M \|x - y\|, \quad \forall\, x, y \in \mathbb{R}^N.$

  • [A2] The function $f$ is lower bounded over $\mathbb{R}^N$. Without loss of generality, assume that $f \ge 0$.

  • [A3] The constraint $Ax = b$ is feasible over $x \in \mathbb{R}^N$. Further, $A$ is not full column rank.

  • [A4] The function $f$ is coercive.

  • [A5] The function $f$ is proper and it satisfies the Kurdyka-Łojasiewicz (KŁ) property: at any point $\bar{x}$, there exist $\eta \in (0, \infty]$, a neighborhood $U$ of $\bar{x}$, and a continuous concave function $\phi: [0, \eta) \to \mathbb{R}_+$ such that: 1) $\phi(0) = 0$ and $\phi$ is continuously differentiable on $(0, \eta)$ with positive derivatives; 2) for all $x \in U$ satisfying $f(\bar{x}) < f(x) < f(\bar{x}) + \eta$, it holds that

    (23)   $\phi'\big( f(x) - f(\bar{x}) \big)\; \mathrm{dist}\big( 0,\, \partial f(x) \big) \ge 1,$

    where $\partial f$ denotes the limiting subdifferential of $f$; see [32, 33] for its precise definition.
We comment that a wide class of functions enjoys the KŁ property; for example, every semi-algebraic function is a KŁ function. For detailed discussions of the KŁ property we refer the readers to [32, 33].

Below we will use $\lambda_i(\cdot)$, $\lambda_{\max}(\cdot)$, $\lambda_{\min}(\cdot)$, and $\tilde{\lambda}_{\min}(\cdot)$ to denote the $i$th, the maximum, the minimum, and the smallest non-zero eigenvalue of a matrix, respectively.

The convergence of GPDA to ss1 solutions is similar to Theorem 3.1 and Corollary 4.1 in [34]. Algorithmically, the main difference is that the algorithms analyzed in [34] do not linearize the penalty term $\frac{\rho}{2}\|Ax - b\|^2$, and they make use of the same penalty and proximal parameters, that is, $\beta = \rho$. In this work, in order to show convergence to ss2, we need the freedom to tune $\beta$ and $\rho$ separately, and therefore they have to be chosen differently. In terms of analysis, however, there is no major difference between these versions. For completeness, we outline the key proof steps in the Appendix.

Claim 2.1

Suppose Assumptions [A1] – [A5] are satisfied. For appropriate choices of $\beta$ and $\rho$ satisfying (67) given in the appendix, and starting from any feasible point $(x^0, \lambda^0)$, the GPDA converges to the set of ss1 solutions.

Further, if $f$ is a KŁ function, then $\{(x^r, \lambda^r)\}$ converges globally to a unique point $(x^*, \lambda^*)$.

2.3 Convergence to ss2

One can view Claim 2.1 as a variation of known results. In contrast, in this section we present one of the main contributions of this work, which demonstrates that the GPDA converges to solutions beyond ss1.

To this end, first let us rewrite the update step (18a) using its first-order optimality condition as follows:

$\nabla f(x^r) + A^T \lambda^r + \rho A^T (Ax^r - b) + \beta (x^{r+1} - x^r) = 0.$

Therefore, subtracting the same condition at iteration $r$ and using $\lambda^r - \lambda^{r-1} = \rho (Ax^r - b)$ from (18b), the iteration can be written as

$\beta \big( x^{r+1} - 2x^r + x^{r-1} \big) + \nabla f(x^r) - \nabla f(x^{r-1}) + \rho A^T A\, \big( 2x^r - x^{r-1} \big) - \rho A^T b = 0.$

The compact way to write the above iteration is

(24)   $\begin{bmatrix} x^{r+1} \\ x^r \end{bmatrix} = \begin{bmatrix} 2\big( I_N - \frac{\rho}{\beta} A^T A \big) & -\big( I_N - \frac{\rho}{\beta} A^T A \big) \\ I_N & 0_N \end{bmatrix} \begin{bmatrix} x^r \\ x^{r-1} \end{bmatrix} - \frac{1}{\beta} \begin{bmatrix} \nabla f(x^r) - \nabla f(x^{r-1}) \\ 0 \end{bmatrix} + \frac{\rho}{\beta} \begin{bmatrix} A^T b \\ 0 \end{bmatrix},$

where $I_N$ denotes the $N$-by-$N$ identity matrix and $0_N$ denotes the $N$-by-$N$ all-zero matrix.

Next let us consider approximating the iteration near a first-order stationary solution $x^*$. Let us define

$z^r := \begin{bmatrix} x^r \\ x^{r-1} \end{bmatrix}, \quad z^* := \begin{bmatrix} x^* \\ x^* \end{bmatrix}.$

Claim 2.1 implies that when $(\beta, \rho)$ are chosen appropriately, $z^r \to z^*$. Therefore, for any given $\epsilon > 0$ there exists an iteration index $R(\epsilon)$ such that the following holds:

(25)   $\|x^r - x^*\| \le \epsilon, \quad \forall\, r \ge R(\epsilon).$

Next let us approximate the gradients around $x^*$:

(26)   $\nabla f(x^r) - \nabla f(x^{r-1}) = \int_0^1 \nabla^2 f\big( x^{r-1} + t\,(x^r - x^{r-1}) \big)\, \big( x^r - x^{r-1} \big)\, dt = \big( \nabla^2 f(x^*) + \Delta^r \big) \big( x^r - x^{r-1} \big),$

where in the last equality we have defined

(27)   $\Delta^r := \int_0^1 \Big( \nabla^2 f\big( x^{r-1} + t\,(x^r - x^{r-1}) \big) - \nabla^2 f(x^*) \Big)\, dt.$

From Assumption [A1] and (25) we have

$\Big\| \nabla^2 f\big( x^{r-1} + t\,(x^r - x^{r-1}) \big) - \nabla^2 f(x^*) \Big\| \le M \big\| x^{r-1} + t\,(x^r - x^{r-1}) - x^* \big\| \le M \epsilon, \quad \forall\, t \in [0, 1].$

Therefore we have

(28)   $\|\Delta^r\| \le M \epsilon, \quad \forall\, r \ge R(\epsilon) + 1.$

Using the approximation (26), we obtain

(29)   $\frac{1}{\beta} \Big( \nabla f(x^r) - \nabla f(x^{r-1}) \Big) = \frac{1}{\beta} \big( \nabla^2 f(x^*) + \Delta^r \big) \big( x^r - x^{r-1} \big).$

Plugging (29) into (24), the iteration can be written as

(30)   $x^{r+1} = 2x^r - x^{r-1} - \frac{1}{\beta} \big( \nabla^2 f(x^*) + \Delta^r \big) \big( x^r - x^{r-1} \big) - \frac{\rho}{\beta} A^T A\, \big( 2x^r - x^{r-1} \big) + \frac{\rho}{\beta} A^T b.$

Then the above iteration can be compactly written as

(31)   $z^{r+1} = M^r z^r + c,$

for an appropriately defined vector and matrix, given below:

(32)   $M^r := \begin{bmatrix} 2 I_N - \frac{1}{\beta} \big( \nabla^2 f(x^*) + \Delta^r + 2\rho A^T A \big) & -I_N + \frac{1}{\beta} \big( \nabla^2 f(x^*) + \Delta^r + \rho A^T A \big) \\ I_N & 0_N \end{bmatrix};$
(33)   $c := \frac{\rho}{\beta} \begin{bmatrix} A^T b \\ 0 \end{bmatrix}.$

It is clear that $\{M^r\}$ is a bounded sequence. As a direct result of Claim 2.1, we can show that every fixed point of the above iteration is an ss1 solution for problem (1).

Corollary 2.1

Suppose that Assumptions [A1]–[A5] are satisfied, and the parameters $\beta$ and $\rho$ are chosen according to (67). Then every fixed point of the mapping $g: (x^r, x^{r-1}) \mapsto (x^{r+1}, x^r)$ defined by the iteration (24) is a first-order stationary solution for problem (1).

To proceed, we analyze the dynamics of the system (31). The following claim is a key result that characterizes the eigenvalues of the matrix $M^r$. We refer the readers to the appendix for the detailed proof.

Claim 2.2

Suppose Assumptions [A1] – [A5] hold, and that $\beta$ and $\rho$ are chosen appropriately [cf. (34) below].

Let $(x^*, \lambda^*)$ be an ss1 solution satisfying (7) which is a strict saddle in the sense of (14). Let $\lambda_i(M^r)$ be the $i$th eigenvalue of the matrix $M^r$. Then $M^r$ is invertible, and there exists a real scalar $\bar{\delta} > 0$, independent of the iteration index $r$, such that the following holds:

$\max_i\, \big| \lambda_i(M^r) \big| \ge 1 + \bar{\delta}.$
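
A toy instance (our own, for illustration only) makes Claim 2.2 tangible: take two agents coupled by the consensus constraint $x_1 = x_2$ and a cost with strictly negative curvature along the consensus direction; then the matrix in (32) (evaluated with $\Delta^r = 0$) has spectral radius exceeding one.

```python
# Checking Claim 2.2 numerically on an assumed toy strict saddle.
import numpy as np

A = np.array([[1.0, -1.0]])    # null(A) = span{(1, 1)}: consensus direction
H = -0.5 * np.eye(2)           # Hessian at x* = 0; strict saddle on null(A)
beta, rho = 10.0, 1.0          # chosen so that beta > L + rho*lmax(A^T A)

AtA, I, Z = A.T @ A, np.eye(2), np.zeros((2, 2))
M = np.vstack([
    np.hstack([2 * I - (H + 2 * rho * AtA) / beta,
               -I + (H + rho * AtA) / beta]),
    np.hstack([I, Z]),
])
print(np.abs(np.linalg.eigvals(M)).max())   # ~1.05 > 1: the saddle repels
```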

Theorem 2.1

Suppose that Assumptions [A1]–[A5] hold true, and that the parameters $\beta$ and $\rho$ are chosen according to

(34)

Suppose that $x^0$ and $\lambda^0$ are initialized randomly. Then with probability one, the iterates generated by the GPDA converge to an ss2 solution satisfying (9).

Proof. We utilize the stable manifold theorem [35, 36]. We will verify the conditions given in Theorem 7 of [36] to show that the system (31) is not stable around strict saddle points.

Step 1. We will show that the mapping $g$ defined by the iteration (24) is a diffeomorphism.

First, suppose there exist $(x^1, \tilde{x}^1)$ and $(x^2, \tilde{x}^2)$ such that $g(x^1, \tilde{x}^1) = g(x^2, \tilde{x}^2)$. Using the definition of $g$, the second block component immediately gives $x^1 = x^2$. Using the above result and equating the first block components, we obtain

$\beta \tilde{x}^1 - \nabla f(\tilde{x}^1) - \rho A^T A\, \tilde{x}^1 = \beta \tilde{x}^2 - \nabla f(\tilde{x}^2) - \rho A^T A\, \tilde{x}^2.$

Then we have

$\beta \big( \tilde{x}^1 - \tilde{x}^2 \big) = \nabla f(\tilde{x}^1) - \nabla f(\tilde{x}^2) + \rho A^T A\, \big( \tilde{x}^1 - \tilde{x}^2 \big).$

This implies that

$\beta\, \|\tilde{x}^1 - \tilde{x}^2\| \le \big( L + \rho\, \lambda_{\max}(A^T A) \big)\, \|\tilde{x}^1 - \tilde{x}^2\|.$

Suppose that the following is true:

(35)   $\beta > L + \rho\, \lambda_{\max}(A^T A).$

Then we have $\tilde{x}^1 = \tilde{x}^2$, implying $(x^1, \tilde{x}^1) = (x^2, \tilde{x}^2)$. This says that the mapping $g$ is injective.

To show that the mapping is surjective, we see that for a given tuple $(x^+, x)$, the pre-image $\tilde{x}$ is given by the solution of

$\beta \tilde{x} - \nabla f(\tilde{x}) - \rho A^T A\, \tilde{x} = v,$

where $v$ is some function of $(x^+, x)$. It is clear that $\tilde{x}$ is the unique solution to the following convex problem [with $\beta$ satisfying (35)]:

$\min_{u}\; \frac{\beta}{2}\|u\|^2 - f(u) - \frac{\rho}{2}\|Au\|^2 - \langle v,\, u \rangle.$

Additionally, using the definition of the mapping $g$, we have that the Jacobian matrix for the mapping $g$ is given by

(36)   $Dg(x, \tilde{x}) = \begin{bmatrix} 2 I_N - \frac{1}{\beta} \big( \nabla^2 f(x) + 2\rho A^T A \big) & -I_N + \frac{1}{\beta} \big( \nabla^2 f(\tilde{x}) + \rho A^T A \big) \\ I_N & 0_N \end{bmatrix}.$

Then it has been shown in Claim 2.2 that as long as the following is true

(37)   $\beta > L + \rho\, \lambda_{\max}(A^T A),$

the Jacobian matrix is invertible. By applying the inverse function theorem, $g^{-1}$ is continuously differentiable.

Step 2. We show that at a strict saddle point $(x^*, \lambda^*)$, for the Jacobian matrix $Dg$ evaluated at $(x^*, x^*)$, the span of the eigenvectors corresponding to the eigenvalues of magnitude less than or equal to 1 is not the full space. This is easily done since, according to Claim 2.2, $Dg(x^*, x^*)$ has one eigenvalue whose magnitude is strictly greater than 1.

Step 3. Combining the previous two steps, and by utilizing Theorem 7 of [36], we conclude that with random initialization, the GPDA converges to second-order stationary solutions with probability one. Q.E.D.
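
As a quick sanity experiment (our construction, not from the paper), one can run the GPDA with random initialization on a two-agent consensus problem with the nonconvex local costs $f_i(x_i) = (x_i^2 - 1)^2/4$; the origin is then a strict saddle, and the iterates land on one of the second-order stationary points instead.

```python
# Random initialization steers GPDA away from the strict saddle at 0 (toy).
import numpy as np

rng = np.random.default_rng(0)
A, b = np.array([[1.0, -1.0]]), np.zeros(1)   # two-agent consensus, b = 0
grad_f = lambda x: x * (x**2 - 1)             # f(x) = sum_i (x_i^2 - 1)^2 / 4
beta, rho = 10.0, 1.0

x, lam = rng.standard_normal(2), rng.standard_normal(1)
for _ in range(20000):
    x = x - (grad_f(x) + A.T @ lam + rho * A.T @ (A @ x - b)) / beta
    lam = lam + rho * (A @ x - b)
print(x)   # approx (1, 1) or (-1, -1): an ss2 solution, not the saddle (0, 0)
```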

3 The Gradient ADMM Algorithm

In this section, we extend the arguments of the previous section to an algorithm belonging to the class of methods called the alternating direction method of multipliers (ADMM). Although the main idea of the analysis extends that of the previous section, the presence of two blocks of primal variables instead of one significantly complicates the analysis.

Consider the following problem

(38)   $\min_{x, y}\; f(x) + g(y), \quad \text{s.t. } Ax + By = b,$

where $f: \mathbb{R}^{N_x} \to \mathbb{R}$ and $g: \mathbb{R}^{N_y} \to \mathbb{R}$ are smooth functions; $A \in \mathbb{R}^{M \times N_x}$, $B \in \mathbb{R}^{M \times N_y}$, and $b \in \mathbb{R}^M$. Clearly the global consensus problem (3) can be formulated into the above two-block problem, with the following identification: $x := [x_1; \dots; x_N]$, $y := y$, $f(x) := \sum_{i=1}^{N} f_i(x_i)$, $g(y) := h(y)$, $A := I_N$, $B := -\mathbf{1}_N$, $b := 0$.

For this problem, the first- and second-order necessary conditions are given by [cf. (9)]

(39)   $\nabla f(x^*) + A^T \lambda^* = 0, \quad \nabla g(y^*) + B^T \lambda^* = 0, \quad Ax^* + By^* = b;$
$\qquad\; w^T \begin{bmatrix} \nabla^2 f(x^*) & 0 \\ 0 & \nabla^2 g(y^*) \end{bmatrix} w \ge 0, \quad \forall\, w \text{ such that } [A,\, B]\, w = 0.$

Similarly as before, we will refer to solutions that satisfy the first line as ss1 solutions, and to those that satisfy both lines as ss2 solutions. Therefore, a strict saddle point is defined as a point that satisfies the first line of (39) and, for some $\delta > 0$, the following condition

(40)   $\min_{\|w\| = 1,\; [A,\, B]\, w = 0}\; w^T \begin{bmatrix} \nabla^2 f(x^*) & 0 \\ 0 & \nabla^2 g(y^*) \end{bmatrix} w \le -\delta.$

Define the AL function as

$L(x, y, \lambda) = f(x) + g(y) + \langle \lambda,\; Ax + By - b \rangle + \frac{\rho}{2}\|Ax + By - b\|^2.$

The gradient ADMM (GADMM) algorithm that we propose is given below.

Algorithm 2. The gradient ADMM. At iteration $0$, initialize $x^0$, $y^0$ and $\lambda^0$. At each iteration $r + 1$, update the variables by:

(41a)   $x^{r+1} = x^r - \frac{1}{\beta} \big( \nabla f(x^r) + A^T \lambda^r + \rho A^T (Ax^r + By^r - b) \big);$
(41b)   $y^{r+1} = y^r - \frac{1}{\beta} \big( \nabla g(y^r) + B^T \lambda^r + \rho B^T (Ax^{r+1} + By^r - b) \big);$
(41c)   $\lambda^{r+1} = \lambda^r + \rho\, \big( Ax^{r+1} + By^{r+1} - b \big).$

We note that in the GADMM, the $x$ and $y$ steps perform gradient steps to optimize the AL, instead of performing the exact minimization done by the original convex version of ADMM [5, 37]. The reason is that direct minimization may not be possible here, because the non-convexity of $f$ and $g$ makes the subproblems of minimizing the AL w.r.t. $x$ and $y$ non-convex as well. Note that gradient steps have been used in the primal updates of ADMM when dealing with convex problems, see [38], but those analyses do not extend to the non-convex setting.
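
A minimal NumPy sketch of the updates (41) (our rendering; the signature and defaults are assumptions) reads:

```python
# A minimal sketch of Algorithm 2 (GADMM) following the updates (41).
import numpy as np

def gadmm(grad_f, grad_g, A, B, b, x0, y0, lam0, beta, rho, iters=1000):
    x, y, lam = x0.copy(), y0.copy(), lam0.copy()
    for _ in range(iters):
        r = A @ x + B @ y - b
        x = x - (grad_f(x) + A.T @ lam + rho * A.T @ r) / beta    # (41a)
        r = A @ x + B @ y - b             # residual with the *new* x
        y = y - (grad_g(y) + B.T @ lam + rho * B.T @ r) / beta    # (41b)
        lam = lam + rho * (A @ x + B @ y - b)                     # (41c)
    return x, y, lam
```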

It is also worth noting that the key difference between Algorithm 2 and Algorithm 1 is that in the update step (41b) of Algorithm 2, the newly updated $x^{r+1}$ is used. If $x^r$ were used instead of $x^{r+1}$, then Algorithm 2 would be equivalent to Algorithm 1. Also, there are quite a few recent works applying ADMM-type methods to solve a number of non-convex problems; see, e.g., [39, 40, 41] and the references therein. However, to the best of our knowledge, these algorithms do not take exactly the same form as Algorithm 2 described above, despite the fact that their analyses all appear to be quite similar (i.e., some potential function based on the AL is shown to descend at each iteration of the algorithm). In particular, in [41], both the $x$ and $y$ subproblems are solved using a proximal point method; in [42], the $x$-step is solved using a gradient step, while the $y$-step is solved using the conventional exact minimization. Of course, none of these works analyzed the convergence of these methods to ss2 solutions.

3.1 Application to the global consensus problem

We discuss how Algorithm 2 can be applied to solve the global consensus problem (3). Using the identification given after (38), the distributed nodes and the master node alternate between their updates:

$x_i^{r+1} = x_i^r - \frac{1}{\beta} \big( \nabla f_i(x_i^r) + \lambda_i^r + \rho\, (x_i^r - y^r) \big), \quad i = 1, \dots, N;$
$y^{r+1} = y^r - \frac{1}{\beta} \Big( \nabla h(y^r) - \sum_{i=1}^{N} \lambda_i^r - \rho \sum_{i=1}^{N} \big( x_i^{r+1} - y^r \big) \Big);$
$\lambda_i^{r+1} = \lambda_i^r + \rho\, \big( x_i^{r+1} - y^{r+1} \big), \quad i = 1, \dots, N.$

Clearly, for fixed $y^r$ and $\lambda^r$, the distributed nodes are able to perform their computations completely in parallel.
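
The sketch below (our code; `grads` and `grad_h` are assumed gradient oracles) spells out one such round and makes the node-level parallelism explicit.

```python
# One GADMM round for global consensus (3): A = I_N, B = -1_N, b = 0 (sketch).
import numpy as np

def consensus_round(x, y, lam, grads, grad_h, beta, rho):
    """x: stacked local variables (N,), y: master variable (scalar),
    lam: multipliers (N,), grads[i]: gradient oracle of f_i."""
    # node updates: each x_i needs only (x_i, y, lam_i) -- fully parallel
    x = x - (np.array([g(xi) for g, xi in zip(grads, x)])
             + lam + rho * (x - y)) / beta
    # master update, using the freshly updated x (cf. (41b))
    y = y - (grad_h(y) - lam.sum() - rho * (x - y).sum()) / beta
    lam = lam + rho * (x - y)             # dual ascent at each node
    return x, y, lam
```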

3.2 Convergence to first-order stationary solutions

First we make the following assumptions.

  • [B1] The functions $f$ and $g$ are smooth and both have Lipschitz continuous gradients and Hessians, with constants $L_f$, $L_g$, $M_f$, and $M_g$, respectively.

  • [B2] The functions $f$ and $g$ are lower bounded over $\mathbb{R}^{N_x}$ and $\mathbb{R}^{N_y}$, respectively. Without loss of generality, assume $f \ge 0$ and $g \ge 0$.

  • [B3] The constraint $Ax + By = b$ is feasible over $x \in \mathbb{R}^{N_x}$ and $y \in \mathbb{R}^{N_y}$; the matrix $[A,\, B]$ is not full column rank.

  • [B4] The function $f(x) + g(y)$ is coercive.

  • [B5] The function $f(x) + g(y)$ is a KŁ function, in the sense given in [A5].

Based on the above assumptions, the convergence of Algorithm 2 to ss1 solutions can be shown following a similar line of argument as in [39, 40, 41, 42]. However, since the exact form of this algorithm has not appeared before, for completeness we provide the proof outline in the appendix.

Claim 3.1

Suppose Assumptions [B1] – [B5] are satisfied. For appropriate choices of $\beta$ and $\rho$ [see (82) in the Appendix for the precise expression], and starting from any point $(x^0, y^0, \lambda^0)$, Algorithm 2 converges to the set of ss1 points. Further, if $f(x) + g(y)$ is a KŁ function, then Algorithm 2 converges globally to a unique point $(x^*, y^*, \lambda^*)$.

3.3 Convergence to ss2 solutions

The optimality conditions for the $x$ and $y$ updates (41a) – (41b) are given as

$\nabla f(x^r) + A^T \lambda^r + \rho A^T (Ax^r + By^r - b) + \beta (x^{r+1} - x^r) = 0;$
$\nabla g(y^r) + B^T \lambda^r + \rho B^T (Ax^{r+1} + By^r - b) + \beta (y^{r+1} - y^r) = 0.$

These conditions, combined with the update rule (41c) of the dual variable, give a compact form of the algorithm in which, as in Section 2.3, consecutive optimality conditions are subtracted so that the dual variable is eliminated.

To compactly write the iterations in the form of a linear dynamic system, define

$z^r := \big[ x^r;\; y^r;\; x^{r-1};\; y^{r-1} \big].$

Next we approximate the iteration around a stationary solution $(x^*, y^*, \lambda^*)$. Suppose that $x^r \to x^*$ and $y^r \to y^*$. Then, similarly to the derivation of (26), we can write

$\nabla f(x^r) - \nabla f(x^{r-1}) = \big( \nabla^2 f(x^*) + \Delta_x^r \big) \big( x^r - x^{r-1} \big), \qquad \nabla g(y^r) - \nabla g(y^{r-1}) = \big( \nabla^2 g(y^*) + \Delta_y^r \big) \big( y^r - y^{r-1} \big),$

where we have defined

(43a)   $\Delta_x^r := \int_0^1 \Big( \nabla^2 f\big( x^{r-1} + t\,(x^r - x^{r-1}) \big) - \nabla^2 f(x^*) \Big)\, dt$
(43b)   $\Delta_y^r := \int_0^1 \Big( \nabla^2 g\big( y^{r-1} + t\,(y^r - y^{r-1}) \big) - \nabla^2 g(y^*) \Big)\, dt$
(43c)   $H^* := \mathrm{diag}\big( \nabla^2 f(x^*),\; \nabla^2 g(y^*) \big)$

with the following