# Asynchronous parallel primal-dual block update methods

Recent years have witnessed a surge of asynchronous (async-) parallel computing methods, driven by the extremely big data involved in many modern applications and by the advancement of multi-core machines and computer clusters. In optimization, most works on async-parallel methods address unconstrained problems or those with block separable constraints. In this paper, we propose an async-parallel method based on block coordinate update (BCU) for solving convex problems with a nonseparable linear constraint. Running on a single node, the method becomes a novel randomized primal-dual BCU with adaptive stepsize for multi-block affinely constrained problems. For these problems, Gauss-Seidel cyclic primal-dual BCU requires strong convexity to guarantee convergence. In contrast, assuming mere convexity, we show that the objective value sequence generated by the proposed algorithm converges in probability to the optimal value and the constraint residual to zero. In addition, we establish an ergodic O(1/k) convergence result, where k is the number of iterations. Numerical experiments demonstrate the efficiency of the proposed method and its significantly better speed-up performance compared with its sync-parallel counterpart.


## 1 Introduction

Modern applications in various data sciences and engineering can involve huge amounts of data and/or variables [42]. Driven by these very large-scale problems and also the advancement of multi-core computers, parallel computing has gained tremendous attention in recent years. In this paper, we consider the affinely constrained multi-block structured problem:

$$\min_{x}\ f(x_1,\dots,x_m)+\sum_{i=1}^m g_i(x_i),\quad\text{s.t.}\ \sum_{i=1}^m A_ix_i=b,\tag{1}$$

where the variable x is partitioned into multiple disjoint blocks x_1, …, x_m, f is a continuously differentiable convex function, and each g_i is a lower semi-continuous, extended-valued, convex, and possibly non-differentiable function. Besides the nonseparable affine constraint, (1) can also include certain block separable constraints by letting some of the g_i's be indicator functions of convex sets, e.g., a nonnegativity constraint.

We will present a novel asynchronous (async-) parallel primal-dual method (see Algorithm 2) for finding a solution to (1). Suppose there are multiple nodes (or cores, CPUs). We let one node (called the master node) update both the primal and dual variables, and all the remaining ones (called worker nodes) compute and provide block gradients of f to the master node. We assume each g_i is proximable (see the definition in (5) below). In addition, we make the following assumption:

###### Assumption 1

The computation of each block gradient ∇_i f(x) is roughly at least p times more expensive than the update of (x_i, λ) for all i, where p is the number of nodes.

This assumption serves only to achieve good practical speed-up of the async-parallel method. When it holds, the master node can quickly digest the block gradient information fed by all worker nodes, so the latter keep working without pause. Note that our theoretical analysis does not require this assumption. If there is only one node (i.e., p = 1), the assumption trivially holds, and our method provides a novel serial primal-dual BCU with adaptive stepsize for solving (1).

### 1.1 Motivating examples

Problems in the form of (1) arise in many areas, including signal processing, machine learning, finance, and statistics. For example, the basis pursuit problem [7] seeks a sparse solution on an affine subspace through solving the linearly constrained program:

$$\min_x\ \|x\|_1,\quad\text{s.t.}\ Ax=b.\tag{2}$$

Partitioning x into multiple disjoint blocks in an arbitrary way, one can formulate (2) in the form of (1) with f = 0 and each g_i(x_i) = ∥x_i∥_1.
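As an illustration (not from the paper), a small NumPy sketch of this block reformulation, with hypothetical sizes n = 12 and m = 3: partitioning x into blocks induces a matching column partition of A, and the separable ℓ1 terms sum back to the full objective.

```python
import numpy as np

n, m = 12, 3                 # hypothetical problem and block sizes
blk = n // m
rng = np.random.default_rng(0)
A = rng.standard_normal((5, n))
x = rng.standard_normal(n)

# Block partition of x and the matching column partition of A, so that
# A @ x == sum_i A_i @ x_i, matching the constraint of (1).
x_blocks = [x[i * blk:(i + 1) * blk] for i in range(m)]
A_blocks = [A[:, i * blk:(i + 1) * blk] for i in range(m)]

Ax_sum = sum(Ai @ xi for Ai, xi in zip(A_blocks, x_blocks))
assert np.allclose(A @ x, Ax_sum)

# f = 0 and g_i(x_i) = ||x_i||_1, so the objective of (1) equals ||x||_1.
obj = sum(np.linalg.norm(xi, 1) for xi in x_blocks)
assert np.isclose(obj, np.linalg.norm(x, 1))
```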

Another example is portfolio optimization [28]. Suppose we have a unit of capital to invest in m assets. Let x_i be the fraction of capital invested in the i-th asset and ξ_i be the expected return rate of the i-th asset. The goal is to minimize the risk, measured by x^⊤Σx, subject to a total unit of capital and a minimum expected return c, where Σ is the covariance matrix of the returns. To find the optimal x, one can solve the problem:

$$\min_x\ \frac12x^\top\Sigma x,\quad\text{s.t.}\ \sum_{i=1}^m x_i\le1,\ \sum_{i=1}^m\xi_ix_i\ge c,\ x_i\ge0,\ \forall i.\tag{3}$$

Introducing slack variables for the first two inequalities, one can easily write (3) in the form of (1) with f being the quadratic objective and each g_i being an indicator function of the nonnegativity constraint set.
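A minimal sketch of this slack-variable reformulation, with made-up return rates ξ and threshold c (all data here are synthetic, only for checking the constructed equality system):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
M = rng.standard_normal((m, m))
Sigma = M.T @ M                      # a synthetic covariance matrix (PSD)
xi = rng.uniform(0.01, 0.10, m)      # hypothetical expected return rates
c = 0.03

# Append slacks s1, s2 so the two inequalities in (3) become equalities:
#   sum_i x_i + s1 = 1   and   sum_i xi_i x_i - s2 = c,
# which is the equality-constrained form required by (1).
A = np.vstack([np.r_[np.ones(m), 1.0, 0.0],
               np.r_[xi,        0.0, -1.0]])
b = np.array([1.0, c])

x = np.full(m, 0.2)                  # a portfolio investing 80% of capital
z = np.r_[x, 1.0 - x.sum(), xi @ x - c]   # slacks chosen so that A z = b
assert np.allclose(A @ z, b)
```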

The formulation (1) also includes as a special case the dual support vector machine (SVM) [9]. Given a training data set {(x_i, y_i)}_{i=1}^m with labels y_i ∈ {−1, +1}, let X = [x_1, …, x_m] and y = [y_1, …, y_m]^⊤. The dual form of the linear SVM can be written as

$$\min_\theta\ \frac12\theta^\top\mathrm{Diag}(y)X^\top X\mathrm{Diag}(y)\theta-e^\top\theta,\quad\text{s.t.}\ y^\top\theta=0,\ 0\le\theta_i\le C,\ \forall i,\tag{4}$$

where e is the all-one vector, and C is a given number related to the size of the soft margin. It is easy to formulate (4) in the form of (1) with f being the quadratic objective function and each g_i the indicator function of the interval [0, C].
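A small sketch (with made-up data) of assembling the quadratic in (4); the clip at the end is the prox of the [0, C] indicator that each g_i contributes in the reformulation:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, C = 3, 6, 1.0                     # feature dim, sample count, margin bound
X = rng.standard_normal((d, m))         # columns are training points x_i
y = rng.choice([-1.0, 1.0], size=m)     # labels y_i

# Hessian of the quadratic objective in (4): Diag(y) X'X Diag(y).
Q = np.diag(y) @ X.T @ X @ np.diag(y)
assert np.allclose(Q, Q.T)
assert np.linalg.eigvalsh(Q).min() >= -1e-10   # positive semidefinite

def dual_obj(theta):
    """Objective of (4): 0.5 * theta' Q theta - e' theta."""
    return 0.5 * theta @ Q @ theta - theta.sum()

# Each g_i is the indicator of [0, C]; its prox is a coordinate-wise clip.
theta = np.clip(rng.standard_normal(m), 0.0, C)
assert np.isfinite(dual_obj(theta))
```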

Finally, the penalized and constrained (PAC) regression problem [21] is also an example of (1), with f being an average of many loss functions and with a linear constraint of finitely many equations. When the number of training samples is much larger than the number of constraint equations (as often holds for problems with massive training data), the PAC regression satisfies Assumption 1. In addition, when the block gradient computation dominates the per-update cost, both (3) and (4) satisfy the assumption, and thus the proposed async-parallel method will be efficient when applied to these problems. Although Assumption 1 does not hold for (2) since f vanishes, our method running on a single node can still outperform state-of-the-art non-parallel solvers; see the numerical results in section 4.1.

### 1.2 Block coordinate update

The block coordinate update (BCU) method breaks a possibly very high-dimensional variable into small pieces and renews one piece at a time while all the remaining blocks are fixed. Although problem (1) can be extremely large-scale and complicated, BCU solves a sequence of small-sized and easier subproblems. Since (1) possesses nice structure, e.g., coordinate friendliness [30], BCU not only has low per-update complexity but can also enjoy faster overall convergence than methods that update the whole variable every time. BCU has been applied to many unconstrained or block-separably constrained optimization problems (e.g., [39, 40, 29, 33, 44, 35, 45, 20]), and it has also been used to solve affinely constrained separable problems, i.e., in the form of (1) without the f term (e.g., [12, 11, 16, 17, 18]). However, only a few existing works (e.g., [19, 14, 13]) have studied BCU for solving affinely constrained problems with a nonseparable objective function.

### 1.3 Asynchronization

Parallel computing methods distribute computation over, and collect results from, multiple nodes. Synchronous (sync-) parallel methods require all nodes to keep the same pace: only after every node finishes its own computation do they altogether proceed to the next step. This way, faster nodes must wait for the slowest one, wasting much time in idling. On the contrary, async-parallel methods keep all nodes continuously working and eliminate the idle waiting. Numerous works (e.g., [34, 26, 27, 31]) have demonstrated that async-parallel methods can achieve significantly better speed-up than their sync-parallel counterparts.

Due to the lack of synchronization, the information used by a certain node may be outdated. Hence the convergence of an async-parallel method cannot simply be inherited from its non-parallel counterpart but often requires new analysis tools. Most existing works only analyze such methods for unconstrained or block-separably constrained problems. Exceptions include [41, 46, 3, 4], which consider separable problems with special affine constraints.

### 1.4 Related works

Recent years have witnessed a surge of async-parallel methods, partly due to the increasingly large scale of the data and variables involved in modern applications. However, only a few existing works discuss such methods for affinely constrained problems. Below we review the literature on async-parallel BCU methods in optimization and also on primal-dual BCU methods for affinely constrained problems.

It appears that the first async-parallel method was proposed by Chazan and Miranker [5] for solving linear systems. Later, such methods were applied in many other fields. In optimization, the first async-parallel BCU method was due to Bertsekas and Tsitsiklis [1] for problems with a smooth objective; it was shown that the objective gradient sequence converges to zero. Tseng [38] further analyzed its convergence rate and established local linear convergence by assuming isocost surface separation and a local Lipschitz error bound on the objective. Recently, [27, 26] developed async-parallel methods based on randomized BCU for convex problems with possibly block separable constraints. They established convergence and rate results by assuming a bounded delay on the outdated block gradient information. The results have been extended to the case with unbounded probabilistic delay in [32], which also shows convergence of async-parallel BCU methods for nonconvex problems. For problems with a convex separable objective and linear constraints, [41] proposed to apply the alternating direction method of multipliers (ADMM) in an asynchronous and distributed way. Assuming a special structure on the linear constraint, it established an ergodic O(1/k) convergence result, where k is the total number of iterations. In [46, 3, 4], async-ADMM is applied to distributed multi-agent optimization, which can be equivalently formulated in the form of (1) with a consensus constraint. Among them, [46] showed sublinear convergence of async-ADMM for convex problems, and [4] established linear convergence for strongly convex problems, while [3] also considered nonconvex cases. The works [31, 8] developed async-parallel BCU methods for fixed-point and monotone inclusion problems. Although these settings are more general (including convex optimization as a special case), strong monotonicity (similar to strong convexity in optimization) is needed to establish convergence rate results.

Running on a single node, the proposed async-parallel method reduces to a serial randomized primal-dual BCU. In the literature, various Gauss-Seidel (GS) cyclic BCU methods have been developed for solving separable convex programs with linear constraints. Although a cyclic primal-dual BCU can empirically work well, in general it may diverge [12, 6]. To guarantee convergence, additional assumptions besides convexity must be made, such as strong convexity of part of the objective [15, 2, 22, 36, 25, 24, 10] or orthogonality properties of the block matrices in the linear constraint [6]. Without these assumptions, modifications to the algorithm are necessary for convergence, such as a further correction step after each cycle of updates [17, 18], random permutation of all blocks before each cycle of updates [37], Jacobi-type updates [11, 16], which are essentially linearized augmented Lagrangian method (ALM) steps, and hybrid Jacobi-GS updates [36, 23, 43]. Different from these modifications, our algorithm simply employs randomization in selecting the block variable and can perform significantly better than Jacobi-type methods. In addition, its convergence is guaranteed under a mere convexity assumption, a stronger result than those for GS-type methods.

### 1.5 Contributions

The contributions are summarized as follows.

• We propose an async-parallel BCU method for solving multi-block structured convex programs with linear constraint. The algorithm is the first async-parallel primal-dual method for affinely constrained problems with nonseparable objective. When there is only one node, it reduces to a novel serial primal-dual BCU method with stepsize adaptive to blocks.

• Under mere convexity, convergence of the proposed method is guaranteed. We first establish convergence of the serial BCU method: the objective value converges in probability to the optimal value and the constraint residual to zero. In addition, we establish an ergodic O(1/k) convergence rate result. Then, by bounding a cross term involving the delayed block gradient, we prove that similar convergence results hold for the async-parallel BCU method if a delay-dependent stepsize is chosen.

• We implement the proposed algorithm and apply it to the basis pursuit, quadratic programming, and also the support vector machine problems. Numerical results demonstrate that the serial BCU is comparable to or better than state-of-the-art methods. In addition, the async-parallel BCU method can achieve significantly better speed-up performance than its sync-parallel counterpart.

### 1.6 Notation and Outline

We use bold small letters for vectors and bold capital letters for matrices. denotes the integer set . represents a vector with for its -th block and zero for all other blocks. We denote as the Euclidean norm of and for a symmetric positive semidefinite matrix . We reserve

for the identity matrix, and its size is clear from the context.

stands for the expectation about conditional on previous history . We use for convergence in probability of a random vector sequence to .

For ease of notation, we let F = f + g, g(x) = Σ_{i=1}^m g_i(x_i), and r^k = Ax^k − b. Denote

$$\Phi(\bar{x},x,\lambda)=F(\bar{x})-F(x)-\langle\lambda,A\bar{x}-b\rangle.$$

Then (x^*, λ^*) is a saddle point of (1) if Ax^* = b and Φ(x, x^*, λ^*) ≥ 0 for all x.

The proximal operator of a function ψ is defined as

$$\mathrm{prox}_\psi(x)=\operatorname*{argmin}_y\ \psi(y)+\frac12\|x-y\|^2.\tag{5}$$

If prox_ψ has a closed-form solution or is easy to compute, we call ψ proximable.
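For instance, for ψ(y) = t∥y∥_1 the prox is the well-known soft-thresholding map, which is what makes the ℓ1 blocks in (2) proximable; a minimal NumPy sketch:

```python
import numpy as np

def prox_l1(x, t):
    """prox of t*||.||_1, i.e., argmin_y t*||y||_1 + 0.5*||x - y||^2.

    The minimization separates per coordinate and yields soft-thresholding.
    """
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.array([2.0, -0.5, 0.3, -3.0])
# Entries with magnitude below t are zeroed; the rest shrink toward 0 by t.
assert np.allclose(prox_l1(x, 1.0), [1.0, 0.0, 0.0, -2.0])
```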

Outline. The rest of the paper is organized as follows. In section 2, we present the serial and also async-parallel primal-dual BCU methods for (1). Convergence results of the algorithms are shown in section 3. Section 4 gives experimental results, and finally section 5 concludes the paper.

## 2 Algorithm

In this section, we propose an async-parallel primal-dual method for solving (1). Our algorithm is a BCU-type method based on the augmented Lagrangian function of (1):

$$\mathcal{L}_\beta(x,\lambda)=f(x)+g(x)-\langle\lambda,Ax-b\rangle+\frac{\beta}{2}\|Ax-b\|^2,$$

where λ is the multiplier (or augmented Lagrangian dual variable), and β > 0 is a penalty parameter.
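A direct NumPy sketch of evaluating this function for user-supplied f and g (the helper name and the tiny check below are illustrative, not from the paper):

```python
import numpy as np

def aug_lagrangian(f, g, A, b, x, lam, beta):
    """Evaluate L_beta(x, lam) = f(x) + g(x) - <lam, Ax-b> + (beta/2)||Ax-b||^2."""
    r = A @ x - b                              # constraint residual
    return f(x) + g(x) - lam @ r + 0.5 * beta * (r @ r)

# Tiny check: at a feasible x the residual terms vanish, so the augmented
# Lagrangian reduces to the objective f(x) + g(x) for any lam and beta.
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x = np.array([0.25, 0.75]); lam = np.array([3.0])
f = lambda x: 0.5 * x @ x
g = lambda x: np.abs(x).sum()
assert np.isclose(aug_lagrangian(f, g, A, b, x, lam, beta=10.0), f(x) + g(x))
```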

### 2.1 Non-parallel method

For ease of understanding, we first present a non-parallel method in Algorithm 1. At every iteration, the algorithm chooses one block i_k out of m uniformly at random and renews x_{i_k} by (6) while fixing all the remaining blocks. Upon finishing the update to x_{i_k}, it immediately changes the multiplier λ. The linearization of the possibly complicated smooth term f greatly eases the x_{i_k}-subproblem. Depending on the form of g_i, we can choose an appropriate weight matrix P_i to make (6) simple to solve. Since each g_i is proximable, one can always easily find a solution to (6) if P_i is a scalar multiple of the identity. For even simpler g_i, such as the ℓ1-norm or the indicator function of a box constraint set, we can set P_i to a general diagonal matrix and still have a closed-form solution to (6).
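Since the displays (6)-(7) of Algorithm 1 are not reproduced in this excerpt, the following is only a plausible sketch of one such randomized primal-dual iteration, assuming P_i = η_i I (so the x_i subproblem reduces to a single prox step on the linearized augmented Lagrangian) and a dual update of the form λ ← λ − ρ(Ax − b); all helper names are hypothetical:

```python
import numpy as np

def serial_pd_bcu(grad_f_block, prox_g, A_blocks, b, x_blocks,
                  eta, beta, rho, iters, rng):
    """Sketch of randomized primal-dual BCU (hypothetical helper names).

    grad_f_block(x_blocks, i) returns nabla_i f(x); prox_g[i](v, t) evaluates
    the prox of t*g_i.  Taking P_i = eta[i]*I turns the x_i subproblem into
    one prox step on the linearized augmented Lagrangian.
    """
    m = len(x_blocks)
    r = sum(Ai @ xi for Ai, xi in zip(A_blocks, x_blocks)) - b   # r = Ax - b
    lam = np.zeros_like(b)
    for _ in range(iters):
        i = int(rng.integers(m))                   # uniform random block i_k
        d = grad_f_block(x_blocks, i) - A_blocks[i].T @ (lam - beta * r)
        xi_new = prox_g[i](x_blocks[i] - d / eta[i], 1.0 / eta[i])
        r += A_blocks[i] @ (xi_new - x_blocks[i])  # cheap residual update
        x_blocks[i] = xi_new
        lam -= rho * r                             # dual (multiplier) update
    return x_blocks, lam

# Basis pursuit instance (2): f = 0 and g_i = ||.||_1 on each block.
rng = np.random.default_rng(3)
m, blk, rows = 4, 3, 5
A_blocks = [rng.standard_normal((rows, blk)) for _ in range(m)]
b = rng.standard_normal(rows)
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
beta = 1.0
eta = [beta * np.linalg.norm(Ai, 2) ** 2 + 1e-3 for Ai in A_blocks]
x_blocks, lam = serial_pd_bcu(lambda xb, i: 0.0, [soft] * m, A_blocks, b,
                              [np.zeros(blk) for _ in range(m)],
                              eta, beta, rho=beta / m, iters=2000, rng=rng)
res = np.linalg.norm(sum(Ai @ xi for Ai, xi in zip(A_blocks, x_blocks)) - b)
assert res < np.linalg.norm(b)   # the residual decreased from its initial ||b||
```

The choice ρ = β/m mirrors the β ≥ mρ type relation appearing in the convergence analysis below; the η_i values are conservative guesses, not the paper's adaptive stepsizes.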

Randomly choosing a block to update has advantages over the cyclic order from both theoretical and empirical perspectives. We will show that this randomized BCU has guaranteed convergence under mere convexity, rather than the strong convexity assumed by the cyclic primal-dual BCU. In addition, randomization enables us to parallelize the algorithm in an efficient way, as shown in Algorithm 2.

### 2.2 Async-parallel method

Assume there are p nodes. Let the data and variables be stored in a global memory accessible to every node. We let one node (called the master node) update both the primal variable x and the dual variable λ, and the remaining ones (called worker nodes) compute block gradients of f and provide them to the master node. The method is summarized in Algorithm 2. We make a few remarks on the algorithm as follows.

• Special case: If there is only one node (i.e., p = 1), the algorithm simply reduces to the non-parallel Algorithm 1.

• Iteration number: Only the master node increases the iteration number k, which counts the number of times x is updated and also the number of used block gradients. Hence, even if there were no delay, Algorithm 2 would not reduce to its sync-parallel counterpart.

• Delayed information: Since all worker nodes provide block gradients to the master node, we cannot guarantee that every computed block gradient is used immediately to update x. Hence, the block gradient used in (9) may not equal ∇_{i_k}f(x^k) but be a delayed (i.e., outdated) one. The delay is usually of the same order as the number of nodes and can affect the stepsize, but the effect is negligible as long as the block number m is larger than the delay by an order (see Theorem 3.8).

Because the x-blocks are computed on the master node, the values of x and λ used in the update are always up-to-date. Alternatively, one could let the worker nodes compute new x-blocks and then feed them (together with the induced changes in the residual) to the master node. In that case, x and λ would also be outdated when computing the x-blocks.

• Load balance: Under Assumption 1, if (9) is easy to solve (e.g., when each P_i is diagonal) and all nodes have similar computing power, the master node will have consumed all received block gradients before a new one arrives. We let the master node itself also compute a block gradient whenever no new one has been sent by any worker node. This way, all nodes work continuously without idle waiting. Compared to its sync-parallel counterpart, which typically suffers serious load imbalance, the async-parallel method can achieve better speed-up; see the numerical results in section 4.3.
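The master/worker pattern described above can be caricatured with threads and a queue. This toy sketch is not the paper's protocol (in particular, it ignores Algorithm 2's stepsize rules and the load-balancing fallback); it only illustrates how a gradient consumed by the master may have been computed from an outdated x:

```python
import queue
import threading
import numpy as np

def run_async(grad_block, step, x, m, n_workers=2, iters=200):
    """Toy master/worker loop: workers push (i, grad_i) pairs computed from a
    shared, unsynchronized read of x; the master consumes one pair per
    iteration k, so the gradient it applies may be stale."""
    q = queue.Queue(maxsize=4 * n_workers)
    stop = threading.Event()

    def worker(seed):
        wrng = np.random.default_rng(seed)
        while not stop.is_set():
            i = int(wrng.integers(m))          # worker picks a block at random
            try:
                q.put((i, grad_block(x, i)), timeout=0.05)
            except queue.Full:
                pass                           # master busy; try again

    threads = [threading.Thread(target=worker, args=(s,), daemon=True)
               for s in range(n_workers)]
    for t in threads:
        t.start()
    for _ in range(iters):                     # master: one block update per k
        i, g = q.get()
        x[i] -= step * g                       # g may come from an outdated x
    stop.set()
    for t in threads:
        t.join()
    return x

# Minimize 0.5*||x||^2 blockwise: grad_i is just x[i]; with a small step,
# even stale gradients keep shrinking the iterate.
x = run_async(lambda x, i: x[i], step=0.1, x=np.ones(4), m=4)
assert np.linalg.norm(x) < 2.0                 # decreased from ||(1,1,1,1)|| = 2
```

The small step 0.1 is deliberate: with delayed gradients, an aggressive stepsize can destabilize the recursion, which is exactly why Algorithm 2's stepsize must be delay-dependent.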

## 3 Convergence analysis

In this section, we present convergence results of the proposed algorithms. First, we analyze the non-parallel Algorithm 1: we show that the objective value F(x^k) and the residual ∥Ax^k − b∥ converge in probability to the optimal value and to zero, respectively. In addition, we establish a sublinear convergence rate result based on an averaged point. Then, by bounding a cross term involving the delayed block gradient, we establish similar results for the async-parallel Algorithm 2.

Throughout our analysis, we make the following assumptions.

###### Assumption 2 (Existence of a solution)

There exists a pair of primal-dual solutions (x^*, λ^*) such that Ax^* = b and Φ(x, x^*, λ^*) ≥ 0 for all x.

###### Assumption 3 (Gradient Lipschitz continuity)

There exist constants L_1, …, L_m and L_r such that for any x and y,

$$\|\nabla_if(x+U_iy)-\nabla_if(x)\|\le L_i\|y_i\|,\quad i=1,\dots,m,$$

and

$$\|\nabla f(x+U_iy)-\nabla f(x)\|\le L_r\|y_i\|,\quad i=1,\dots,m.$$

Denote by L the block diagonal matrix with L_i I as its i-th diagonal block. Then, under the above assumption, it holds that

$$f(x+U_iy)\le f(x)+\langle\nabla_if(x),y_i\rangle+\frac{L_i}{2}\|y_i\|^2,\quad\forall i,\ \forall x,y.\tag{10}$$
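The block descent inequality (10) can be checked numerically. For a quadratic f(x) = ½x^⊤Qx, the block Lipschitz constant L_i is the largest eigenvalue of the i-th diagonal block of Q, since ∇_i f varies only through that block when the other coordinates are fixed (sizes and data below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, blk = 6, 3, 2                      # 3 blocks of size 2 (arbitrary sizes)
M = rng.standard_normal((n, n))
Q = M.T @ M                              # f(x) = 0.5 x'Qx, grad f(x) = Qx
f = lambda x: 0.5 * x @ Q @ x

for i in range(m):
    sl = slice(i * blk, (i + 1) * blk)
    Li = np.linalg.eigvalsh(Q[sl, sl]).max()   # block Lipschitz constant L_i
    x = rng.standard_normal(n)
    y = np.zeros(n)
    y[sl] = rng.standard_normal(blk)           # y = U_i y_i: one nonzero block
    lhs = f(x + y)
    rhs = f(x) + (Q @ x)[sl] @ y[sl] + 0.5 * Li * (y[sl] @ y[sl])
    assert lhs <= rhs + 1e-9                   # inequality (10)
```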

### 3.1 Convergence results of Algorithm 1

We first establish several lemmas, which will be used to show our main convergence results.

###### Lemma 3.1

Let {(x^k, λ^k)} be the sequence generated from Algorithm 1. Then for any x independent of i_k, it holds that

$$\mathbb{E}_{i_k}\langle\nabla_{i_k}f(x^k),x^{k+1}_{i_k}-x_{i_k}\rangle\ge-\Big(1-\frac1m\Big)\big[f(x^k)-f(x)\big]+\mathbb{E}_{i_k}\Big[f(x^{k+1})-f(x)-\frac12\|x^{k+1}-x^k\|_L^2\Big].$$

Proof. We write ⟨∇_{i_k}f(x^k), x^{k+1}_{i_k} − x_{i_k}⟩ = ⟨∇_{i_k}f(x^k), x^k_{i_k} − x_{i_k}⟩ + ⟨∇_{i_k}f(x^k), x^{k+1}_{i_k} − x^k_{i_k}⟩. For the first term, we use the uniform distribution of i_k on [m] and the convexity of f to have

$$\mathbb{E}_{i_k}\langle\nabla_{i_k}f(x^k),x^k_{i_k}-x_{i_k}\rangle=\frac1m\langle\nabla f(x^k),x^k-x\rangle\ge\frac1m\big[f(x^k)-f(x)\big],$$

and for the second term, we use (10) to have

$$\begin{aligned}\langle\nabla_{i_k}f(x^k),x^{k+1}_{i_k}-x^k_{i_k}\rangle&\ge f(x^{k+1})-f(x^k)-\frac{L_{i_k}}{2}\|x^{k+1}_{i_k}-x^k_{i_k}\|^2 &(11)\\&=f(x^{k+1})-f(x^k)-\frac12\|x^{k+1}-x^k\|_L^2. &(12)\end{aligned}$$

Combining the above two inequalities gives the desired result.

###### Lemma 3.2

For any x independent of i_k such that Ax = b, it holds

$$\begin{aligned}&\mathbb{E}_{i_k}\langle-A_{i_k}^\top(\lambda^k-\beta r^k),x^{k+1}_{i_k}-x_{i_k}\rangle\\=\ &-\Big(1-\frac1m\Big)\big(-\langle\lambda^k,r^k\rangle+\beta\|r^k\|^2\big)-\mathbb{E}_{i_k}\langle\lambda^{k+1},r^{k+1}\rangle+(\beta-\rho)\,\mathbb{E}_{i_k}\|r^{k+1}\|^2\\&-\frac{\beta}{2}\,\mathbb{E}_{i_k}\big[\|r^{k+1}\|^2-\|r^k\|^2+\|x^{k+1}-x^k\|^2_{A^\top A}\big].\end{aligned}$$

Proof. Let y^k = −A^⊤(λ^k − βr^k). Then

$$\begin{aligned}\mathbb{E}_{i_k}\langle y^k_{i_k},x^{k+1}_{i_k}-x_{i_k}\rangle&=\mathbb{E}_{i_k}\langle y^k_{i_k},x^k_{i_k}-x_{i_k}\rangle+\mathbb{E}_{i_k}\langle y^k_{i_k},x^{k+1}_{i_k}-x^k_{i_k}\rangle &(13)\\&=\frac1m\langle y^k,x^k-x\rangle+\mathbb{E}_{i_k}\langle y^k,x^{k+1}-x^k\rangle &(14)\\&=-\Big(1-\frac1m\Big)\langle y^k,x^k-x\rangle+\mathbb{E}_{i_k}\langle y^k,x^{k+1}-x\rangle. &(15)\end{aligned}$$

Note that Ax = b and hence A(x^{k+1} − x) = r^{k+1}. In addition, from λ^{k+1} = λ^k − ρr^{k+1} and r^k = r^{k+1} − A(x^{k+1} − x^k), we have y^k = −A^⊤λ^{k+1} + (β − ρ)A^⊤r^{k+1} − βA^⊤A(x^{k+1} − x^k). Hence,

$$\langle y^k,x^{k+1}-x\rangle=\langle-A^\top\lambda^{k+1},x^{k+1}-x\rangle+(\beta-\rho)\|r^{k+1}\|^2-\beta\langle A(x^{k+1}-x^k),A(x^{k+1}-x)\rangle.\tag{16}$$

Noting

$$\langle A(x^{k+1}-x^k),A(x^{k+1}-x)\rangle=\frac12\big[\|r^{k+1}\|^2-\|r^k\|^2+\|x^{k+1}-x^k\|^2_{A^\top A}\big],$$

we complete the proof by plugging (16) into (13).

###### Lemma 3.3

For any x independent of i_k, it holds

$$\mathbb{E}_{i_k}\langle\tilde\nabla g_{i_k}(x^{k+1}_{i_k}),x^{k+1}_{i_k}-x_{i_k}\rangle\ge\mathbb{E}_{i_k}\big[g(x^{k+1})-g(x)\big]-\Big(1-\frac1m\Big)\big[g(x^k)-g(x)\big],$$

where $\tilde\nabla g_{i_k}(x^{k+1}_{i_k})$ denotes a subgradient of g_{i_k} at x^{k+1}_{i_k}.

Proof. From the convexity of , it follows that

 Eik⟨~∇gik(xk+1ik),xk+1ik−xik⟩≥Eik[gik(xk+1ik)−gik(xik)]. (17)

Writing and taking the conditional expectation give

 Eik[gik(xk+1ik)−gik(xik)]=1m[g(xk)−g(x)]+Eik[g(xk+1)−g(xk)].

We obtain the desired result by plugging the above equation into (17).

Using the above three lemmas, we show an inequality after each iteration of the algorithm.

###### Theorem 3.4 (Fundamental result)

Let {(x^k, λ^k)} be the sequence generated from Algorithm 1. Then for any x such that Ax = b, it holds

$$\begin{aligned}&\mathbb{E}_{i_k}\Big[F(x^{k+1})-F(x)-\langle\lambda^{k+1},r^{k+1}\rangle+(\beta-\rho)\|r^{k+1}\|^2-\frac{\beta}{2}\|r^{k+1}\|^2\Big] &(18)\\&\quad+\frac12\mathbb{E}_{i_k}\|x^{k+1}-x\|_P^2+\frac12\mathbb{E}_{i_k}\|x^{k+1}-x^k\|^2_{P-L-\beta A^\top A} &(19)\\&\le\Big(1-\frac1m\Big)\big[F(x^k)-F(x)-\langle\lambda^k,r^k\rangle+\beta\|r^k\|^2\big]-\frac{\beta}{2}\|r^k\|^2+\frac12\|x^k-x\|_P^2, &(20)\end{aligned}$$

where P is the block diagonal matrix with P_i as its i-th diagonal block.

Proof. Since x^{k+1}_{i_k} is a solution to (6), there is a subgradient $\tilde\nabla g_{i_k}(x^{k+1}_{i_k})$ of g_{i_k} at x^{k+1}_{i_k} such that

$$\nabla_{i_k}f(x^k)-A_{i_k}^\top(\lambda^k-\beta r^k)+\tilde\nabla g_{i_k}(x^{k+1}_{i_k})+P_{i_k}(x^{k+1}_{i_k}-x^k_{i_k})=0.$$

Hence,

$$\mathbb{E}_{i_k}\langle\nabla_{i_k}f(x^k)-A_{i_k}^\top(\lambda^k-\beta r^k)+\tilde\nabla g_{i_k}(x^{k+1}_{i_k})+P_{i_k}(x^{k+1}_{i_k}-x^k_{i_k}),\,x^{k+1}_{i_k}-x_{i_k}\rangle=0.\tag{21}$$

In the above equation, using Lemmas 3.1 through 3.3 and noting

$$\langle P_{i_k}(x^{k+1}_{i_k}-x^k_{i_k}),x^{k+1}_{i_k}-x_{i_k}\rangle=\frac12\big[\|x^{k+1}-x\|_P^2-\|x^k-x\|_P^2+\|x^{k+1}-x^k\|_P^2\big],\tag{22}$$

we have the desired result.

Now we are ready to show the convergence results of Algorithm 1.

###### Theorem 3.5 (Global convergence in probability)

Let {(x^k, λ^k)} be the sequence generated from Algorithm 1. If β ≥ mρ and P ⪰ L + βA^⊤A, then

$$F(x^k)\xrightarrow{p}F(x^*),\quad\text{and}\quad\|r^k\|\xrightarrow{p}0.$$

Proof. Note that, for any x with Ax = b,

$$F(x^{k+1})-F(x)-\langle\lambda^{k+1},r^{k+1}\rangle=\Phi(x^{k+1},x,\lambda)+\langle\lambda-\lambda^{k+1},r^{k+1}\rangle.$$

Hence, taking expectation over both sides of (18) and summing up from k = 0 through K yield

$$\begin{aligned}&\mathbb{E}\big[\Phi(x^{K+1},x,\lambda)+\langle\lambda-\lambda^{K+1},r^{K+1}\rangle\big]+\frac1m\sum_{k=1}^K\mathbb{E}\big[\Phi(x^k,x,\lambda)+\langle\lambda-\lambda^k,r^k\rangle\big]+(\beta-\rho)\mathbb{E}\|r^{K+1}\|^2\\&+\Big(\frac{\beta}{m}-\rho\Big)\sum_{k=1}^K\mathbb{E}\|r^k\|^2-\frac{\beta}{2}\mathbb{E}\|r^{K+1}\|^2+\frac12\mathbb{E}\|x^{K+1}-x\|_P^2+\frac12\sum_{k=0}^K\mathbb{E}\|x^{k+1}-x^k\|^2_{P-L-\beta A^\top A}\\\le\ &\Big(1-\frac1m\Big)\big[F(x^0)-F(x)-\langle\lambda^0,r^0\rangle+\beta\|r^0\|^2\big]+\frac12\|x^0-x\|_P^2-\frac{\beta}{2}\|r^0\|^2.\tag{23}\end{aligned}$$

Since λ^{K+1} = λ^K − ρr^{K+1}, it follows from Young's inequality that

$$\langle\lambda-\lambda^{K+1},r^{K+1}\rangle+(\beta-\rho)\|r^{K+1}\|^2-\frac{\beta}{2}\|r^{K+1}\|^2\ge-\frac{1}{2\beta}\|\lambda-\lambda^K\|^2.$$

In addition,

$$\begin{aligned}\sum_{k=1}^K\langle\lambda-\lambda^k,r^k\rangle&=\frac{1}{2\rho}\sum_{k=1}^K\big[\|\lambda-\lambda^k\|^2-\|\lambda-\lambda^{k-1}\|^2+\|\lambda^{k-1}-\lambda^k\|^2\big]\\&=\frac{1}{2\rho}\big[\|\lambda-\lambda^K\|^2-\|\lambda-\lambda^0\|^2\big]+\frac{\rho}{2}\sum_{k=1}^K\|r^k\|^2.\end{aligned}$$

Plugging the above two equations into (23) and using λ^0 = 0, we have

$$\begin{aligned}&\mathbb{E}\,\Phi(x^{K+1},x,\lambda)+\frac1m\sum_{k=1}^K\mathbb{E}\,\Phi(x^k,x,\lambda)+\Big(\frac{\beta}{m}+\frac{\rho}{2m}-\rho\Big)\sum_{k=1}^K\mathbb{E}\|r^k\|^2+\Big(\frac{1}{2m\rho}-\frac{1}{2\beta}\Big)\mathbb{E}\|\lambda-\lambda^K\|^2\\&+\frac12\mathbb{E}\|x^{K+1}-x\|_P^2+\frac12\sum_{k=0}^K\mathbb{E}\|x^{k+1}-x^k\|^2_{P-L-\beta A^\top A}\\\le\ &\Big(1-\frac1m\Big)\big[F(x^0)-F(x)+\beta\|r^0\|^2\big]+\frac12\|x^0-x\|_P^2-\frac{\beta}{2}\|r^0\|^2+\frac{1}{2m\rho}\mathbb{E}\|\lambda\|^2.\tag{24}\end{aligned}$$

Letting (x, λ) = (x^*, λ^*) in the above inequality, we have from Φ(x^{K+1}, x^*, λ^*) ≥ 0, β ≥ mρ, and P ⪰ L + βA^⊤A that

$$\frac1m\sum_{k=1}^K\mathbb{E}\,\Phi(x^k,x^*,\lambda^*)+\Big(\frac{\beta}{m}+\frac{\rho}{2m}-\rho\Big)\sum_{k=1}^K\mathbb{E}\|r^k\|^2<\infty,\quad\forall K,$$

which together with the nonnegativity of every summand implies that

$$\lim_{k\to\infty}\mathbb{E}\,\Phi(x^k,x^*,\lambda^*)=0,\tag{25a}$$

$$\lim_{k\to\infty}\mathbb{E}\|r^k\|=0.\tag{25b}$$

For any ϵ > 0, it follows from the Markov inequality that

$$\mathrm{Prob}\big(\|r^k\|>\epsilon\big)\le\frac{\mathbb{E}\|r^k\|}{\epsilon}\to0,\ \text{as }k\to\infty,$$

and

$$\begin{aligned}&\mathrm{Prob}\big(|F(x^k)-F(x^*)|\ge\epsilon\big)\\=\ &\mathrm{Prob}\big(F(x^k)-F(x^*)\ge\epsilon\big)+\mathrm{Prob}\big(F(x^k)-F(x^*)\le-\epsilon\big)\\\le\ &\mathrm{Prob}\Big(F(x^k)-F(x^*)-\langle\lambda^*,r^k\rangle\ge\frac{\epsilon}{2}\Big)+\mathrm{Prob}\Big(\langle\lambda^*,r^k\rangle\ge\frac{\epsilon}{2}\Big)+\mathrm{Prob}\big(-\langle\lambda^*,r^k\rangle\ge\epsilon\big)\\\le\ &\mathrm{Prob}\Big(F(x^k)-F(x^*)-\langle\lambda^*,r^k\rangle\ge\frac{\epsilon}{2}\Big)+\mathrm{Prob}\Big(\|\lambda^*\|\cdot\|r^k\|\ge\frac{\epsilon}{2}\Big)+\mathrm{Prob}\big(\|\lambda^*\|\cdot\|r^k\|\ge\epsilon\big)\\\to\ &0,\ \text{as }k\to\infty,\tag{26}\end{aligned}$$

where in the first inequality we have used the fact that F(x^k) − F(x^*) − ⟨λ^*, r^k⟩ = Φ(x^k, x^*, λ^*) ≥ 0, and the last limit follows from (25) and the Markov inequality. This completes the proof.

Given any ϵ > 0 and σ ∈ (0, 1), we can also estimate the number of iterations needed for the algorithm to produce a solution satisfying an ϵ error bound with probability no less than 1 − σ.

###### Definition 3.1 ((ϵ,σ)-solution)

Given ϵ > 0 and σ ∈ (0, 1), a random vector x̄ is called an (ϵ, σ)-solution to (1) if Prob(|F(x̄) − F(x^*)| ≥ ϵ) ≤ σ and Prob(∥Ax̄ − b∥ ≥ ϵ) ≤ σ.

###### Theorem 3.6 (Ergodic convergence rate)

Let {(x^k, λ^k)} be the sequence generated from Algorithm 1. Assume β ≥ mρ and P ⪰ L + βA^⊤A. Let

$$\bar{x}^{K+1}=\frac{x^{K+1}+\frac1m\sum_{k=1}^K x^k}{1+K/m}$$

and

$$C_0=\Big(1-\frac1m\Big)\big[F(x^0)-F(x^*)\big]+\frac12\|x^0-x^*\|_P^2+\Big(\frac{\beta}{2}-\frac{\beta}{m}\Big)\|r^0\|^2.$$

Then

$$-\frac{1}{1+K/m}\Big(C_0+\frac{2}{m\rho}\|\lambda^*\|^2\Big)\le\mathbb{E}\big[F(\bar{x}^{K+1})-F(x^*)\big]\le\frac{C_0}{1+K/m},\tag{31}$$

$$\mathbb{E}\|A\bar{x}^{K+1}-b\|\le\frac{1}{1+K/m}\Big(C_0+\frac{1}{2m\rho}\big(1+\|\lambda^*\|\big)^2\Big).\tag{32}$$

In addition, given any ϵ > 0 and σ ∈ (0, 1), if

$$K\ge m\cdot\max\left(\frac{C_0+\frac{1}{2m\rho}(1+\|\lambda^*\|)^2}{\epsilon\sigma}-1,\ \frac{5C_0+\frac{13}{2m\rho}\|\lambda^*\|^2}{\epsilon\sigma}-1\right),\tag{33}$$

then x̄^{K+1} is an (ϵ, σ)-solution to (1).

Proof. Since F is convex, it follows from (23) that

 EΦ(