# Communication-efficient Algorithm for Distributed Sparse Learning via Two-way Truncation

We propose a communication- and computation-efficient algorithm for high-dimensional distributed sparse learning. At each iteration, the local machines compute the gradient on local data and the master machine solves one shifted $\ell_1$ regularized minimization problem. Via a Two-way Truncation procedure, the communication cost is reduced from $O(d)$, proportional to the dimension as in the state-of-the-art algorithms, to $O(s)$, proportional to the sparsity level. Theoretically, we prove that the estimation error of the proposed algorithm decreases exponentially and matches that of the centralized method under mild assumptions. Extensive experiments on both simulated and real data verify that the proposed algorithm is efficient and achieves performance comparable with the centralized method on high-dimensional sparse learning problems.


## I Introduction

One important problem in machine learning is to find the minimum of the expected loss,

  $\min_{\theta}\; \mathbb{E}_{(X,Y)\sim\mathcal{D}}\big[\, l(Y, \langle X, \theta\rangle) \,\big]. \qquad (1)$

Here $l$ is a loss function and $(X, Y)$ has a distribution $\mathcal{D}$. In practice, the minimizer $\theta^*$ has to be estimated from $N$ samples drawn from the distribution $\mathcal{D}$. In many applications $N$ or the dimension $d$ is very large, so distributed algorithms are necessary in such cases. Without loss of generality, assume that the samples are evenly split over $m$ machines and that the observations of the $j$-th machine are $\{(\mathbf{x}_i^j, y_i^j)\}_{i=1}^n$, with $N = mn$. We consider the high-dimensional learning problem where the dimension $d$ can be very large while the effective variables are supported on a set $S$ with $|S| = s \ll d$. Extensive efforts have been made to develop batch algorithms [1, 2, 3], which provide good convergence guarantees in optimization. However, when $N$ is large, batch algorithms are inefficient, since each iteration takes time at least proportional to $N$. Consequently, there has been growing interest in addressing this problem with distributed optimization frameworks [5, 6, 7], which are more efficient than stochastic algorithms. One important issue with existing distributed optimization methods for sparse learning is that they do not take advantage of the sparse structure, so they have the same communication cost as general dense problems. In this paper, we propose a novel communication-efficient distributed algorithm that explicitly leverages the sparse structure of large-scale sparse learning problems. This allows us to reduce the communication cost from $O(d)$ in existing works to $O(s)$, while still maintaining nearly the same performance under mild assumptions.

**Notations** For a sequence of numbers $a_n$, we use $O(a_n)$ to denote a sequence of numbers $b_n$ such that $b_n \le C a_n$ for some positive constant $C$. Given two sequences of numbers $a_n$ and $b_n$, we say $a_n \lesssim b_n$ if $a_n = O(b_n)$ and $a_n \gtrsim b_n$ if $b_n = O(a_n)$. The notation $a_n \asymp b_n$ denotes that $a_n \lesssim b_n$ and $b_n \lesssim a_n$. For a vector $\mathbf{v} \in \mathbb{R}^d$, the $\ell_q$-norm of $\mathbf{v}$ is defined as $\|\mathbf{v}\|_q = (\sum_{i=1}^d |\mathbf{v}_i|^q)^{1/q}$, where $1 \le q < \infty$; the $\ell_0$-norm of $\mathbf{v}$ is defined as the number of its nonzero entries; the support of $\mathbf{v}$ is defined as $\mathrm{supp}(\mathbf{v}) = \{i : \mathbf{v}_i \neq 0\}$. For simplicity, we use $[d]$ to denote the set $\{1, \dots, d\}$. For a matrix $A$, we define the $\infty$-norm of $A$ as $\|A\|_\infty = \max_{i,j} |A_{ij}|$. Given a number $t$, the hard thresholding $\mathcal{H}_t(\mathbf{v})$ of a vector $\mathbf{v}$ is defined by keeping the $t$ largest entries of $\mathbf{v}$ (in magnitude) and setting the rest to be zero. Given a subset $S$ of the index set $[d]$, the projection $\mathcal{P}_S(\mathbf{v})$ of a vector $\mathbf{v}$ on $S$ is defined by

  $\mathcal{P}_S(\mathbf{v})_j = 0$ if $j \notin S$, and $\mathcal{P}_S(\mathbf{v})_j = \mathbf{v}_j$ if $j \in S$.

$\mathcal{P}_S(\mathbf{v})$ is also denoted as $\mathbf{v}_S$ for short.
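These two operators are straightforward to implement; the following NumPy sketch (with illustrative helper names, not code from the paper) shows both:

```python
import numpy as np

def hard_threshold(v, t):
    """H_t(v): keep the t largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    if t > 0:
        keep = np.argsort(np.abs(v))[-t:]   # indices of the t largest |v_j|
        out[keep] = v[keep]
    return out

def project(v, S):
    """P_S(v): keep the entries indexed by S, zero out the rest."""
    out = np.zeros_like(v)
    idx = list(S)
    out[idx] = v[idx]
    return out
```

For example, `hard_threshold(np.array([3.0, -5.0, 1.0, 0.5]), 2)` keeps only the entries $3$ and $-5$.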

### I-a Related work

There is much previous work on distributed optimizations such as (Zinkevich et al. [8]; Dekel et al. [9]; Zhang et al. [10]; Shamir and Srebro [11]; Arjevani and Shamir [12]; Lee et al. [6]; Zhang and Xiao [13]). Initially, most distributed algorithms used averaging estimators formed by local machines (Zinkevich et al. [8]; Zhang et al. [10]). Then Zhang and Xiao [13], Shamir et al. [14] and Lee et al. [15] proposed more communication-efficient distributed optimization algorithms. More recently, using ideas of the approximate Newton-type method, Jordan et al. [5] and Wang et al. [7] further improved the computational efficiency of this type of method.

Many gradient hard thresholding approaches have been proposed in recent years (Yuan et al. [16]; Li et al. [17]; Jain et al. [18]). These works showed that, under suitable conditions, hard thresholding type first-order algorithms attain linear convergence to a solution with optimal estimation accuracy with high probability. However, to the best of our knowledge, hard thresholding techniques applied to approximate Newton-type distributed algorithms have not been considered yet. In this paper, we therefore present some initial theoretical and experimental results on this topic.

## Ii Algorithm

In this section, we explain our approach to estimating the parameter $\theta^*$ that minimizes the expected loss. The detailed steps are summarized in Algorithm 1.

First the empirical loss at each machine is defined as

  $\mathcal{L}_j(\theta) = \frac{1}{n}\sum_{i=1}^{n} l(y_i^j, \langle \mathbf{x}_i^j, \theta\rangle)$, where $j \in [m]$.

At the beginning of the algorithm, we solve a local Lasso subproblem to get an initial point. Specifically, at iteration $0$, the master machine solves the minimization problem

  $\gamma^0 = \arg\min_\theta\; \mathcal{L}_1(\theta) + \mu_0 \|\theta\|_1. \qquad (2)$

The initial point $\theta^0$ is formed by keeping the $t$ largest elements (in magnitude) of the resulting minimizer and setting the other elements to be zero, i.e., $\theta^0 = \mathcal{H}_t(\gamma^0)$. Then $\theta^0$ is broadcast to the local machines, where it is used to compute the gradient of the local empirical loss at $\theta^0$, that is, $\nabla\mathcal{L}_j(\theta^0)$. The local machines project this gradient on the support $S^0$ of $\theta^0$ and transmit the projection back to the master machine. At the $(h+1)$-th iteration ($h \ge 0$), the master solves a shifted $\ell_1$ regularized minimization subproblem:

  $\gamma^{h+1} = \arg\min_\theta\; \mathcal{L}_1(\theta) - \Big\langle \nabla\mathcal{L}_1(\theta^h) - \mathcal{P}_{S^h}\Big[\frac{1}{m}\sum_{j=1}^m \nabla\mathcal{L}_j(\theta^h)\Big],\, \theta \Big\rangle + \mu_{h+1}\|\theta\|_1. \qquad (3)$

Again the minimizer is truncated to form $\theta^{h+1} = \mathcal{H}_t(\gamma^{h+1})$, and this quantity is communicated to the local machines, where it is used to compute the local gradients as before.

Solving subproblem (3) is inspired by the approach of Wang et al. [7] and Jordan et al. [5]. Note that the formulation takes advantage of both global first-order information and local higher-order information. Specifically, assuming the loss is quadratic and $\mathcal{L}_1$ has an invertible Hessian, the solution of (3) has the following closed form:

  $\gamma^{h+1} = \theta^h - \left[\nabla^2\mathcal{L}_1(\theta^h)\right]^{-1} \mathcal{P}_{S^h}\Big[\frac{1}{m}\sum_{j=1}^m \nabla\mathcal{L}_j(\theta^h)\Big],$

which is similar to a Newton update step. Note that here we add a projection procedure to reduce the number of nonzeros that need to be communicated to the master machine. This procedure is intuitively reasonable. First, when $\theta^h$ is close to $\theta^*$, the elements of the averaged gradient outside the support $S^h$ should be very small, so nominally little error is incurred in the truncation step. Second, when $\theta$ is also close to $\theta^*$, the truncated part has an even smaller effect on the inner product in subproblem (3). Third, we leave $\nabla\mathcal{L}_1(\theta^h)$ in (3) out of the truncation to keep the formulation unbiased.
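For the squared loss, one full pass of this scheme can be sketched as follows. This is an illustrative single-process simulation of Algorithm 1 with hypothetical function names, not the authors' reference implementation; in particular, a ridge-regularized local solve stands in for the Lasso subproblem (2):

```python
import numpy as np

def hard_threshold(v, t):
    """H_t(v): keep the t largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-t:]
    out[keep] = v[keep]
    return out

def two_way_truncation(X_parts, y_parts, t, n_rounds, ridge=1e-3):
    """Simulate Algorithm 1 for the squared loss; machine 0 is the master."""
    X1, y1 = X_parts[0], y_parts[0]
    n, d = X1.shape
    H1 = X1.T @ X1 / n                        # master's local Hessian
    # initialization: local solve + hard thresholding (stand-in for (2))
    gamma = np.linalg.solve(H1 + ridge * np.eye(d), X1.T @ y1 / n)
    theta = hard_threshold(gamma, t)
    for _ in range(n_rounds):
        S = np.flatnonzero(theta)             # support S^h sent to workers
        # each machine evaluates its local gradient at theta and transmits
        # only the entries indexed by S: O(s) communication instead of O(d)
        avg_grad = np.zeros(d)
        for Xj, yj in zip(X_parts, y_parts):
            gj = Xj.T @ (Xj @ theta - yj) / Xj.shape[0]
            avg_grad[S] += gj[S]
        avg_grad /= len(X_parts)
        # master's closed-form Newton-type update, followed by truncation
        gamma = theta - np.linalg.solve(H1, avg_grad)
        theta = hard_threshold(gamma, t)
    return theta
```

On well-conditioned noiseless data with $t$ of order $s$, a few rounds of this loop already recover the true support and drive the estimation error down, in line with the linear convergence predicted by the analysis below.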

## Iii Theoretical Analysis

### Iii-a Main Theorem

We present some theoretical analysis of the proposed algorithm in this section. Our results rest on three assumptions.

**Assumption 1** (Smoothness). The loss $l$ is an $L$-smooth function of its second argument. Moreover, the third derivative of $l$ with respect to its second argument is bounded by a constant $M$.

**Assumption 2** (Restricted strong convexity). The empirical loss function $\mathcal{L}_1$ computed on the first machine satisfies: for all $\Delta \in \mathcal{C}(S, 3)$, we have

  $\mathcal{L}_1(\theta^* + \Delta) - \mathcal{L}_1(\theta^*) - \langle \nabla\mathcal{L}_1(\theta^*), \Delta\rangle \ge \kappa \|\Delta\|_2^2,$

where the cone $\mathcal{C}(S, 3)$ is defined as

  $\mathcal{C}(S, 3) = \{\Delta \in \mathbb{R}^d \,:\, \|\Delta_{S^c}\|_1 \le 3\|\Delta_S\|_1\}.$

**Assumption 3** (Accurate truncation). The iterates $\gamma^h$, $\theta^h$ and supports $S^h$ defined in Algorithm 1 satisfy the following condition: there exist positive constants $\tau_1$ and $\tau_2$ such that for all $h \ge 0$,

  $\|(\gamma^h - \theta^*)_{(S^h)^c}\|_1 \le \tau_1 \|\gamma^h - \theta^*\|_1,$
  $\|(\gamma^{h+1} - \theta^*)_{S^{h+1}\setminus S^h}\|_1 \le \tau_2 \|\gamma^{h+1} - \theta^*\|_1.$

In practice, both $\tau_1$ and $\tau_2$ are very small even after only one round of communication and decrease to $0$ quickly in later steps. For simplicity, we define the following shifted loss functions:

  $\bar{\mathcal{L}}_1(\theta, \theta^h) := \mathcal{L}_1(\theta) + \Big\langle \frac{1}{m}\sum_{j=1}^m \nabla\mathcal{L}_j(\theta^h) - \nabla\mathcal{L}_1(\theta^h),\, \theta \Big\rangle,$
  $\tilde{\mathcal{L}}_1(\theta, \theta^h) := \mathcal{L}_1(\theta) + \Big\langle \mathcal{P}_{S^h}\Big[\frac{1}{m}\sum_{j=1}^m \nabla\mathcal{L}_j(\theta^h) - \nabla\mathcal{L}_1(\theta^h)\Big],\, \theta \Big\rangle.$

Now we state our main theorem.

**Theorem 1.** Suppose that Assumptions 1–3 hold. Let $\theta^{h+1} = \mathcal{H}_t(\gamma^{h+1})$ with $t = c\,s$ for some constant $c > 1$, and let

  $\mu_{h+1} = 4\Big\|\frac{1}{m}\sum_{j=1}^m \nabla\mathcal{L}_j(\theta^*)\Big\|_\infty + 2L\big(\max_{j,i}\|\mathbf{x}_i^j\|_\infty^2\big)\Big[2\sqrt{\tfrac{\log(2d/\delta)}{n}} + \rho\Big]\|\theta^h - \theta^*\|_1 + 2M\big(\max_{j,i}\|\mathbf{x}_i^j\|_\infty^3\big)\|\theta^h - \theta^*\|_1^2, \qquad (4)$

where $\rho$ is a constant depending only on the truncation parameters $\tau_1$ and $\tau_2$.

Then with probability at least $1 - \delta$, we have

  $\|\theta^{h+1} - \theta^*\|_1 \le \dfrac{C_2 s}{\kappa}\Big\|\dfrac{1}{m}\sum_{j=1}^m \nabla\mathcal{L}_j(\theta^*)\Big\|_\infty + \dfrac{C_2 s}{2\kappa} L \cdot \max_{j,i}\|\mathbf{x}_i^j\|_\infty^2 \cdot \Big[2\sqrt{\tfrac{\log(2d/\delta)}{n}} + \rho\Big]\|\theta^h - \theta^*\|_1 + \dfrac{C_2 s}{2\kappa} M \cdot \max_{j,i}\|\mathbf{x}_i^j\|_\infty^3 \cdot \|\theta^h - \theta^*\|_1^2,$

and

  $\|\theta^{h+1} - \theta^*\|_2 \le \dfrac{C_3 \sqrt{s}}{\kappa}\Big\|\dfrac{1}{m}\sum_{j=1}^m \nabla\mathcal{L}_j(\theta^*)\Big\|_\infty + \dfrac{C_3 \sqrt{s}}{2\kappa} L \cdot \max_{j,i}\|\mathbf{x}_i^j\|_\infty^2 \cdot \Big[2\sqrt{\tfrac{\log(2d/\delta)}{n}} + \rho\Big]\|\theta^h - \theta^*\|_1 + \dfrac{C_3 \sqrt{s}}{2\kappa} M \cdot \max_{j,i}\|\mathbf{x}_i^j\|_\infty^3 \cdot \|\theta^h - \theta^*\|_1^2,$

where $C_2$ and $C_3$ are positive constants independent of $n$, $m$, and $d$. The theorem immediately implies the following convergence result.

**Corollary 1.** Suppose that for all $h \ge 0$,

  $M \cdot \big(\max_{j,i}\|\mathbf{x}_i^j\|_\infty^3\big)\|\theta^h - \theta^*\|_1 \le L \cdot \max_{j,i}\|\mathbf{x}_i^j\|_\infty^2 \Big[2\sqrt{\tfrac{\log(2d/\delta)}{n}} + \rho\Big]. \qquad (5)$

Then, under the assumptions of Theorem 1, we have

  $\|\theta^{h+1} - \theta^*\|_1 \le \dfrac{1 - a_n^{h+1}}{1 - a_n} \cdot \dfrac{C_2 s}{\kappa} \cdot \Big\|\dfrac{1}{m}\sum_{j=1}^m \nabla\mathcal{L}_j(\theta^*)\Big\|_\infty + a_n^{h+1}\|\theta^0 - \theta^*\|_1,$
  $\|\theta^{h+1} - \theta^*\|_2 \le \dfrac{1 - a_n^{h+1}}{1 - a_n} \cdot \dfrac{C_3 \sqrt{s}}{\kappa} \cdot \Big\|\dfrac{1}{m}\sum_{j=1}^m \nabla\mathcal{L}_j(\theta^*)\Big\|_\infty + a_n^{h} b_n \|\theta^0 - \theta^*\|_1,$

where

  $a_n = \dfrac{C_2 s}{\kappa} L \cdot \max_{j,i}\|\mathbf{x}_i^j\|_\infty^2 \cdot \Big[2\sqrt{\tfrac{\log(2d/\delta)}{n}} + \rho\Big]$

and

  $b_n = \dfrac{C_3 \sqrt{s}}{\kappa} L \cdot \max_{j,i}\|\mathbf{x}_i^j\|_\infty^2 \cdot \Big[2\sqrt{\tfrac{\log(2d/\delta)}{n}} + \rho\Big],$
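The geometric factors in the corollary come from unrolling the linear recursion of the theorem: writing $e_h = \|\theta^h - \theta^*\|_1$ and using (5) to absorb the quadratic term into the linear one, the $\ell_1$ bound takes the form $e_{h+1} \le b + a_n e_h$ with $b = \frac{C_2 s}{\kappa}\|\frac{1}{m}\sum_{j=1}^m \nabla\mathcal{L}_j(\theta^*)\|_\infty$, and iterating yields

```latex
e_{h+1} \le b + a_n e_h
        \le b\,\bigl(1 + a_n + \cdots + a_n^{h}\bigr) + a_n^{h+1} e_0
        = \frac{1 - a_n^{h+1}}{1 - a_n}\, b + a_n^{h+1} e_0 ,
```

which is exactly the $\ell_1$ bound displayed above; the $\ell_2$ bound follows by applying one $\ell_2$ step of the theorem after $h$ unrolled $\ell_1$ steps, producing the $a_n^h b_n$ factor.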

where $C_2$ and $C_3$ are defined in Theorem 1 and are independent of $n$, $m$, and $d$. From this conclusion, we know that the hard thresholding parameter $t$ can be chosen as $c\,s$, where $c$ can be a moderate constant larger than $1$. By contrast, previous work such as [17], which solves a nonconvex minimization problem subject to an $\ell_0$ constraint, requires $t$ to be of order $\tilde{\kappa}^2 s$, where $\tilde{\kappa}$ is the condition number of the objective function. Moreover, instead of only hard thresholding the solutions of the Lasso subproblems, we also project the gradients in (3). Together, these reduce the communication cost from $O(d)$ to $O(s)$.
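To make the saving concrete, here is a back-of-the-envelope count of floats transmitted per round, with all sizes chosen purely for illustration (they are not taken from the paper):

```python
d, s, m = 100_000, 100, 10      # illustrative dimension, sparsity, machines

# dense round: every machine ships its full d-dimensional gradient
dense_floats = m * d
# two-way truncated round: only the entries on the support S^h travel
# (the O(s) index overhead of a sparse encoding is ignored here)
truncated_floats = m * s

print(dense_floats // truncated_floats)   # -> 1000, the ratio d / s
```

The per-round saving is exactly the dimension-to-sparsity ratio $d/s$, which is large precisely in the high-dimensional regimes the paper targets.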

### Iii-B Sparse Linear Regression

In sparse linear regression, the data $\{(\mathbf{x}_i^j, y_i^j)\}$ are generated according to the model

  $y_i^j = \langle \mathbf{x}_i^j, \theta^* \rangle + \epsilon_i^j, \qquad (6)$

where the noise terms $\epsilon_i^j$ are i.i.d. subgaussian random variables with zero mean. Usually the loss function for this problem is the squared loss $l(y, \langle \mathbf{x}, \theta\rangle) = \frac{1}{2}(y - \langle \mathbf{x}, \theta\rangle)^2$, which is $L$-smooth.
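A minimal data-generation sketch for model (6), with all sizes and constants chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, s = 500, 200, 5                       # illustrative sizes
theta_star = np.zeros(d)
theta_star[:s] = rng.uniform(0.5, 1.0, s)   # s-sparse true parameter
X = rng.standard_normal((n, d))             # subgaussian (Gaussian) design
eps = 0.1 * rng.standard_normal(n)          # zero-mean subgaussian noise
y = X @ theta_star + eps                    # responses from model (6)
```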

Combining Corollary 1 with some intermediate results obtained from [19, 20] and [21], we have the following bound for the estimation error. Suppose the design is subgaussian with parameter $\sigma_X$, the noise is subgaussian with parameter $\sigma$, Assumption 2 holds, and $\mu_{h+1}$ is defined as in (4). Then under the sparse linear model, we have the following estimation error bounds with probability at least $1 - \delta$:

  $\|\theta^{h+1} - \theta^*\|_1 \lesssim \dfrac{1 - a_n^{h+1}}{1 - a_n} \cdot \dfrac{C_2 s\, \sigma \sigma_X}{\kappa}\sqrt{\dfrac{\log(d/\delta)}{mn}} + a_n^{h+1}\, \dfrac{s\, \sigma \sigma_X}{\kappa}\sqrt{\dfrac{\log(nd/\delta)}{n}}$

and

  $\|\theta^{h+1} - \theta^*\|_2 \lesssim \dfrac{1 - a_n^{h+1}}{1 - a_n} \cdot \dfrac{C_3 \sqrt{s}\, \sigma \sigma_X}{\kappa}\sqrt{\dfrac{\log(d/\delta)}{mn}} + a_n^{h} b_n\, \dfrac{s\, \sigma \sigma_X}{\kappa}\sqrt{\dfrac{\log(nd/\delta)}{n}},$

where $C_2$ and $C_3$ are defined in Theorem 1, and where

  $a_n = \dfrac{C_2 s}{\kappa}\, \sigma_X^2 \log\Big(\dfrac{mnd}{\delta}\Big)\Big[2\sqrt{\tfrac{\log(2d/\delta)}{n}} + \rho\Big]$

and

  $b_n = \dfrac{C_3 \sqrt{s}}{\kappa}\, \sigma_X^2 \log\Big(\dfrac{mnd}{\delta}\Big)\Big[2\sqrt{\tfrac{\log(2d/\delta)}{n}} + \rho\Big].$

Under certain conditions we can further simplify the bounds and gain insight into the relation between the estimation error and $s$, $m$, $n$, and $d$. When $s\big(\sqrt{\log d / n} + \rho\big)$ is bounded by a constant smaller than $1$, it is easy to see that by choosing

  $\mu_{h+1} \asymp \sqrt{\dfrac{\log d}{mn}} + \sqrt{\dfrac{\log d}{n}}\Big[s\Big(\sqrt{\tfrac{\log d}{n}} + \rho\Big)\Big]^{h+1},$

the following error bounds hold with high probability:

  $\|\theta^{h+1} - \theta^*\|_1 \lesssim s\sqrt{\dfrac{\log d}{mn}} + s\sqrt{\dfrac{\log d}{n}}\Big[s\Big(\sqrt{\tfrac{\log d}{n}} + \rho\Big)\Big]^{h+1},$
  $\|\theta^{h+1} - \theta^*\|_2 \lesssim \sqrt{\dfrac{s \log d}{mn}} + \sqrt{\dfrac{s \log d}{n}}\Big[s\Big(\sqrt{\tfrac{\log d}{n}} + \rho\Big)\Big]^{h+1}.$

### Iii-C Sparse Logistic Regression

Combining Corollary 1 with some intermediate results obtained from [7] and [22], we can give a similar estimation error bound for sparse logistic regression. The explicit form is omitted due to space limitations.

## Iv Experiments

Now we test our algorithm on both simulated data and real data. In both settings, we compare our algorithm with several baseline algorithms:

• EDSL: the state-of-the-art approach proposed by Jialei Wang et al. [7].

• Centralize: one machine solves the centralized $\ell_1$-regularized loss minimization problem using all the data. This procedure is communication-expensive or requires much larger storage.

• Local: the first machine solves the local $\ell_1$-regularized loss minimization problem using only the data stored on that machine, ignoring all the other data.

• Two-way Truncation: the proposed sparse learning approach which further improves the communication efficiency.

### Iv-a Simulated data

The simulated data $\mathbf{x}_i^j$ are sampled from a multivariate Gaussian distribution with zero mean and covariance matrix $\Sigma$. We choose two different covariance matrices, one yielding a well-conditioned design and the other an ill-conditioned design. The noise in the sparse linear model (6) is set to be a standard Gaussian random variable. We set the true parameter $\theta^*$ to be $s$-sparse, with all entries zero except that the first $s$ entries are i.i.d. random variables from the uniform distribution on $[0, 1]$. Under both models, we set the hard thresholding parameter $t$ moderately greater than $s$.

Here we compare the algorithms in different settings of $(m, n, d)$ and plot the estimation error over rounds of communication. The results for sparse linear regression and sparse logistic regression are shown in Figure 1 and Figure 2, respectively. We can observe from these plots that:

• First, there is indeed a large gap between the local estimation error and the centralized estimation error. The estimation errors of EDSL and Two-way Truncation decrease to the centralized level within the first several rounds of communication.

• Second, the Two-way Truncation algorithm is competitive with EDSL in both statistical accuracy and convergence rate, as the theory indicates. Since it converges at least as fast as EDSL while requiring less communication and computation per iteration, it is more efficient overall.

The above results support the theory that the Two-way Truncation approach is indeed more efficient while remaining competitive with the centralized approach and EDSL.

### Iv-B Real data

In this section, we examine the above sparse learning algorithms on real-world datasets from the UCI Machine Learning Repository and the LIBSVM website (https://www.csie.ntu.edu.tw/cjlin/libsvmtools/datasets/). The high-dimensional datasets 'dna' and 'a9a' are used for the regression model and the classification model, respectively. We randomly partition each dataset into training, validation, and testing sets. The training data are divided randomly across the machines and processed by the algorithms mentioned above. The results are summarized in Figure 3. These real-data results again validate the theoretical analysis that the proposed Two-way Truncation approach is an effective sparse learning method with very small communication and computation costs.

## V Conclusions

In this paper we propose a novel distributed sparse learning algorithm with Two-way Truncation. Theoretically, we prove that the algorithm produces an estimate that converges to the minimizer of the expected loss exponentially fast and attains nearly the same statistical accuracy as EDSL and the centralized method. Thanks to the truncation procedure, the algorithm is more efficient in both communication and computation. Extensive experiments on both simulated and real data verify these claims.

## Acknowledgment

The authors gratefully acknowledge support from NSF Award CCF-1217751 and DARPA Young Faculty Award N66001-14-1-4047, and thank Jialei Wang for very useful suggestions.

## References

• [1] Jerome Friedman, Trevor Hastie, Holger Höfling, Robert Tibshirani, et al., “Pathwise coordinate optimization,” The Annals of Applied Statistics, vol. 1, no. 2, pp. 302–332, 2007.
• [2] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
• [3] Lin Xiao and Tong Zhang, “A proximal-gradient homotopy method for the sparse least-squares problem,” SIAM Journal on Optimization, vol. 23, no. 2, pp. 1062–1091, 2013.
• [4] Peter Bühlmann and Sara Van De Geer, Statistics for high-dimensional data: methods, theory and applications, Springer Science & Business Media, 2011.
• [5] Michael I Jordan, Jason D Lee, and Yun Yang, “Communication-efficient distributed statistical learning,” stat, vol. 1050, pp. 25, 2016.
• [6] Jason D Lee, Qihang Lin, Tengyu Ma, and Tianbao Yang, “Distributed stochastic variance reduced gradient methods and a lower bound for communication complexity,” arXiv preprint arXiv:1507.07595, 2015.
• [7] Jialei Wang, Mladen Kolar, Nathan Srebro, and Tong Zhang, “Efficient distributed learning with sparsity,” arXiv preprint arXiv:1605.07991, 2016.
• [8] Martin Zinkevich, Markus Weimer, Lihong Li, and Alex J Smola, “Parallelized stochastic gradient descent,” in Advances in Neural Information Processing Systems, 2010, pp. 2595–2603.
• [9] Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao, “Optimal distributed online prediction using mini-batches,” Journal of Machine Learning Research, vol. 13, no. Jan, pp. 165–202, 2012.
• [10] Yuchen Zhang, Martin J Wainwright, and John C Duchi, “Communication-efficient algorithms for statistical optimization,” in Advances in Neural Information Processing Systems, 2012, pp. 1502–1510.
• [11] Ohad Shamir and Nathan Srebro, “Distributed stochastic optimization and learning,” in Communication, Control, and Computing (Allerton), 2014 52nd Annual Allerton Conference on. IEEE, 2014, pp. 850–857.
• [12] Yossi Arjevani and Ohad Shamir, “Communication complexity of distributed convex learning and optimization,” in Advances in Neural Information Processing Systems, 2015, pp. 1756–1764.
• [13] Yuchen Zhang and Xiao Lin, “Disco: Distributed optimization for self-concordant empirical loss.,” in ICML, 2015, pp. 362–370.
• [14] Ohad Shamir, Nathan Srebro, and Tong Zhang, “Communication-efficient distributed optimization using an approximate newton-type method.,” in Proceedings of the International Conference on Machine Learning, 2014, vol. 32, pp. 1000–1008.
• [15] Jason D Lee, Yuekai Sun, Qiang Liu, and Jonathan E Taylor, “Communication-efficient sparse regression: a one-shot approach,” arXiv preprint arXiv:1503.04337, 2015.
• [16] Xiaotong Yuan, Ping Li, and Tong Zhang, “Gradient hard thresholding pursuit for sparsity-constrained optimization,” in Proceedings of the International Conference on Machine Learning, 2014, pp. 127–135.
• [17] Xingguo Li, Tuo Zhao, Raman Arora, Han Liu, and Jarvis Haupt, “Stochastic variance reduced optimization for nonconvex sparse learning,” in Proceedings of the International Conference on Machine Learning, 2016, pp. 917–925.
• [18] Prateek Jain, Ambuj Tewari, and Purushottam Kar, “On iterative hard thresholding methods for high-dimensional m-estimation,” in Advances in Neural Information Processing Systems, 2014, pp. 685–693.
• [19] Mark Rudelson and Shuheng Zhou, “Reconstruction from anisotropic random measurements,” Ann Arbor, vol. 1001, pp. 48109, 2011.
• [20] Roman Vershynin, “Introduction to the non-asymptotic analysis of random matrices,” arXiv preprint arXiv:1011.3027, 2010.
• [21] Martin J Wainwright, “Sharp thresholds for high-dimensional and noisy recovery of sparsity using l1-constrained quadratic programming,” IEEE Transactions on Information Theory, 2009.
• [22] Sahand N Negahban, Pradeep Ravikumar, Martin J Wainwright, and Bin Yu, “A unified framework for high dimensional analysis of m-estimators with decomposable regularizers.,” Statistical Science, vol. 27, no. 4, pp. 538–557, 2012.