Federated Nonconvex Sparse Learning

12/31/2020 ∙ by Qianqian Tong, et al. ∙ University of Connecticut

Nonconvex sparse learning plays an essential role in many areas, such as signal processing and deep network compression. Iterative hard thresholding (IHT) methods are the state of the art for nonconvex sparse learning due to their capability of recovering the true support and their scalability to large datasets. Theoretical analysis of IHT is currently based on centralized IID data. In realistic large-scale situations, however, data are distributed, hardly IID, and private to local edge computing devices. It is thus necessary to examine the properties of IHT in federated settings, in which clients update in parallel on local devices and communicate with a central server only occasionally, without sharing local data. In this paper, we propose two IHT methods: Federated Hard Thresholding (Fed-HT) and Federated Iterative Hard Thresholding (FedIter-HT). We prove that both algorithms enjoy a linear convergence rate and have strong guarantees to recover the optimal sparse estimator, similar to traditional IHT methods, but now with decentralized non-IID data. Empirical results demonstrate that the Fed-HT and FedIter-HT outperform their competitor, a distributed IHT, in terms of decreasing the objective values with lower requirements on communication rounds and bandwidth.

I Introduction

Federated learning is a privacy-preserving learning framework for large-scale machine learning on edge computing devices, and it solves the data-decentralized optimization problem:

(1)   $\min_{w \in \mathbb{R}^d} f(w) := \sum_{k=1}^N p_k f_k(w)$, with $f_k(w) = \mathbb{E}_{z \sim \mathcal{D}_k}[\ell(w; z)]$,

where $f_k$ is the loss function of the $k$-th client (or device) with weight $p_k$, $\sum_{k=1}^N p_k = 1$, $\mathcal{D}_k$ is the distribution of the data located locally on the $k$-th client, and $N$ is the total number of clients. Federated learning enables numerous clients to coordinately train a model parameterized by $w$, while keeping their own data locally rather than sharing them with a central server. Due to the high communication cost, mini-batch stochastic gradient descent (SGD) has not been the method of choice for federated learning. The FedAvg algorithm was proposed in [21]; it significantly reduces the communication cost by running multiple local SGD steps and has become the de facto federated learning method. Later, the client-drift problem was observed for FedAvg [6, 9, 28], and the FedProx algorithm was proposed [16], in which each client adds a proximal term to its local subproblem to address this issue of FedAvg.

In federated learning, many clients can work collaboratively without sharing local private data with each other or with a central server [13, 12]. The clients can be heterogeneous edge computing devices such as phones, personal computers, network sensors, or other computing resources. During training, every device maintains its own raw data and only shares model-parameter updates with a central server. Compared with well-studied distributed learning, the federated learning setting is more practical in real life and has three major differences: 1) the communication between clients and/or a central server can be slow, which requires new sparse learning algorithms to be communication-efficient; 2) the distributions of training data over devices can be non-independent and non-identical (non-IID), i.e., $\mathcal{D}_k$ and $\mathcal{D}_{k'}$ can be very different for $k \neq k'$; 3) the devices are presumably unbalanced in their capability of curating data, which means that some clients may have more local data than others.

When sparse learning becomes distributed and uses data collected by distributed devices, the local datasets can be too sensitive to share during the construction of a sparse inference model. For instance, meta-analyses may integrate genomic data from a large number of labs to identify (a sparse set of) genes contributing to the risk of a disease without sharing data across the labs [30, 10]. Smartphone-based healthcare systems may need to learn the most important mobile health indicators from a large number of users, but personal health data collected on the phone are private [14]. Because of the parameter sparsity, the communication cost can be lower than when learning with dense parameters. However, the SGD algorithm, widely used to train deep neural nets, may not be suitable because the stochastic gradients can be dense during the training process. Thus, communication efficiency is still the main challenge in deploying sparse learning. For example, the signal processing community has been hunting for more communication-efficient algorithms, due to the constraints on power and bandwidth of various sensors [27]. It is necessary and beneficial to examine the sparsity-constrained empirical risk minimization problem with decentralized data as follows:

(2)   $\min_{w \in \mathbb{R}^d} f(w) := \sum_{k=1}^N p_k f_k(w)$  subject to  $\|w\|_0 \le \tau$,

where $f_k$ and $p_k$ are defined as in (1), $\|w\|_0$ denotes the $\ell_0$-norm of the vector $w$, which counts the number of nonzero entries in $w$, and $\tau$ is the sparsity level pre-specified for $w$. Communication-efficient algorithms for solving (2) can be pivotal and generally useful in decentralized high-dimensional data analyses [4, 29, 1, 8].

Even without the decentralized-data consideration, finding a solution to (2) is already NP-hard because of the non-convexity and non-smoothness of the cardinality constraint [22]. Extensive research has been done on nonconvex sparse learning when the training data can be centralized. The methods largely fall into the regimes of either matching pursuit methods [20, 25, 23, 5] or iterative hard thresholding (IHT) methods [2, 7, 24]. Even though matching pursuit methods achieve remarkable success in minimizing quadratic loss functions (such as $\ell_0$-constrained linear regression problems), they require solving an argmin over the identified support after hard thresholding at each iteration, which has no analytical solution for an arbitrary loss and can be time-consuming [1]. Hence, iterative gradient-based HT methods have gained significant interest and become popular for nonconvex sparse learning.

Iterative hard thresholding methods include the gradient descent HT (GD-HT) [7], stochastic gradient descent HT (SGD-HT) [24], hybrid stochastic gradient HT (HSG-HT) [33], and stochastic variance reduced gradient HT (SVRG-HT) [18] methods. These methods update the iterate as follows:

$w^{t+1} = H_\tau\big(w^t - \gamma\, g^t\big)$,

where $\gamma$ is the learning rate, $g^t$ can be the full gradient, a stochastic gradient, or a variance-reduced gradient at the $t$-th iteration, and $H_\tau(\cdot)$ denotes the HT operator that preserves the top $\tau$ elements (in magnitude) of its argument and sets the other elements to $0$. All these centralized iterative HT algorithms can be extended to a distributed version, the Distributed IHT (see the Supplementary for details), in which the central server aggregates (averages) the local parameter updates from the clients and broadcasts the latest model parameters to the individual clients, whereas each client updates the parameters based on its local data and sends the update back to the central server. The central server is also in charge of randomly partitioning the training data and distributing them to the different clients. Existing theoretical analysis of gradient-based IHT methods also applies to the Distributed IHT, but it is not suitable for analyzing IHT in the federated learning setting with the above three differences.
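To make the operator concrete, here is a minimal NumPy sketch of the hard thresholding step used above; the function names hard_threshold and iht_step are ours and serve only as an illustration, not code from the paper.

import numpy as np

def hard_threshold(w, tau):
    """H_tau(w): keep the tau largest-magnitude entries of w, zero out the rest."""
    out = np.zeros_like(w)
    if tau <= 0:
        return out
    idx = np.argpartition(np.abs(w), -tau)[-tau:]   # indices of the top-tau entries
    out[idx] = w[idx]
    return out

def iht_step(w, grad, gamma, tau):
    """One generic IHT update: w_next = H_tau(w - gamma * grad)."""
    return hard_threshold(w - gamma * grad, tau)

# Example: project a dense vector onto its 3 largest-magnitude coordinates.
w = np.array([0.2, -1.5, 0.05, 3.0, -0.7])
print(hard_threshold(w, 3))   # [ 0.  -1.5  0.   3.  -0.7]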

A distributed sparse learning algorithm has been proposed in [31], which solves a relaxed $\ell_1$-norm regularized problem and thus introduces extra bias relative to (2). Even though variants of the Distributed IHT, such as [26] and [3], have been proposed, they are communication-expensive and suffer from bandwidth limits, since information needs to be exchanged at every iteration. An asynchronous parallel SVRG-HT in shared memory has also been proposed in [18], but it cannot be applied in our scenario. We hence propose federated HT algorithms, which enjoy lower communication costs.

Our Main Contributions are summarized as follows.

(a) We develop two communication-efficient schemes for the federated HT method: the Federated Hard Thresholding (Fed-HT) algorithm, which applies the HT operator only at the central server, right before distributing the aggregated parameters to the clients; and the Federated Iterative Hard Thresholding (FedIter-HT) algorithm, which applies the HT operator to both the local updates and the central-server aggregate. To our knowledge, this is the first attempt to apply HT algorithms in federated learning settings.

(b) We provide the first set of theoretical results for the federated HT method, specifically for the Fed-HT and the FedIter-HT, under non-IID data. We prove that both algorithms enjoy a linear convergence rate and have a strong guarantee for sparsity recovery.

In particular, Theorems 3.1 (for the Fed-HT) and 4.1 (for the FedIter-HT) show that the estimation error between the algorithm iterate $w^R$ and the optimal $w^*$ is upper bounded as

$\mathbb{E}\|w^R - w^*\|_2^2 \le \alpha^R \|w^0 - w^*\|_2^2 + \beta$,

where $w^0$ is the initial guess of the solution, the convergence-rate factor $\alpha$ is related to the algorithm parameter $K$ (the number of SGD steps on each device before communication) and to the closeness between the pre-specified sparsity level $\tau$ and the true sparsity $\tau^*$, and $\beta$ determines a statistical bias term that is related not only to $K$ but also to the gradient of $f$ at the sparse solution $w^*$ and to a measurement of the non-IIDness of the data across the devices. These theoretical results help us examine and compare the proposed algorithms. For instance, higher non-IIDness across clients causes a larger bias for both algorithms. More local iterations may decrease $\alpha$ but increase the statistical bias. The statistical bias induced by the FedIter-HT in Theorem 4.1 matches the best known upper bound for traditional IHT methods [33]. Thus, for more concrete formulations of the sparse learning problem, such as sparse linear regression and sparse logistic regression, we also provide statistical analysis of their maximum likelihood estimators (M-estimators) when using the FedIter-HT to solve them.

(c) Extensive experiments in simulations and on real-life datasets demonstrate the effectiveness of the proposed algorithms over standard distributed learning. The experiments on real-life data also show that the extra noise introduced by decentralized non-IID data may actually help federated sparse learning converge to a better local optimum.

II Preliminaries

$N$, $k$ — the total number and the index of clients/devices
$p_k$ — the weight of the loss function on client $k$
$R$, $r$ — the total number and the index of communication rounds
$K$, $t$ — the total number and the index of local iterations
$\nabla f(w)$ — the full gradient
$\nabla f_{I}(w)$ — the stochastic gradient over the minibatch $I$
$\nabla f_{k,i}(w)$ — the stochastic gradient over a training example indexed by $i$ on the $k$-th device
$\gamma$ — the stepsize/learning rate of the local update
$\mathbb{1}(\cdot)$ — an indicator function
$\mathrm{supp}(w)$ — the support of $w$, i.e., the index set of nonzero elements in $w$
$w^*$ — the optimal solution to (2)
$w_k^{r,t}$ — the local parameter vector on device $k$ at the $t$-th iteration of the $r$-th round
$\tau$ — the required sparsity level
$\tau^*$ — the optimal sparsity level of (2), $\tau^* = \|w^*\|_0$
$\Pi_S(w)$ — the projection that keeps only the elements of $w$ indexed in $S$
$\mathbb{E}$, $\mathbb{E}_k$ — the expectation over the stochasticity across all clients and of client $k$, respectively
TABLE I: Brief summary of notations in this paper

We formalize our problem as (2), and give the notation, assumptions, and preparatory lemmas used in this paper. We denote vectors by lowercase letters, e.g., $w$, and denote the $\ell_2$-norm and the $\ell_0$-norm of a vector $w$ by $\|w\|_2$ and $\|w\|_0$, respectively. The model parameters form a vector $w \in \mathbb{R}^d$. Let $O(\cdot)$ represent an asymptotic upper bound and $[d]$ be the integer set $\{1, \ldots, d\}$. The support $S_k^{r,t} = \mathrm{supp}(w_k^{r,t})$ is associated with the $t$-th iteration in the $r$-th round on device $k$. For simplicity, we drop indices throughout the paper when there is no ambiguity.
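As a small illustration of this notation, the support set supp(w) and the projection $\Pi_S(w)$ can be computed as follows; the helper names supp and project_onto_support are ours, not the paper's.

import numpy as np

def supp(w):
    """supp(w): the index set of nonzero elements of w."""
    return np.nonzero(w)[0]

def project_onto_support(w, S):
    """Pi_S(w): keep only the elements of w indexed in S and zero out the rest."""
    S = np.asarray(S, dtype=int)
    out = np.zeros_like(w)
    out[S] = w[S]
    return out

w = np.array([0.0, 3.0, 0.0, -1.5])
print(supp(w))                           # [1 3]
print(project_onto_support(w, [0, 1]))   # [0. 3. 0. 0.]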

We use the same conditions employed in the theoretical analysis of other IHT methods by assuming that the loss functions satisfy the following conditions:

Assumption 1.

We assume that the loss function $f_k$ on each device

  1. is restricted $\rho_s$-strongly convex at the sparsity level $s$ for a given $s$, i.e., there exists a constant $\rho_s > 0$ such that for all $w_1, w_2$ with $\|w_1 - w_2\|_0 \le s$, we have $f_k(w_1) - f_k(w_2) - \langle \nabla f_k(w_2), w_1 - w_2 \rangle \ge \frac{\rho_s}{2}\|w_1 - w_2\|_2^2$;

  2. is restricted $L_s$-strongly smooth at the sparsity level $s$ for a given $s$, i.e., there exists a constant $L_s > 0$ such that for all $w_1, w_2$ with $\|w_1 - w_2\|_0 \le s$, we have $f_k(w_1) - f_k(w_2) - \langle \nabla f_k(w_2), w_1 - w_2 \rangle \le \frac{L_s}{2}\|w_1 - w_2\|_2^2$;

  3. has $\sigma^2$-bounded stochastic gradient variance, i.e., $\mathbb{E}_i\big[\|\nabla f_{k,i}(w) - \nabla f_k(w)\|_2^2\big] \le \sigma^2$.

Remark 1.

When $s = d$, the above assumption is no longer restricted to a support at a given sparsity level, and $f_k$ is actually $\rho$-strongly convex and $L$-strongly smooth.

Following the same convention in federated learning [16, 9], we also assume the dissimilarity between the gradients of the local functions and the global function is bounded as follows.

Assumption 2.

The functions $f_k$ ($k \in [N]$) are $B$-locally dissimilar, i.e., there exists a constant $B \ge 1$ such that

$\sum_{k=1}^N p_k\, \|\nabla f_k(w)\|_2^2 \le B^2\, \|\nabla f(w)\|_2^2$

for any $w$.

Based on the above assumptions, we have the following preparatory lemmas for our theorems.

Lemma 2.1.

([17]) For $w \in \mathbb{R}^d$ and any $\bar{w}$ with $\|\bar{w}\|_0 \le \tau^* \le \tau$, we have

$\|H_\tau(w) - \bar{w}\|_2^2 \le \nu\, \|w - \bar{w}\|_2^2$,

where $\nu = 1 + \frac{2\sqrt{\tau^*}}{\sqrt{\tau - \tau^*}}$ and $H_\tau(\cdot)$ is the hard thresholding operator.

Lemma 2.2.

If a differentiable convex function $f_k$ is restricted $L_s$-strongly smooth with parameter $s$, i.e., there exists a generic constant $L_s$ such that for any $w_1, w_2$ with $\|w_1 - w_2\|_0 \le s$ and $S = \mathrm{supp}(w_1) \cup \mathrm{supp}(w_2)$,
$f_k(w_1) - f_k(w_2) - \langle \nabla f_k(w_2), w_1 - w_2 \rangle \le \frac{L_s}{2}\|w_1 - w_2\|_2^2$,

then we have:
$\|\Pi_S\big(\nabla f_k(w_1) - \nabla f_k(w_2)\big)\|_2^2 \le 2 L_s \big(f_k(w_1) - f_k(w_2) - \langle \nabla f_k(w_2), w_1 - w_2 \rangle\big)$.

This is also true for the global smoothness parameter $L$.

III The Fed-HT Algorithm

In this section, we first describe our new federated sparse learning framework via hard thresholding, the Fed-HT, and then discuss its convergence rate.

A high-level summary of the Fed-HT is given in Algorithm 1. The Fed-HT generates a sequence of sparse vectors $w^1, w^2, \ldots, w^R$ from an initial sparse approximation $w^0$. At the $r$-th round, clients receive the global parameter vector $w^r$ from the central server and then run $K$ steps of minibatch SGD based on their local private data. In each step, client $k$ updates $w_k^{r,t+1} = w_k^{r,t} - \gamma \nabla f_{I_k^{r,t}}(w_k^{r,t})$ for $t = 0, \ldots, K-1$. The clients send $w_k^{r,K}$, $k \in [N]$, back to the central server, which averages them to obtain a dense global parameter vector and applies the HT operator to obtain the sparse iterate $w^{r+1}$. Compared with the commonly used FedAvg, the Fed-HT largely reduces the communication cost because the central server broadcasts a sparse iterate at each of the $R$ rounds.

Input: The learning rate $\gamma$, the sparsity level $\tau$, and the number of clients $N$.
Initialize $w^0$ with $\|w^0\|_0 \le \tau$
for $r = 0$ to $R-1$ do
     for client $k = 1$ to $N$ in parallel do
          $w_k^{r,0} = w^r$
          for $t = 0$ to $K-1$ do
               Sample uniformly a batch $I_k^{r,t}$
               $g_k^{r,t} = \nabla f_{I_k^{r,t}}(w_k^{r,t})$
               $w_k^{r,t+1} = w_k^{r,t} - \gamma\, g_k^{r,t}$
          end for
     end for
     Exact-Average: $w^{r+1} = H_\tau\big(\sum_{k=1}^N p_k\, w_k^{r,K}\big)$
end for
Algorithm 1 Federated Hard Thresholding (Fed-HT)
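For concreteness, below is a minimal NumPy sketch of the Fed-HT loop, assuming a least-squares local loss, equal client weights, and full-batch local gradients in place of minibatches; the function and variable names (fed_ht, client_data, local_steps) and the default values are ours, not the paper's.

import numpy as np

def hard_threshold(w, tau):
    """Keep the tau largest-magnitude entries of w and zero out the rest."""
    out = np.zeros_like(w)
    idx = np.argpartition(np.abs(w), -tau)[-tau:]
    out[idx] = w[idx]
    return out

def fed_ht(client_data, d, tau, gamma=0.01, rounds=100, local_steps=5):
    """Fed-HT sketch: K local SGD steps per client; HT applied only at the server."""
    w = np.zeros(d)                                          # initial (trivially tau-sparse) iterate
    p = np.full(len(client_data), 1.0 / len(client_data))    # equal client weights p_k
    for _ in range(rounds):
        local_models = []
        for X, y in client_data:
            wk = w.copy()
            for _ in range(local_steps):
                grad = X.T @ (X @ wk - y) / len(y)           # full-batch least-squares gradient
                wk = wk - gamma * grad                       # dense local update, no HT locally
            local_models.append(wk)
        w = hard_threshold(sum(pk * wk for pk, wk in zip(p, local_models)), tau)
    return w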

The following theorem characterizes our theoretical analysis of the Fed-HT in terms of its parameter estimation accuracy for sparsity-constrained problems. Although this paper focuses on the cardinality constraint, the theoretical result is applicable to other sparsity constraints, such as a constraint based on matrix rank. The main theorem follows; the detailed proof can be found in the Appendix.

Theorem 3.1.

Let $w^*$ be the optimal solution to (2) with $\tau^* = \|w^*\|_0$, and suppose $f$ satisfies Assumptions 1 and 2 with condition number $\kappa = L_s/\rho_s$. For an appropriately chosen stepsize $\gamma$, batch size, and sparsity level $\tau$ (relative to $\tau^*$), the following inequality holds for the Fed-HT:

$\mathbb{E}\|w^R - w^*\|_2^2 \le \alpha^R \|w^0 - w^*\|_2^2 + \beta$,

where the contraction factor $\alpha < 1$ and the statistical bias $\beta$ depend on $\gamma$, $K$, $\kappa$, the variance bound $\sigma^2$, the dissimilarity bound $B$, and the gradient of $f$ at $w^*$.

Note that if the sparse solution $w^*$ is sufficiently close to an unconstrained minimizer of $f$, then $\beta$ is small, so the first, exponential term on the right-hand side can be the dominating term, which approaches $0$ as $R$ goes to infinity.

Corollary 3.1.1.

If all the conditions in Theorem 3.1 hold, then for a given precision $\epsilon$, we need at most $R = O\big(\log(\|w^0 - w^*\|_2^2/\epsilon)/\log(1/\alpha)\big)$ rounds to obtain

$\mathbb{E}\|w^R - w^*\|_2^2 \le \epsilon + \beta$,

where $\beta$ is the statistical bias term from Theorem 3.1.

Corollary 3.1.1 indicates that, under proper conditions and with sufficiently many rounds, the estimation error of the Fed-HT is determined by the second term, the statistical bias term, which we denote as $\beta$. The term $\beta$ becomes small if $w^*$ is sufficiently close to an unconstrained minimizer of $f$, so it represents the sparsity-induced bias relative to the solution of the unconstrained optimization problem. The upper bound guarantees that the Fed-HT can approach $w^*$ arbitrarily closely up to a sparsity-induced bias, and the speed of approaching this biased solution is linear (or geometric) and determined by $\alpha$. In Theorem 3.1 and Corollary 3.1.1, $\alpha$ is closely related to the number of local updates $K$. The condition number satisfies $\kappa \ge 1$, and hence $\alpha < 1$. When $K$ is larger, $\alpha$ is smaller, and so is the number of rounds required to reach a target precision $\epsilon$. In other words, the Fed-HT converges faster with fewer communication rounds. However, the bias term $\beta$ increases when $K$ increases. Therefore, $K$ should be chosen to balance the convergence rate against the statistical bias.

We further investigate how the objective function $f(w^R)$ approaches the optimum $f(w^*)$ in the following corollary. The detailed proof can be found in the supplementary material (https://www.dropbox.com/sh/c75nni6uc5fzd70/AADpB6QoPR0sxPFqO_No-sXKa?dl=0).

Corollary 3.1.2.

If all the conditions in Theorem 3.1 hold, and the stepsize and batch size are chosen accordingly, then $\mathbb{E}[f(w^R)] - f(w^*)$ is bounded by a term that decays geometrically in $R$ plus a statistical bias term.

Because the local updates on each device are based on stochastic gradient descent with dense parameters, i.e., without the hard thresholding operator, $L$-smoothness and $\rho$-strong convexity are required, which are stronger requirements on $f_k$. Moreover, the bias terms involve the unrestricted gradient of $f$ at $w^*$, which means that the statistical bias terms in Theorem 3.1 and Corollary 3.1.2 scale with the full dimension $d$; this is suboptimal, compared with the results for traditional IHT methods, in terms of the dimension $d$. To address these drawbacks, we develop a new algorithm in the next section.

IV The FedIter-HT Algorithm

If we also apply the HT operator to each local update, we obtain the FedIter-HT algorithm, described in Algorithm 2. The local update on each device now performs multiple SGD-HT steps, which further reduces the communication cost because the model parameters sent back from the clients to the central server are also sparse. If a client has a communication bandwidth so limited that it cannot effectively transmit the full set of parameters, the FedIter-HT provides a good solution.

Input: The learning rate $\gamma$, the sparsity level $\tau$, and the number of clients $N$.
Initialize $w^0$ with $\|w^0\|_0 \le \tau$
for $r = 0$ to $R-1$ do
     for client $k = 1$ to $N$ in parallel do
          $w_k^{r,0} = w^r$
          for $t = 0$ to $K-1$ do
               Sample uniformly a batch $I_k^{r,t}$
               $g_k^{r,t} = \nabla f_{I_k^{r,t}}(w_k^{r,t})$
               $w_k^{r,t+1} = H_\tau\big(w_k^{r,t} - \gamma\, g_k^{r,t}\big)$
          end for
     end for
     Exact-Average: $w^{r+1} = H_\tau\big(\sum_{k=1}^N p_k\, w_k^{r,K}\big)$
end for
Algorithm 2 Federated Iterative Hard Thresholding (FedIter-HT)
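To highlight the difference from Algorithm 1, the sketch below, under the same simplifying assumptions and hypothetical names as the earlier Fed-HT sketch, applies the hard thresholding operator inside every local step as well as at the server.

import numpy as np

def hard_threshold(w, tau):
    """Keep the tau largest-magnitude entries of w and zero out the rest."""
    out = np.zeros_like(w)
    idx = np.argpartition(np.abs(w), -tau)[-tau:]
    out[idx] = w[idx]
    return out

def fediter_ht(client_data, d, tau, gamma=0.01, rounds=100, local_steps=5):
    """FedIter-HT sketch: HT after every local step, so client-to-server messages are sparse."""
    w = np.zeros(d)
    p = np.full(len(client_data), 1.0 / len(client_data))
    for _ in range(rounds):
        local_models = []
        for X, y in client_data:
            wk = w.copy()
            for _ in range(local_steps):
                grad = X.T @ (X @ wk - y) / len(y)
                wk = hard_threshold(wk - gamma * grad, tau)   # HT in every local update
            local_models.append(wk)
        w = hard_threshold(sum(pk * wk for pk, wk in zip(p, local_models)), tau)  # HT at the server
    return w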

We again examine the convergence of the FedIter-HT by developing an upper bound on the distance between the estimator $w^R$ and the optimal $w^*$, i.e., $\mathbb{E}\|w^R - w^*\|_2^2$, in the following theorem. The detailed proof can be found in the supplementary material.

Theorem 4.1.

Let $w^*$ be the optimal solution to (2) with $\tau^* = \|w^*\|_0$, and suppose $f$ satisfies Assumptions 1 and 2 with condition number $\kappa = L_s/\rho_s$. For an appropriately chosen stepsize $\gamma$, batch size, and sparsity level $\tau$ (relative to $\tau^*$), the following inequality holds for the FedIter-HT:

$\mathbb{E}\|w^R - w^*\|_2^2 \le \alpha_1^R \|w^0 - w^*\|_2^2 + \beta_1$,

where the contraction factor $\alpha_1 < 1$ and the statistical bias $\beta_1$ depend on $\gamma$, $K$, $\kappa$, $\sigma^2$, $B$, and the restricted gradient of $f$ at $w^*$.

The factor $\alpha_1$, compared with $\alpha$ in Theorem 3.1, is smaller if $\tau \gg \tau^*$, which means that the FedIter-HT converges faster than the Fed-HT when the sparsity level guessed beforehand is much larger than the true sparsity. Both $\alpha$ and $\alpha_1$ decrease when the number of internal iterations $K$ increases, but $\alpha_1$ decreases faster than $\alpha$ because $\alpha_1$ is smaller than $\alpha$. Thus, the FedIter-HT is more likely to benefit from increasing $K$ than the Fed-HT. The statistical bias term $\beta_1$ can be much smaller than $\beta$ in Theorem 3.1 because $\beta_1$ depends only on the norm of the gradient of $f$ at $w^*$ restricted to a support of size $s$ rather than the full dimension $d$. Because the norm of the gradient is a dominating term in $\beta$ and $\beta_1$, slightly increasing $K$ does not change the statistical bias terms much (when $K$ is not too large).

Using the results in Theorem 4.1, we can further derive Corollary 4.1.1 to specify the number of rounds required to achieve a given estimation precision.

Corollary 4.1.1.

If all the conditions in Theorem 4.1 hold, then for a given precision $\epsilon$, the FedIter-HT requires at most $R = O\big(\log(\|w^0 - w^*\|_2^2/\epsilon)/\log(1/\alpha_1)\big)$ rounds to obtain

$\mathbb{E}\|w^R - w^*\|_2^2 \le \epsilon + \beta_1$,

where $\beta_1$ is the statistical bias term of the FedIter-HT.

Because $\alpha_1 \le \alpha$, and because $\|\Pi_S(\nabla f(w^*))\|_2 \le \|\nabla f(w^*)\|_2$ and $s \ll d$ in high-dimensional statistical problems, the result in Corollary 4.1.1 gives a tighter bound than the one obtained in Corollary 3.1.1. Similarly, we also obtain a tighter upper bound for the convergence behavior of the objective function $f(w^R)$.

Corollary 4.1.2.

If all the conditions in Theorem 4.1 hold, and the stepsize and batch size are chosen accordingly, then $\mathbb{E}[f(w^R)] - f(w^*)$ is bounded by a term that decays geometrically in $R$ plus a statistical bias term that depends only on the restricted gradient of $f$ at $w^*$.

The theorem and corollaries developed in this section depend only on the $L_s$-restricted smoothness and the $\rho_s$-restricted strong convexity, where $s$ is on the order of $\tau + \tau^*$; these are the same conditions used in the analysis of existing IHT methods. Moreover, the bias terms depend only on the gradient of $f$ at $w^*$ restricted to a support $S$, which means that $\beta_1$ and the bias in Corollary 4.1.2 are $O(s)$, where $s$ is the size of the support $S$. Therefore, our results match the current best known upper bound for the statistical bias term obtained for traditional IHT methods.

IV-A Statistical analysis for M-estimators

Because of the good properties of the FedIter-HT, we also develop the theory of constrained M-estimators obtained from more concrete learning formulations. Although we focus on sparse linear regression and sparse logistic regression in this paper, our method can be used to analyze other statistical learning problems as well.

Sparse Linear Regression. We consider the linear regression problem in the high-dimensional regime:

$\min_{w}\ \sum_{k=1}^N p_k\, \frac{1}{2 n_k}\|y_k - X_k w\|_2^2 \quad \text{subject to}\ \|w\|_0 \le \tau,$

where $X_k \in \mathbb{R}^{n_k \times d}$ is the design matrix associated with client $k$. We further assume that the rows of $X_k$ are independently drawn from a sub-Gaussian distribution with parameter $\Sigma_k$, $y_k = X_k \bar{w} + \varepsilon_k \in \mathbb{R}^{n_k}$ denotes the response vector, where $\varepsilon_k$ is a noise vector following the normal distribution $N(0, \sigma^2 I)$, and $\bar{w}$ with $\|\bar{w}\|_0 \le \tau^*$ is the underlying sparse regression coefficient vector.

Corollary 4.1.3.

If all the conditions in Theorem 4.1 hold, then with sufficiently many samples on each client and a sufficiently large number of communication rounds $R$, the estimation error $\|w^R - \bar{w}\|_2^2$ is bounded by the statistical bias term induced by the restricted gradient of $f$ at $\bar{w}$, with high probability, where the probability statement involves a universal constant $c_1$.

Proof Sketch: First, we show that, if the local sample size is sufficiently large, $f_k$ is restricted $\rho_s$-strongly convex and restricted $L_s$-strongly smooth with high probability, where the constants depend on the sub-Gaussian parameter of the design. Secondly, the restricted gradient norm of $f$ at $\bar{w}$ is bounded with high probability by constants irrelevant to the model parameters. Let the number of rounds $R$ be sufficiently large so that the exponential term in Theorem 4.1 becomes sufficiently small. Gathering everything together and substituting into the statistical bias term yields the above bound with high probability. ∎

Sparse Logistic Regression. We consider the following optimization problem for logistic regression:

$\min_{w}\ \sum_{k=1}^N p_k\, \frac{1}{n_k}\sum_{i=1}^{n_k}\Big[\log\big(1 + \exp(x_{k,i}^\top w)\big) - y_{k,i}\, x_{k,i}^\top w\Big] \quad \text{subject to}\ \|w\|_0 \le \tau,$

where $x_{k,i}$ for $i \in [n_k]$ is a predictive vector drawn from a sub-Gaussian distribution associated with client $k$, each observation $y_{k,i}$ on client $k$ is drawn from the Bernoulli distribution with $\Pr(y_{k,i} = 1 \mid x_{k,i}) = 1/\big(1 + \exp(-x_{k,i}^\top \bar{w})\big)$, and $\bar{w}$ with $\|\bar{w}\|_0 \le \tau^*$ is the underlying true parameter that we want to recover.

Corollary 4.1.4.

If all the conditions in Theorem 4.1 hold, then with sufficiently many samples on each client and a sufficiently large number of communication rounds $R$, the estimation error $\|w^R - \bar{w}\|_2^2$ is bounded by the statistical bias term induced by the restricted gradient of $f$ at $\bar{w}$ with high probability, where the probability statement involves constants that do not depend on the model parameters.

Proof Sketch: The result for sparse logistic regression follows from an argument similar to that of Corollary 4.1.3, except that the restricted strong convexity and smoothness constants, and the bound on the restricted gradient at $\bar{w}$, hold with high probability under the sub-Gaussian design once the local sample size is sufficiently large, with constants irrelevant to the model parameters. ∎

V Experiments

We empirically evaluate our methods both in simulations and in the analysis of three real-world datasets (E2006-tfidf, RCV1, and MNIST; see Figures 1, 2 and Table II), which are downloaded from the LibSVM website (http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/), and compare them against a baseline method. The baseline is a standard Distributed IHT that communicates every local update to the central server, which then aggregates and broadcasts back to the clients (see the Supplementary for more detail). Specifically, the experiments for simulation I and the E2006-tfidf dataset are for sparse linear regression. In simulation II and for the RCV1 dataset, we solve sparse logistic regression problems. The last experiment uses the MNIST data in a multi-class softmax regression problem. The detailed loss functions for the different problems can be found in the Supplementary.

We use the Distributed-IHT as a baseline. Following the convention in the federated learning literature, we use the number of communication rounds to measure the communication cost. For a comprehensive comparison, we also report the number of iterations. For both the synthetic and real-world datasets, parameters such as the number of local iterations $K$ and the stepsize $\gamma$ are determined by the following criteria: the number of local iterations is searched over a small grid, and the stepsize for each algorithm is set by a grid search. All the algorithms are initialized with the same $w^0$. The sparsity level $\tau$ is 500 for the MNIST dataset and 200 for all the others.

Dataset        Samples    Dimension    Samples/device (mean)    Samples/device (stdev)
E2006-tfidf    3,308      150,360      33.8                     9.1
RCV1           20,242     47,236       202.4                    114.5
MNIST          60,000     784          600                      0
TABLE II: Statistics of the three real federated datasets.

V-A Simulations

To generate synthetic data, we follow a setup similar to that in [16]. In simulation I, for each device $k$, we generate samples $(x_{k,i}, y_{k,i})$ for $i \in [n_k]$ according to $y_{k,i} = x_{k,i}^\top w_k + \varepsilon_{k,i}$, where $w_k \in \mathbb{R}^d$ and $\varepsilon_{k,i}$ is Gaussian noise. The first 100 elements of $w_k$ are drawn IID from $N(u_k, 1)$ and the remaining elements of $w_k$ are zeros, with $u_k \sim N(0, \alpha)$; $x_{k,i} \sim N(v_k, \Sigma)$, where $\Sigma$ is a diagonal matrix with the $j$-th diagonal element equal to $j^{-1.2}$, and each element of the mean vector $v_k$ is drawn from $N(B_k, 1)$ with $B_k \sim N(0, \beta)$. Therefore, $\alpha$ controls how much the local models differ from each other, and $\beta$ controls how much the local on-device data differ from one device to another; in simulation I, both are set to nonzero values. The data generation procedure for simulation II is the same as that of simulation I, except that the responses are binary: for the $k$-th client, we set $y_{k,i} = 1$ for the samples corresponding to the top 100 values of $x_{k,i}^\top w_k$ over $i \in [n_k]$, and $y_{k,i} = 0$ otherwise. In simulation II, $\alpha$ and $\beta$ are again set to nonzero values.
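A minimal NumPy sketch of the simulation-I generator described above; the dimension, sample count, noise scale, and the values of alpha and beta below are placeholders of ours rather than the paper's settings.

import numpy as np

def make_device_data(d=500, n=100, alpha=1.0, beta=1.0, rng=None):
    """Generate one device's (X, y) for simulation I (sparse linear regression)."""
    rng = rng or np.random.default_rng()
    u_k = rng.normal(0.0, np.sqrt(alpha))               # device-specific mean for the model
    w_k = np.zeros(d)
    w_k[:100] = rng.normal(u_k, 1.0, size=100)          # first 100 entries nonzero, rest zero
    B_k = rng.normal(0.0, np.sqrt(beta))                # device-specific mean for the features
    v_k = rng.normal(B_k, 1.0, size=d)
    var = np.array([(j + 1) ** (-1.2) for j in range(d)])   # decaying per-coordinate variances
    X = v_k + rng.normal(size=(n, d)) * np.sqrt(var)
    y = X @ w_k + rng.normal(size=n)                    # unit-variance noise as a placeholder
    return X, y

clients = [make_device_data() for _ in range(20)]       # e.g., 20 heterogeneous devices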

Fig. 1: Visualization of labeling with K-means clustering for E2006

Fig. 2: Visualization of labeling with K-means clustering for RCV1

V-B Benchmark Datasets

Fig. 3: The objective function value vs. communication rounds for regression (a, b) and classification (c, d), and for the Fed-HT (a, c) and the FedIter-HT (b, d) with varying values of $K$ and of the stepsize/learning rate (lr) $\gamma$.
Fig. 4: Comparison of the different algorithms in terms of the objective function value vs. communication rounds (a, c) and vs. all internal iterations (b, d), for regression (a, b) and classification (c, d). Note that the Distributed-IHT is the baseline method that communicates every local update (so the number of rounds equals the number of iterations) and may represent the best-case scenario for reducing the objective value. We observe that in simulation I, the Fed-HT and the FedIter-HT need only 60 (40% fewer) and 20 (80% fewer) communication rounds, respectively, to reach the same objective value that the Distributed-IHT reaches in 100 rounds; in simulation II, the FedIter-HT needs 50 communication rounds (75% fewer) to achieve the same objective value that the Distributed-IHT reaches in 200 rounds. Although the proposed methods use more internal iterations in (b, d) than the Distributed-IHT, they are at least 1.6 times faster due to the communication efficiency, if we further assume that the clients can be anywhere around the world, for which the average network delay is about 150 ms whereas the local computation may take only 20 us.
Fig. 5: Comparison of the algorithms on different datasets in terms of the objective function value vs. communication rounds (top) and vs. all internal iterations (bottom). A lower bound of $f(w^*)$ is used as the reference. The FedIter-HT performs consistently better across all datasets, which confirms our theoretical results.

We use the E2006-tfidf dataset [11] to predict the volatility of stock returns based on SEC-mandated financial text reports, represented by tf-idf features. It was collected from thousands of publicly traded U.S. companies; data from different companies are inherently non-identical, and the privacy considerations for financial data call for federated learning. The RCV1 dataset [15] is used to predict the categories of newswire stories collected by Reuters, Ltd. The RCV1 can be naturally partitioned by news category for federated learning experiments, since readers may only be interested in one or two categories of news, and the model training process then mimics a personalized, privacy-preserving news recommender system in which reader history stays on a user's personal devices. For these two datasets, we first run K-means to obtain 10 clusters and use t-SNE to visualize the hidden structure found by the clustering (Figures 1 and 2). We use the digits as the labels for the MNIST images. Then, for all datasets, the data in each category are evenly partitioned into 20 parts, and each client randomly picks 2 categories and selects one part from each of those categories. Because the MNIST images are evenly collected for each digit, the partitioned decentralized MNIST data are balanced in terms of categories, whereas the other two datasets are unbalanced.
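The category-based partition just described can be sketched as follows; the function name make_noniid_partition and the greedy shard-grouping are our own framing of the procedure, not the paper's code.

import numpy as np

def make_noniid_partition(labels, n_parts=20, categories_per_client=2, rng=None):
    """Split each category into n_parts shards, then greedily group shards so that each
    client holds shards from categories_per_client distinct categories."""
    rng = rng or np.random.default_rng()
    shards = []                                            # list of (category, sample indices)
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        shards += [(c, chunk) for chunk in np.array_split(idx, n_parts)]
    clients, current, seen = [], [], set()
    for i in rng.permutation(len(shards)):                 # visit shards in random order
        c, chunk = shards[i]
        if c in seen:                                      # category already held: close this client
            clients.append(np.concatenate(current)); current, seen = [], set()
        current.append(chunk); seen.add(c)
        if len(seen) == categories_per_client:
            clients.append(np.concatenate(current)); current, seen = [], set()
    if current:
        clients.append(np.concatenate(current))
    return clients                                         # list of index arrays, one per client

# Example: clients = make_noniid_partition(y_train)  # 10 digit labels split into 20 parts each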

Figure 5 (top) shows that the proposed Fed-HT and FedIter-HT can significantly reduce the communication rounds required to achieve a given accuracy, though at the cost of running additional internal iterations, as shown in Figure 5 (bottom). In Figure 5 (a, c), we further observe that federated learning displays more randomness when approaching the optimal solution. This may be caused by the dissimilarity across clients. For instance, the three different algorithms in Figure 5 (c) reach the neighborhoods of different solutions at the end, where the proposed FedIter-HT obtains the lowest objective value. These behaviors may be worth exploring further in the future.

VI Conclusion

In this paper, we propose two communication-efficient IHT methods, the Fed-HT and the FedIter-HT, to deal with nonconvex sparse learning over decentralized non-IID data. The Fed-HT algorithm imposes a hard thresholding operator only at the central server, whereas the FedIter-HT applies this operator at every update, whether at the local clients or at the central server. Both methods reduce communication costs, in both the number of communication rounds and the communication load at each round. Theoretical analysis shows a linear convergence rate for both algorithms, where the Fed-HT has a better reduction factor in each iteration but the FedIter-HT has a better statistical estimation bias. Even with decentralized non-IID data, there is still a guarantee to recover the optimal sparse estimator, in a way similar to traditional IHT methods with IID data. Empirical results demonstrate that they outperform the standard distributed IHT in simulations and on benchmark datasets.

References

  • [1] S. Bahmani, B. Raj, and P. T. Boufounos (2013) Greedy sparsity-constrained optimization. Journal of Machine Learning Research 14 (Mar), pp. 807–841. Cited by: §I, §I.
  • [2] T. Blumensath and M. E. Davies (2009) Iterative hard thresholding for compressed sensing. Applied and computational harmonic analysis 27 (3), pp. 265–274. Cited by: §I.
  • [3] X. Chen, Z. Qi, and J. Xu (2020) HDIHT: a high-accuracy distributed iterative hard thresholding algorithm for compressed sensing. IEEE Access 8, pp. 49180–49186. Cited by: §I.
  • [4] D. L. Donoho et al. (2006) Compressed sensing. IEEE Transactions on information theory 52 (4), pp. 1289–1306. Cited by: §I.
  • [5] S. Foucart (2011) Hard thresholding pursuit: an algorithm for compressive sensing. SIAM Journal on Numerical Analysis 49 (6), pp. 2543–2563. Cited by: §I.
  • [6] T. H. Hsu, H. Qi, and M. Brown (2019) Measuring the effects of non-identical data distribution for federated visual classification. arXiv preprint arXiv:1909.06335. Cited by: §I.
  • [7] P. Jain, A. Tewari, and P. Kar (2014) On iterative hard thresholding methods for high-dimensional m-estimation. In Advances in Neural Information Processing Systems, pp. 685–693. Cited by: §I, §I.
  • [8] A. Jalali, C. C. Johnson, and P. K. Ravikumar (2011) On learning discrete graphical models using greedy methods. In Advances in Neural Information Processing Systems, pp. 1935–1943. Cited by: §I.
  • [9] S. P. Karimireddy, S. Kale, M. Mohri, S. J. Reddi, S. U. Stich, and A. T. Suresh (2019) SCAFFOLD: stochastic controlled averaging for on-device federated learning. arXiv preprint arXiv:1910.06378. Cited by: §I, §II.
  • [10] F. K. Kavvoura and J. P. Ioannidis (2008) Methods for meta-analysis in genetic association studies: a review of their potential and pitfalls. Human genetics 123 (1), pp. 1–14. Cited by: §I.
  • [11] S. Kogan, D. Levin, B. R. Routledge, J. S. Sagi, and N. A. Smith (2009) Predicting risk from financial reports with regression. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 272–280. Cited by: §V-B.
  • [12] J. Konečnỳ, H. B. McMahan, D. Ramage, and P. Richtárik (2016) Federated optimization: distributed machine learning for on-device intelligence. arXiv preprint arXiv:1610.02527. Cited by: §I.
  • [13] J. Konečnỳ, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon (2016) Federated learning: strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492. Cited by: §I.
  • [14] Y. Lee, W. S. Jeong, and G. Yoon (2012) Smartphone-based mobile health monitoring. Telemedicine and e-Health 18 (8), pp. 585–590. Cited by: §I.
  • [15] D. D. Lewis, Y. Yang, T. G. Rose, and F. Li (2004) Rcv1: a new benchmark collection for text categorization research. Journal of machine learning research 5 (Apr), pp. 361–397. Cited by: §V-B.
  • [16] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith (2018) Federated optimization in heterogeneous networks. arXiv preprint arXiv:1812.06127. Cited by: §I, §II, §V-A.
  • [17] X. Li, R. Arora, H. Liu, J. Haupt, and T. Zhao (2016) Nonconvex sparse learning via stochastic optimization with progressive variance reduction. arXiv preprint arXiv:1605.02711. Cited by: Lemma 2.1.
  • [18] X. Li, T. Zhao, R. Arora, H. Liu, and J. Haupt (2016) Stochastic variance reduced optimization for nonconvex sparse learning. In International Conference on Machine Learning, pp. 917–925. Cited by: §I, §I.
  • [19] P. Loh and M. J. Wainwright (2015) Regularized m-estimators with nonconvexity: statistical and algorithmic theory for local optima. The Journal of Machine Learning Research 16 (1), pp. 559–616. Cited by: §B-F.
  • [20] S. G. Mallat and Z. Zhang (1993) Matching pursuits with time-frequency dictionaries. IEEE Transactions on signal processing 41 (12), pp. 3397–3415. Cited by: §I.
  • [21] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, et al. (2016) Communication-efficient learning of deep networks from decentralized data. arXiv preprint arXiv:1602.05629. Cited by: §I.
  • [22] B. K. Natarajan (1995) Sparse approximate solutions to linear systems. SIAM journal on computing 24 (2), pp. 227–234. Cited by: §I.
  • [23] D. Needell and J. A. Tropp (2009) CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Applied and computational harmonic analysis 26 (3), pp. 301–321. Cited by: §I.
  • [24] N. Nguyen, D. Needell, and T. Woolf (2017) Linear convergence of stochastic iterative greedy algorithms with sparse constraints. IEEE Transactions on Information Theory 63 (11), pp. 6869–6895. Cited by: §I, §I.
  • [25] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad (1993) Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. In Proceedings of 27th Asilomar conference on signals, systems and computers, pp. 40–44. Cited by: §I.
  • [26] S. Patterson, Y. C. Eldar, and I. Keidar (2014) Distributed compressed sensing for static and time-varying networks. IEEE Transactions on Signal Processing 62 (19), pp. 4931–4946. Cited by: §I.
  • [27] Z. Qin, J. Fan, Y. Liu, Y. Gao, and G. Y. Li (2018) Sparse representation for wireless communications: a compressive sensing approach. IEEE Signal Processing Magazine 35 (3), pp. 40–58. Cited by: §I.
  • [28] S. Reddi, Z. Charles, M. Zaheer, Z. Garrett, K. Rush, J. Konečnỳ, S. Kumar, and H. B. McMahan (2020) Adaptive federated optimization. arXiv preprint arXiv:2003.00295. Cited by: §I.
  • [29] J. A. Tropp and A. C. Gilbert (2007) Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on information theory 53 (12), pp. 4655–4666. Cited by: §I.
  • [30] D. Wahlsten, P. Metten, T. J. Phillips, S. L. Boehm, S. Burkhart-Kasch, J. Dorow, S. Doerksen, C. Downing, J. Fogarty, K. Rodd-Henricks, et al. (2003) Different data from different labs: lessons from studies of gene–environment interaction. Journal of neurobiology 54 (1), pp. 283–311. Cited by: §I.
  • [31] J. Wang, M. Kolar, N. Srebro, and T. Zhang (2017) Efficient distributed learning with sparsity. In International Conference on Machine Learning, pp. 3636–3645. Cited by: §I.
  • [32] L. Wang and Q. Gu (2019) Differentially private iterative gradient hard thresholding for sparse learning. In 28th International Joint Conference on Artificial Intelligence. Cited by: §B-E.
  • [33] P. Zhou, X. Yuan, and J. Feng (2018) Efficient stochastic gradient hard thresholding. In Advances in Neural Information Processing Systems, pp. 1988–1997. Cited by: §I, §I.

Appendix A Distributed IHT Algorithm

Input: Learning rate $\gamma$, number of workers $N$.
Initialize $w^0$ with $\|w^0\|_0 \le \tau$
for $t = 0$ to $T-1$ do
     for worker $k = 1$ to $N$ in parallel do
          Receive $w^t$ from the central server
          Calculate an unbiased stochastic gradient direction $g_k^t$ on worker $k$
          Locally update: $w_k^{t+1} = w^t - \gamma\, g_k^t$
          Send $w_k^{t+1}$ to the central server
     end for
     Receive all local updates and average on the remote server: $w^{t+1} = H_\tau\big(\frac{1}{N}\sum_{k=1}^N w_k^{t+1}\big)$
end for
Algorithm 3 Distributed-IHT

Appendix B More Experiment Details

In more detail, the experiments for simulation I and the real E2006-tfidf dataset use sparse linear regression,

$\min_w \sum_{k=1}^N p_k\, \frac{1}{2 n_k}\|y_k - X_k w\|_2^2 \quad \text{s.t.}\ \|w\|_0 \le \tau.$

The experiments for simulation II and the real RCV1 data use sparse logistic regression,

$\min_w \sum_{k=1}^N p_k\, \frac{1}{n_k}\sum_{i=1}^{n_k}\Big[\log\big(1 + \exp(x_{k,i}^\top w)\big) - y_{k,i}\, x_{k,i}^\top w\Big] \quad \text{s.t.}\ \|w\|_0 \le \tau.$

The last experiment uses the MNIST data with a multi-class softmax regression problem as follows:

$\min_W \sum_{k=1}^N p_k\, \frac{-1}{n_k}\sum_{i=1}^{n_k}\sum_{c=1}^{C} \mathbb{1}\{y_{k,i} = c\}\, \log\frac{\exp(w_c^\top x_{k,i})}{\sum_{j=1}^{C}\exp(w_j^\top x_{k,i})} \quad \text{s.t.}\ \|W\|_0 \le \tau,$

where $W = [w_1, \ldots, w_C]$.
