I Introduction
Federated learning is a privacy-preserving learning framework for large-scale machine learning on edge computing devices, and solves the data-decentralized optimization problem:

(1) $\min_{w} f(w) := \sum_{i=1}^{N} p_i F_i(w), \quad F_i(w) = \mathbb{E}_{z \sim \mathcal{D}_i}\left[\ell(w; z)\right],$

where $F_i$ is the loss function of the $i$-th client (or device) with weight $p_i \geq 0$, $\sum_{i=1}^{N} p_i = 1$, $\mathcal{D}_i$ is the distribution of data located locally on the $i$-th client, and $N$ is the total number of clients. Federated learning enables numerous clients to coordinately train a model parameterized by $w$, while keeping their own data local, rather than sharing them with the central server. Due to its high communication cost, mini-batch stochastic gradient descent (SGD) has not been the method of choice for federated learning. The FedAvg algorithm was proposed in
[21]; it significantly reduces the communication cost by running multiple local SGD steps and has become the de facto federated learning method. Later, the client-drift problem was observed for FedAvg [6, 9, 28], and the FedProx algorithm was proposed [16], in which each client adds a proximal term to its local subproblem to mitigate this issue of FedAvg. In federated learning, many clients work collaboratively without sharing local private data with each other or with a central server [13, 12]. The clients can be heterogeneous edge computing devices such as phones, personal computers, network sensors, or other computing resources. During training, every device maintains its own raw data and only shares the updated model parameters with a central server. Compared with well-studied distributed learning, the federated learning setting is more practical in real life and has three major differences: 1) the communication between clients and/or a central server can be slow, which requires new sparse learning algorithms to be communication-efficient; 2) the distributions of training data over devices can be non-independent and non-identical (non-IID), i.e., for $i \neq j$, $\mathcal{D}_i$ and $\mathcal{D}_j$ can be very different; 3) the devices are presumably unbalanced in their capability of curating data, which means that some clients may have more local data than others.
When sparse learning becomes distributed and uses data collected by the distributed devices, the local datasets can be too sensitive to share during the construction of a sparse inference model. For instance, meta-analyses may integrate genomic data from a large number of labs to identify (a sparse set of) genes contributing to the risk of a disease without sharing data across the labs [30, 10]. Smartphone-based healthcare systems may need to learn the most important mobile health indicators from a large number of users, but personal health data collected on the phone are private [14]. Because of the parameter sparsity, communication cost can be lower than learning with dense parameters. However, the SGD algorithm, widely used to train deep neural nets, may not be suitable because the stochastic gradients can be dense during the training process. Thus, communication efficiency is still the main challenge in deploying sparse learning. For example, the signal processing community has been hunting for more communication-efficient algorithms, due to the constraints on power and bandwidth of various sensors [27]. It is necessary and beneficial to examine the sparsity-constrained empirical risk minimization problem with decentralized data as follows:
(2) $\min_{w} f(w) := \sum_{i=1}^{N} p_i F_i(w) \quad \text{subject to} \quad \|w\|_0 \leq k,$

where $f$ and $F_i$ are defined as in (1), $\|w\|_0$ denotes the $\ell_0$ norm of a vector $w$, which counts the number of nonzero entries in $w$, and $k$ is the sparsity level pre-specified for $w$. Communication-efficient algorithms for solving (2) can be pivotal and generally useful in decentralized high-dimensional data analyses
[4, 29, 1, 8]. Even without the decentralized-data consideration, finding a solution to (2) is already NP-hard because of the nonconvexity and nonsmoothness of the cardinality constraint [22]. Extensive research has been done on nonconvex sparse learning when training data can be centralized. The methods largely fall into the regimes of either matching pursuit methods [20, 25, 23, 5] or iterative hard thresholding (IHT) methods [2, 7, 24]. Even though matching pursuit methods achieve remarkable success in minimizing quadratic loss functions (such as the $\ell_0$-constrained linear regression problem), they require finding an optimal solution over the identified support after hard thresholding at each iteration, which has no analytical solution for an arbitrary loss and can be time-consuming [1]. Hence, iterative gradient-based HT methods have gained significant interest and become popular for nonconvex sparse learning.

Iterative hard thresholding methods include the gradient descent HT (GDHT) [7], stochastic gradient descent HT (SGDHT) [24], hybrid stochastic gradient HT (HSGHT) [33], and stochastic variance reduced gradient HT (SVRGHT) [18] methods. These methods update the iterate as $w^{t+1} = \mathcal{H}_k(w^t - \eta g^t)$, where $\eta$ is the learning rate, $g^t$ can be the full gradient, a stochastic gradient, or a variance-reduced gradient at the $t$-th iteration, and $\mathcal{H}_k(\cdot)$ denotes the HT operator that preserves the top $k$ elements of its argument (in magnitude) and sets the other elements to $0$. All these centralized iterative HT algorithms can be extended to their distributed version, Distributed IHT (see the Supplementary for details), in which the central server aggregates (averages) the local parameter updates from the clients and broadcasts the latest model parameters to individual clients, whereas each client updates the parameters based on its local data and sends them back to the central server. The central server is also in charge of randomly partitioning the training data and distributing it to different clients. Existing theoretical analyses of gradient-based IHT methods can also be applied to Distributed IHT, but are not suitable for analyzing IHT in the federated learning setting with the above three differences. A distributed sparse learning algorithm was proposed in [31], which solves a relaxed $\ell_1$-norm regularized problem and thus introduces extra bias relative to (2). Even though variants of Distributed IHT, such as [26] and [3], have been proposed, they are communication-expensive and suffer from bandwidth limits, since information needs to be exchanged at each iteration. An asynchronous parallel SVRGHT for shared memory has also been proposed in [18], but it cannot be applied in our scenario. We hence propose federated HT algorithms, which enjoy lower communication costs.
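As a concrete illustration, the HT operator $\mathcal{H}_k$ and a single gradient-based HT step can be sketched in a few lines of NumPy. This is a minimal sketch: the function names and the toy quadratic loss are ours, not the paper's.

```python
import numpy as np

def hard_threshold(w, k):
    """H_k: keep the k largest-magnitude entries of w, zero out the rest."""
    out = np.zeros_like(w)
    top = np.argsort(np.abs(w))[-k:]   # indices of the top-k entries by magnitude
    out[top] = w[top]
    return out

def sgdht_step(w, grad, eta, k):
    """One SGD-HT step: gradient move, then projection onto k-sparse vectors."""
    return hard_threshold(w - eta * grad, k)

# Toy quadratic loss f(w) = 0.5 * ||w - w_star||^2 with a 2-sparse minimizer;
# its gradient at w is simply w - w_star.
w_star = np.array([0.0, 3.0, 0.0, -2.0, 0.0])
w = np.zeros(5)
for _ in range(50):
    w = sgdht_step(w, w - w_star, eta=0.5, k=2)
```

On this toy problem the iterates contract geometrically toward the 2-sparse minimizer while staying exactly $k$-sparse after every step.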
Our Main Contributions are summarized as follows.
(a) We develop two communication-efficient schemes for the federated HT method: the Federated Hard Thresholding (FedHT) algorithm, which applies the HT operator only at the central server, right before distributing the aggregated parameter to clients; and the Federated Iterative Hard Thresholding (FedIterHT) algorithm, which applies the HT operator to both the local updates and the central server aggregate. To our knowledge, this is the first attempt to apply HT algorithms in the federated learning setting.
(b) We provide the first set of theoretical results for the federated HT method, particularly for the FedHT and FedIterHT, under the condition of non-IID data. We prove that both algorithms enjoy a linear convergence rate and have strong guarantees for sparsity recovery.
In particular, Theorems 3.1 (for the FedHT) and 4.1 (for the FedIterHT) show that the estimation error between the algorithm iterate $w^T$ and the optimal $w^*$ is upper bounded as $\mathbb{E}\|w^T - w^*\| \leq \gamma^T \|w^0 - w^*\| + \frac{1-\gamma^T}{1-\gamma}\, c$, where $w^0$ is the initial guess of the solution, the convergence rate factor $\gamma < 1$ is related to the algorithm parameter $K$ (the number of SGD steps on each device before communication) and the closeness between the pre-specified sparsity level $k$ and the true sparsity $k^*$, and $c$ determines a statistical bias term that is related not only to $K$ but also to the gradient of $f$ at the sparse solution and a measure of the non-IIDness of the data across the devices. The theoretical results help us examine and compare our proposed algorithms. For instance, higher non-IIDness across clients causes a larger bias for both algorithms. More local iterations may decrease $\gamma$ but increase the statistical bias. The statistical bias induced by the FedIterHT in Theorem 4.1 matches the best known upper bound for traditional IHT methods [33]. Thus, for more concrete formulations of the sparse learning problem, such as sparse linear regression and sparse logistic regression, we also provide statistical analyses of their maximum likelihood estimators (M-estimators) when the FedIterHT is used to solve them.
(c) Extensive experiments in simulations and on real-life datasets demonstrate the effectiveness of the proposed algorithms over standard distributed learning. The experiments on real-life data also show that the extra noise introduced by decentralized non-IID data may actually help federated sparse learning converge to a better local optimizer.
II Preliminaries
$N$, $i$: the total number, the index of clients/devices
$p_i$: the weight of the loss function on client $i$
$T$, $t$: the total number, the index of communication rounds
$K$, $e$: the total number, the index of local iterations
$\nabla f(w)$: the full gradient
$\nabla f_{\mathcal{I}}(w)$: the stochastic gradient over the mini-batch $\mathcal{I}$
$\nabla f_{z_j^{(i)}}(w)$: the stochastic gradient over a training example indexed by $j$ on the $i$-th device
$\eta$: the step size/learning rate of the local update
$\mathbb{1}\{\cdot\}$: an indicator function
$\mathrm{supp}(w)$: the support of $w$, or the index set of nonzero elements in $w$
$w^*$: the optimal solution to (2)
$w_{t,e}^{(i)}$: the local parameter vector on device $i$ at the $e$-th iteration of the $t$-th round
$k$: the required sparsity level
$k^*$: the optimal sparsity level to (2), $k^* = \|w^*\|_0$
$P_{\Omega}$: the projector that keeps only the elements of a vector indexed by $\Omega$
$\mathbb{E}$, $\mathbb{E}_i$: the expectation over stochasticity across all clients, and of client $i$, respectively
We formalize our problem as (2), and give the notation, assumptions, and preparatory lemmas used in this paper. We denote vectors by lowercase letters, e.g., $w$; the $\ell_2$ norm and the $\ell_0$ norm of a vector are denoted by $\|w\|$ and $\|w\|_0$, respectively. The model parameters form a vector $w \in \mathbb{R}^d$. Let $\mathcal{O}(\cdot)$ represent the asymptotic upper bound, and $[N]$ be the integer set $\{1, \dots, N\}$. The support $\Omega_{t,e}^{(i)} = \mathrm{supp}(w_{t,e}^{(i)})$ is associated with the $e$-th iteration in the $t$-th round on device $i$. For simplicity, we omit indices throughout the paper when there is no ambiguity, and write $\Omega^* = \mathrm{supp}(w^*)$.
We use the same conditions employed in the theoretical analysis of other IHT methods by assuming that the objective function satisfies the following conditions:
Assumption 1.
We assume that the loss function $F_i$ on each device $i \in [N]$:

(a) is restricted strongly convex at sparsity level $s$ for a given $s$, i.e., there exists a constant $\rho_s^- > 0$ such that for all $w, w'$ with $\|w - w'\|_0 \leq s$, we have
$$F_i(w) - F_i(w') - \langle \nabla F_i(w'), w - w' \rangle \geq \frac{\rho_s^-}{2} \|w - w'\|^2;$$

(b) is restricted strongly smooth at sparsity level $s$ for a given $s$, i.e., there exists a constant $\rho_s^+ > 0$ such that for all $w, w'$ with $\|w - w'\|_0 \leq s$, we have
$$F_i(w) - F_i(w') - \langle \nabla F_i(w'), w - w' \rangle \leq \frac{\rho_s^+}{2} \|w - w'\|^2;$$

(c) has bounded stochastic gradient variance, i.e., $\mathbb{E}_{z \sim \mathcal{D}_i} \|\nabla f_z(w) - \nabla F_i(w)\|^2 \leq \sigma^2$.
Remark 1.
When $s = d$, the above assumption is no longer restricted to a support at a given sparsity level, and $F_i$ is actually strongly convex and strongly smooth.
Following the convention in federated learning [16, 9], we also assume that the dissimilarity between the gradients of the local functions and the global function is bounded as follows.

Assumption 2.
The functions $F_i$ ($i \in [N]$) are locally dissimilar, i.e., there exists a constant $B \geq 1$ such that
$$\mathbb{E}_i \|\nabla F_i(w)\|^2 \leq B^2 \|\nabla f(w)\|^2$$
for any $w$.
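The dissimilarity constant can be estimated empirically from a snapshot of client gradients. The sketch below assumes the FedProx-style form $\mathbb{E}_i \|\nabla F_i(w)\|^2 \leq B^2 \|\nabla f(w)\|^2$; the helper name `dissimilarity` is ours.

```python
import numpy as np

def dissimilarity(grads, weights):
    """Empirical dissimilarity: sqrt( sum_i p_i ||g_i||^2 / ||sum_i p_i g_i||^2 ).

    With identical (IID-like) client gradients the value is 1;
    heterogeneity across clients pushes it above 1.
    """
    g_global = sum(p * g for p, g in zip(weights, grads))      # aggregated gradient
    num = sum(p * float(np.dot(g, g)) for p, g in zip(weights, grads))
    den = float(np.dot(g_global, g_global))
    return (num / den) ** 0.5

# Identical gradients (IID-like) vs. heterogeneous gradients.
b_iid = dissimilarity([np.array([1.0, 0.0]), np.array([1.0, 0.0])], [0.5, 0.5])
b_het = dissimilarity([np.array([2.0, 0.0]), np.array([0.0, 1.0])], [0.5, 0.5])
```

The first call returns exactly $1$, matching the IID case; the second exceeds $1$, reflecting client drift.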
Based on the above assumptions, we have the following preparatory lemmas for our theorems.
Lemma 2.1 ([17]).
For any $w \in \mathbb{R}^d$ and any $w^*$ with $\|w^*\|_0 \leq k^* < k$, the hard thresholding operator satisfies
$$\|\mathcal{H}_k(w) - w^*\|^2 \leq \left(1 + \frac{2\sqrt{k^*}}{\sqrt{k - k^*}}\right) \|w - w^*\|^2.$$
Lemma 2.2.
If a differentiable convex function $F_i$ is restricted strongly smooth with parameter $\rho_s^+$ at sparsity level $s$, i.e., there exists a generic constant $\rho_s^+$ such that for any $w, w'$ with $\|w - w'\|_0 \leq s$,
$$F_i(w) \leq F_i(w') + \langle \nabla F_i(w'), w - w' \rangle + \frac{\rho_s^+}{2}\|w - w'\|^2,$$
then we have
$$\|\nabla F_i(w) - \nabla F_i(w')\|^2 \leq 2\rho_s^+ \left( F_i(w) - F_i(w') - \langle \nabla F_i(w'), w - w' \rangle \right).$$
This is also true for the global smoothness parameter $\rho^+$.
III The FedHT Algorithm
In this section, we first describe our new federated sparse learning framework via hard thresholding, the FedHT, and then discuss its convergence rate.
A high-level summary of the FedHT is given in Algorithm 1. The FedHT generates a sequence of sparse vectors $w^1, w^2, \dots, w^T$ from an initial sparse approximation $w^0$. At the $t$-th round, clients receive the global parameter $w^t$ from the central server, then run $K$ steps of mini-batch SGD based on local private data. In each step, client $i$ updates $w_{t,e+1}^{(i)} = w_{t,e}^{(i)} - \eta \nabla f_{\mathcal{I}}(w_{t,e}^{(i)})$ for $e = 0, \dots, K-1$. Clients send $w_{t,K}^{(i)}$, $i \in [N]$, back to the central server; the server then averages them to obtain a dense global parameter vector and applies the HT operator to obtain a sparse iterate $w^{t+1} = \mathcal{H}_k\big(\sum_{i=1}^{N} p_i w_{t,K}^{(i)}\big)$. Compared with the commonly used FedAvg, the FedHT largely reduces the communication cost because the central server broadcasts a sparse iterate at each of the rounds.
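One FedHT round can be simulated in a few lines, with each client represented by a gradient oracle. The function names and the toy quadratic client losses are our illustration, not the paper's implementation.

```python
import numpy as np

def hard_threshold(w, k):
    """Keep the k largest-magnitude entries of w; zero out the rest."""
    out = np.zeros_like(w)
    top = np.argsort(np.abs(w))[-k:]
    out[top] = w[top]
    return out

def fedht_round(w_global, client_grads, eta, local_steps, k, p):
    """One FedHT round: dense local SGD on each client, then average + HT at the server."""
    updates = []
    for grad_fn in client_grads:
        w = w_global.copy()
        for _ in range(local_steps):
            w = w - eta * grad_fn(w)          # local updates stay dense
        updates.append(w)
    w_avg = sum(pi * wi for pi, wi in zip(p, updates))
    return hard_threshold(w_avg, k)           # sparsify only at the server

# Two clients with shifted quadratic losses; the averaged minimizer is 2-sparse.
targets = [np.array([0.0, 2.0, 0.0, -1.0, 0.0]), np.array([0.0, 4.0, 0.0, -3.0, 0.0])]
grads = [lambda w, t=t: w - t for t in targets]
w = np.zeros(5)
for _ in range(100):
    w = fedht_round(w, grads, eta=0.3, local_steps=5, k=2, p=[0.5, 0.5])
```

On this toy problem the broadcast iterate is always $2$-sparse and converges to the average of the two client minimizers.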
The following theorem characterizes our theoretical analysis of the FedHT in terms of its parameter estimation accuracy for sparsity-constrained problems. Although this paper focuses on the cardinality constraint, the theoretical result is applicable to other sparsity constraints, such as a constraint based on matrix rank. The detailed proof can be found in the Appendix.
Theorem 3.1.
Let $w^*$ be the optimal solution to (2) with $k^* = \|w^*\|_0$, and suppose $f$ satisfies Assumptions 1 and 2, with condition number $\kappa = \rho_s^+/\rho_s^-$. With a suitably chosen step size $\eta$, batch size, and sparsity level $k$ sufficiently larger than $k^*$, the following inequality holds for the FedHT:
$$\mathbb{E}\|w^{t+1} - w^*\| \leq \gamma^{t+1} \|w^0 - w^*\| + \frac{1 - \gamma^{t+1}}{1 - \gamma}\, c,$$
where $\gamma < 1$ is a contraction factor determined by $\kappa$, $K$, and the ratio $k^*/k$, and $c$ is a statistical bias term depending on $\eta$, $K$, $\sigma^2$, $B$, and $\|\nabla f(w^*)\|$; the precise constants are given in the Appendix.
Note that if the sparse solution $w^*$ is sufficiently close to an unconstrained minimizer of $f$, then $\|\nabla f(w^*)\|$ is small, so the first exponential term on the right-hand side is the dominating term, which approaches $0$ as $t$ goes to infinity.
Corollary 3.1.1.
If all the conditions in Theorem 3.1 hold, then for a given precision $\epsilon$, we need at most $T = \mathcal{O}\left(\log\left(\|w^0 - w^*\|/\epsilon\right)\right)$ rounds to obtain
$$\mathbb{E}\|w^T - w^*\| \leq \epsilon + \frac{c}{1-\gamma},$$
where $\gamma$ and $c$ are as defined in Theorem 3.1.
Corollary 3.1.1 indicates that under proper conditions and with sufficiently many rounds, the estimation error of the FedHT is determined by the second term, the statistical bias term, which we denote as $I_1 = \frac{c}{1-\gamma}$. The term $\|\nabla f(w^*)\|$ can become small if $w^*$ is sufficiently close to an unconstrained minimizer of $f$, so $I_1$ represents the sparsity-induced bias relative to the solution of the unconstrained optimization problem. The upper bound guarantees that the FedHT can approach $w^*$ arbitrarily closely up to a sparsity-induced bias, and the speed of approaching the biased solution is linear (or geometric) and determined by $\gamma$. In Theorem 3.1 and Corollary 3.1.1, $\gamma$ is closely related to the number of local updates $K$. When $K$ is larger, $\gamma$ is smaller, and so is the number of rounds required for reaching a target $\epsilon$. In other words, the FedHT converges faster with fewer communication rounds. However, the bias term will increase when $K$ increases. Therefore, $K$ should be chosen to balance the convergence rate and the statistical bias.
We further investigate how the objective function $f$ approaches the optimum in the following corollary. The detailed proof can be found in the supplementary material¹.

¹Supplementary material: https://www.dropbox.com/sh/c75nni6uc5fzd70/AADpB6QoPR0sxPFqO_NosXKa?dl=0
Corollary 3.1.2.
If all the conditions in Theorem 3.1 hold, then with a suitable choice of $\eta$ and $K$, the expected objective gap $\mathbb{E} f(w^T) - f(w^*)$ decays geometrically in $T$ up to a statistical bias term; the precise statement is given in the supplementary material.
Because the local updates on each device are based on stochastic gradient descent with dense parameters, without the hard thresholding operator, (unrestricted) smoothness and strong convexity are required, which are stronger requirements on $F_i$. Moreover, the bias terms depend on the gradient norm $\|\nabla f(w^*)\|$ over the full support, which is $\mathcal{O}(\sqrt{d})$ in general and thus suboptimal in terms of the dimension $d$, compared with results for traditional IHT methods. To remove these drawbacks, we develop a new algorithm in the next section.
IV The FedIterHT Algorithm
If we apply the HT operator to each local update as well, we obtain the FedIterHT algorithm, described in Algorithm 2. The local update on each device performs multiple SGD-HT steps, which further reduces the communication cost because the model parameters sent back from the clients to the central server are also sparse. If a client has a communication bandwidth so small that it cannot effectively transmit the full set of parameters, the FedIterHT provides a good solution.
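The FedIterHT round differs from the FedHT only in that every local step ends with hard thresholding, so client-to-server messages are already $k$-sparse. An illustrative sketch (ours, not the authors' code):

```python
import numpy as np

def hard_threshold(w, k):
    """Keep the k largest-magnitude entries of w; zero out the rest."""
    out = np.zeros_like(w)
    top = np.argsort(np.abs(w))[-k:]
    out[top] = w[top]
    return out

def fediterht_round(w_global, client_grads, eta, local_steps, k, p):
    """One FedIterHT round: SGD-HT locally, so uploaded vectors are k-sparse."""
    updates = []
    for grad_fn in client_grads:
        w = w_global.copy()
        for _ in range(local_steps):
            w = hard_threshold(w - eta * grad_fn(w), k)  # HT after every local step
        updates.append(w)
    w_avg = sum(pi * wi for pi, wi in zip(p, updates))
    return hard_threshold(w_avg, k)                      # HT again after aggregation

# Two clients with shifted quadratic losses; the averaged minimizer is 2-sparse.
targets = [np.array([0.0, 2.0, 0.0, -1.0, 0.0]), np.array([0.0, 4.0, 0.0, -3.0, 0.0])]
grads = [lambda w, t=t: w - t for t in targets]
w = np.zeros(5)
for _ in range(100):
    w = fediterht_round(w, grads, eta=0.3, local_steps=5, k=2, p=[0.5, 0.5])
```

Here both the uploaded client vectors and the broadcast iterate are $2$-sparse throughout, while the iterates still converge to the averaged minimizer on this toy problem.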
We again examine the convergence of the FedIterHT by developing an upper bound on the distance between the estimate and the optimum $w^*$, i.e., $\mathbb{E}\|w^T - w^*\|$, in the following theorem. The detailed proof can be found in the supplementary material.
Theorem 4.1.
Let $w^*$ be the optimal solution to (2) with $k^* = \|w^*\|_0$, and suppose $f$ satisfies Assumptions 1 and 2, with condition number $\kappa = \rho_s^+/\rho_s^-$. With a suitably chosen step size $\eta$, batch size, and sparsity level $k$ sufficiently larger than $k^*$, the following inequality holds for the FedIterHT:
$$\mathbb{E}\|w^{t+1} - w^*\| \leq \tilde{\gamma}^{t+1} \|w^0 - w^*\| + \frac{1 - \tilde{\gamma}^{t+1}}{1 - \tilde{\gamma}}\, \tilde{c},$$
where $\tilde{\gamma} < 1$ is a contraction factor determined by $\kappa$, $K$, and the ratio $k^*/k$, and $\tilde{c}$ is a statistical bias term that depends on the gradient of $f$ at $w^*$ only through its restriction to a support of size $\mathcal{O}(k)$; the precise constants are given in the supplementary material.
The factor $\tilde{\gamma}$, compared with $\gamma$ in Theorem 3.1, is smaller if $k \gg k^*$, which means that the FedIterHT converges faster than the FedHT when the beforehand-guessed sparsity is much larger than the true sparsity. Both $\gamma$ and $\tilde{\gamma}$ decrease when the number of internal iterations $K$ increases, but $\tilde{\gamma}$ decreases faster than $\gamma$ because $\tilde{\gamma}$ is smaller than $\gamma$. Thus, the FedIterHT is more likely than the FedHT to benefit from increasing $K$. The statistical bias term $\tilde{c}$ can be much smaller than $c$ in Theorem 3.1 because $\tilde{c}$ only depends on the norm of $\nabla f(w^*)$ restricted to a support of size $\mathcal{O}(k)$. Because the norm of the gradient is a dominating term in $c$ and $\tilde{c}$, slightly increasing $k$ does not vary the statistical bias terms much (when $k \ll d$).
Using the results in Theorem 4.1, we can further derive Corollary 4.1.1 to specify the number of rounds required to achieve a given estimation precision.
Corollary 4.1.1.
If all the conditions in Theorem 4.1 hold, then for a given $\epsilon$, the FedIterHT requires at most $T = \mathcal{O}\left(\log\left(\|w^0 - w^*\|/\epsilon\right)\right)$ rounds to obtain
$$\mathbb{E}\|w^T - w^*\| \leq \epsilon + \frac{\tilde{c}}{1-\tilde{\gamma}},$$
where $\tilde{\gamma}$ and $\tilde{c}$ are as defined in Theorem 4.1.
Because $\tilde{\gamma} \leq \gamma$, and the restricted gradient norm is much smaller than the full gradient norm in high-dimensional statistical problems, the result in Corollary 4.1.1 gives a tighter bound than the one obtained in Corollary 3.1.1. Similarly, we also obtain a tighter upper bound for the convergence of the objective function $f$.

Corollary 4.1.2.
If all the conditions in Theorem 4.1 hold, then with a suitable choice of $\eta$ and $K$, the expected objective gap $\mathbb{E} f(w^T) - f(w^*)$ decays geometrically in $T$ up to a statistical bias term; the precise statement is given in the supplementary material.
The theorem and corollaries developed in this section only depend on restricted smoothness and restricted strong convexity at sparsity level $s = \mathcal{O}(k)$, which are the same conditions used in the analysis of existing IHT methods. Moreover, the bias terms depend on $\nabla f(w^*)$ only over a support of size $\mathcal{O}(k)$, and are thus $\mathcal{O}(\sqrt{k})$ rather than $\mathcal{O}(\sqrt{d})$. Therefore, our results match the currently best known upper bound for the statistical bias term, compared with the results for traditional IHT methods.
IV-A Statistical analysis for M-estimators
Because of the good properties of the FedIterHT, we also develop the theory of the constrained M-estimators obtained on more concrete learning formulations. Although we focus on sparse linear regression and sparse logistic regression in this paper, our method can be used to analyze other statistical learning problems as well.
Sparse Linear Regression. We consider the linear regression problem in the high-dimensional regime:
$$y^{(i)} = X^{(i)} \bar{w} + \epsilon^{(i)}, \quad i \in [N],$$
where $X^{(i)} \in \mathbb{R}^{n_i \times d}$ is a design matrix associated with client $i$. We further assume that the rows of $X^{(i)}$ are independently drawn from a sub-Gaussian distribution with parameter $\Sigma^{(i)}$, $y^{(i)} \in \mathbb{R}^{n_i}$ denotes the response vector, $\epsilon^{(i)}$ is a noise vector following a normal distribution $\mathcal{N}(0, \sigma_\epsilon^2 I)$, and $\bar{w}$ with $\|\bar{w}\|_0 \leq k^*$ is the underlying sparse regression coefficient vector.

Corollary 4.1.3.
If all the conditions in Theorem 4.1 hold, then with a sufficiently large number of communication rounds $T$, we have
$$\mathbb{E}\|w^T - \bar{w}\| \leq \mathcal{O}\!\left(\sqrt{\frac{k \log d}{n}}\right)$$
with probability at least $1 - C_1/d$, where $C_1$ is a universal constant and $n$ is the per-client sample size.

Proof Sketch: First, we can show that $F_i$ is restricted strongly convex and restricted strongly smooth with parameters $\rho_s^-$ and $\rho_s^+$, respectively, with high probability if the sample size satisfies $n = \Omega(s \log d)$. Second, $\|\nabla f(\bar{w})\|_\infty = \mathcal{O}(\sqrt{\log d / n})$ with high probability, with constants irrelevant to the model parameters. Let the number of rounds $T$ be sufficiently large so that the geometric term in Theorem 4.1 is sufficiently small. Gathering everything together and putting it into the statistical bias term yields the above bound with high probability. ∎
Sparse Logistic Regression. We consider the following optimization problem for logistic regression:
$$\min_{\|w\|_0 \leq k} f(w) = \sum_{i=1}^{N} p_i \frac{1}{n_i} \sum_{j=1}^{n_i} \log\left(1 + \exp\left(-y_j^{(i)} \langle x_j^{(i)}, w \rangle\right)\right),$$
where $x_j^{(i)}$, for $j \in [n_i]$, is a predictor vector drawn from a sub-Gaussian distribution associated with client $i$, each observation $y_j^{(i)}$ on client $i$ is drawn from a Bernoulli distribution with $\Pr\big(y_j^{(i)} = 1 \mid x_j^{(i)}\big) = 1/\big(1 + \exp(-\langle x_j^{(i)}, \bar{w} \rangle)\big)$, and $\bar{w}$ with $\|\bar{w}\|_0 \leq k^*$ is the underlying true parameter that we want to recover.

Corollary 4.1.4.
If all the conditions in Theorem 4.1 hold, then with a sufficiently large number of communication rounds $T$, we have
$$\mathbb{E}\|w^T - \bar{w}\| \leq \mathcal{O}\!\left(\sqrt{\frac{k \log d}{n}}\right)$$
with probability at least $1 - C_2/d$, where $C_2$ is a constant and $n$ is the per-client sample size.
Proof Sketch: The result for sparse logistic regression follows an argument similar to that of Corollary 4.1.3, except that the restricted strong convexity and smoothness parameters and the gradient bound are established for the logistic loss; each holds with high probability when $n = \Omega(s \log d)$, with constants irrelevant to the model parameters. ∎
V Experiments
We empirically evaluate our methods in both simulations and the analysis of three real-world datasets (E2006-tfidf, RCV1, and MNIST; see Figures 1 and 2 and Table II), downloaded from the LibSVM website (http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/), and compare them against a baseline method. The baseline is the standard Distributed IHT, which communicates every local update to the central server; the server then aggregates and broadcasts back to the clients (see the Supplementary for more detail). Specifically, the experiments for simulation I and the E2006-tfidf dataset are done with sparse linear regression. In simulation II and on the RCV1 dataset, we solve a sparse logistic regression problem. The last experiment uses the MNIST data in a multiclass softmax regression problem. The detailed loss functions for the different problems can be found in the Supplementary. Following the convention in the federated learning literature, we use the number of communication rounds to measure the communication cost. For a comprehensive comparison, we also include the number of iterations. For both the synthetic and real-world datasets, parameters such as the number of local iterations $K$ and the step size $\eta$ are determined as follows: $K$ is searched over a grid, and the step size for each algorithm is set by a grid search. All the algorithms start from the same initialization. The sparsity level is 500 for the MNIST dataset and 200 for all others.
Dataset       Samples   Dimension   Samples/device
                                    mean     stdev
E2006-tfidf   3,308     150,360     33.8     9.1
RCV1          20,242    47,236      202.4    114.5
MNIST         60,000    784         600      –
V-A Simulations
To generate synthetic data, we follow a setup similar to [16]. In simulation I, for each device $i$, we generate samples $(x_j^{(i)}, y_j^{(i)})$ for $j \in [n_i]$ according to $y_j^{(i)} = \langle x_j^{(i)}, w^{(i)} \rangle + \varepsilon_j^{(i)}$, where $w^{(i)} \in \mathbb{R}^d$ and $\varepsilon_j^{(i)}$ is observation noise. The first 100 elements of $w^{(i)}$ are IID drawn from $\mathcal{N}(u_i, 1)$ and the remaining elements are zeros, with $u_i \sim \mathcal{N}(0, \alpha)$. Each sample $x_j^{(i)} \sim \mathcal{N}(v_i, \Sigma)$, where $\Sigma$ is a diagonal matrix with the $j$-th diagonal element equal to $j^{-1.2}$, and each element of the mean vector $v_i$ is drawn from $\mathcal{N}(B_i, 1)$ with $B_i \sim \mathcal{N}(0, \beta)$. Therefore, $\alpha$ controls how much the local models differ from each other, and $\beta$ controls how much the local on-device data differ from one device to another; both are fixed constants in simulation I. The data generation procedure for simulation II is the same as in simulation I, except that the responses are converted to binary labels for logistic regression: for the $i$-th client, the labels corresponding to the top 100 responses are set to $1$ and the rest to $0$. The constants $\alpha$ and $\beta$ are set analogously in simulation II.
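The per-client generator can be sketched as below. This follows the FedProx-style synthetic(α, β) recipe described above; the helper name and the noise scale 0.1 are our assumptions, and α, β are treated as standard deviations.

```python
import numpy as np

def make_client_data(n, d, alpha, beta, rng):
    """Generate one client's (X, y) and its local 100-sparse model.

    alpha spreads the local models; beta spreads the local feature distributions.
    """
    u = rng.normal(0.0, alpha)                   # client-specific model-mean shift
    B = rng.normal(0.0, beta)                    # client-specific feature-mean shift
    w = np.zeros(d)
    w[:100] = rng.normal(u, 1.0, size=100)       # first 100 coordinates nonzero, rest zero
    v = rng.normal(B, 1.0, size=d)               # feature mean vector for this client
    cov_diag = np.arange(1, d + 1) ** (-1.2)     # diagonal covariance, j-th entry j^{-1.2}
    X = v + rng.normal(size=(n, d)) * np.sqrt(cov_diag)
    y = X @ w + 0.1 * rng.normal(size=n)         # noise scale 0.1 is our assumption
    return X, y, w

rng = np.random.default_rng(0)
X, y, w = make_client_data(50, 150, 1.0, 1.0, rng)
```

Drawing each client's $u_i$ and $B_i$ independently reproduces the intended heterogeneity: larger α or β yields more dissimilar local models or local feature distributions, respectively.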
V-B Benchmark Datasets
We use the E2006-tfidf dataset [11] to predict the volatility of stock returns based on SEC-mandated financial text reports, represented by TF-IDF features. It was collected from thousands of publicly traded U.S. companies, for which data from different companies are inherently non-identical, and the privacy consideration for financial data demands federated learning. The RCV1 dataset [15] is used to predict categories of newswire stories collected by Reuters, Ltd. The RCV1 can be naturally partitioned based on news category and used for federated learning experiments, since readers may only be interested in one or two categories of news; the model training process then mimics a personalized privacy-preserving news recommender system, for which reader history is located on a user's personal devices. For these two datasets, we first run K-means to obtain 10 clusters and use t-SNE to visualize the hidden structures found by the clustering method. We use the digits to label the MNIST images. Then, for all datasets, the data in each category are evenly partitioned into 20 parts, and each client randomly picks 2 categories and selects one part from each of the categories. Because the MNIST images are evenly collected for each digit, the partitioned decentralized MNIST data are balanced in terms of categories, whereas the other two datasets are unbalanced.
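The partition scheme above (shard each category into 20 parts, give each client one part from each of 2 randomly chosen categories) can be sketched as follows; `partition_non_iid` is a hypothetical helper, with a guard against exhausted classes.

```python
import random
from collections import defaultdict

def partition_non_iid(labels, n_clients, parts_per_class=20, classes_per_client=2, seed=0):
    """Shard each class into parts_per_class parts; each client receives one part
    from each of classes_per_client randomly chosen classes."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    shards = {}
    for y, idxs in by_class.items():
        rng.shuffle(idxs)
        size = len(idxs) // parts_per_class
        shards[y] = [idxs[j * size:(j + 1) * size] for j in range(parts_per_class)]
    clients = []
    for _ in range(n_clients):
        available = [y for y in sorted(shards) if shards[y]]  # classes with unused parts
        chosen = rng.sample(available, classes_per_client)
        client_idx = []
        for y in chosen:
            client_idx.extend(shards[y].pop())                # consume one part per class
        clients.append(client_idx)
    return clients

# 5 balanced classes of 200 samples each, partitioned across 10 clients.
labels = [i % 5 for i in range(1000)]
clients = partition_non_iid(labels, 10)
```

Each client ends up with samples from exactly two classes, which reproduces the label-skewed non-IID setting used in the experiments.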
Figure 5 (top) shows that the proposed FedHT and FedIterHT significantly reduce the number of communication rounds required to achieve a given accuracy, though at the cost of running additional internal iterations, as shown in Figure 5 (bottom). In Figure 5 (a, c), we further observe that federated learning displays more randomness when approaching the optimal solution. This may be caused by the dissimilarity across clients. For instance, the three different algorithms in Figure 5 (c) reach the neighborhoods of different solutions, where the proposed FedIterHT obtains the lowest objective value. These behaviors may be worth further exploration in the future.
VI Conclusion
In this paper, we propose two communication-efficient IHT methods, the FedHT and the FedIterHT, to deal with nonconvex sparse learning with decentralized non-IID data. The FedHT algorithm imposes a hard thresholding operator at the central server, whereas the FedIterHT applies this operator at every update, whether at the local clients or at the central server. Both methods reduce communication costs, in terms of both the number of communication rounds and the communication load at each round. Theoretical analysis shows a linear convergence rate for both algorithms, where the FedHT has a better reduction factor in each iteration but the FedIterHT has a better statistical estimation bias. Even with decentralized non-IID data, recovery of the optimal sparse estimator is still guaranteed, analogously to traditional IHT methods with IID data. Empirical results demonstrate that both methods outperform the standard Distributed IHT in simulations and on benchmark datasets.
References
 [1] (2013) Greedy sparsity-constrained optimization. Journal of Machine Learning Research 14 (Mar), pp. 807–841.
 [2] (2009) Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis 27 (3), pp. 265–274.
 [3] (2020) HD-IHT: a high-accuracy distributed iterative hard thresholding algorithm for compressed sensing. IEEE Access 8, pp. 49180–49186.
 [4] (2006) Compressed sensing. IEEE Transactions on Information Theory 52 (4), pp. 1289–1306.
 [5] (2011) Hard thresholding pursuit: an algorithm for compressive sensing. SIAM Journal on Numerical Analysis 49 (6), pp. 2543–2563.
 [6] (2019) Measuring the effects of non-identical data distribution for federated visual classification. arXiv preprint arXiv:1909.06335.
 [7] (2014) On iterative hard thresholding methods for high-dimensional M-estimation. In Advances in Neural Information Processing Systems, pp. 685–693.
 [8] (2011) On learning discrete graphical models using greedy methods. In Advances in Neural Information Processing Systems, pp. 1935–1943.
 [9] (2019) SCAFFOLD: stochastic controlled averaging for on-device federated learning. arXiv preprint arXiv:1910.06378.
 [10] (2008) Methods for meta-analysis in genetic association studies: a review of their potential and pitfalls. Human Genetics 123 (1), pp. 1–14.
 [11] (2009) Predicting risk from financial reports with regression. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 272–280.
 [12] (2016) Federated optimization: distributed machine learning for on-device intelligence. arXiv preprint arXiv:1610.02527.
 [13] (2016) Federated learning: strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492.
 [14] (2012) Smartphone-based mobile health monitoring. Telemedicine and e-Health 18 (8), pp. 585–590.
 [15] (2004) RCV1: a new benchmark collection for text categorization research. Journal of Machine Learning Research 5 (Apr), pp. 361–397.
 [16] (2018) Federated optimization in heterogeneous networks. arXiv preprint arXiv:1812.06127.
 [17] (2016) Nonconvex sparse learning via stochastic optimization with progressive variance reduction. arXiv preprint arXiv:1605.02711.
 [18] (2016) Stochastic variance reduced optimization for nonconvex sparse learning. In International Conference on Machine Learning, pp. 917–925.
 [19] (2015) Regularized M-estimators with nonconvexity: statistical and algorithmic theory for local optima. The Journal of Machine Learning Research 16 (1), pp. 559–616.
 [20] (1993) Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing 41 (12), pp. 3397–3415.
 [21] (2016) Communication-efficient learning of deep networks from decentralized data. arXiv preprint arXiv:1602.05629.
 [22] (1995) Sparse approximate solutions to linear systems. SIAM Journal on Computing 24 (2), pp. 227–234.
 [23] (2009) CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis 26 (3), pp. 301–321.
 [24] (2017) Linear convergence of stochastic iterative greedy algorithms with sparse constraints. IEEE Transactions on Information Theory 63 (11), pp. 6869–6895.
 [25] (1993) Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. In Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, pp. 40–44.
 [26] (2014) Distributed compressed sensing for static and time-varying networks. IEEE Transactions on Signal Processing 62 (19), pp. 4931–4946.
 [27] (2018) Sparse representation for wireless communications: a compressive sensing approach. IEEE Signal Processing Magazine 35 (3), pp. 40–58.
 [28] (2020) Adaptive federated optimization. arXiv preprint arXiv:2003.00295.
 [29] (2007) Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on Information Theory 53 (12), pp. 4655–4666.
 [30] (2003) Different data from different labs: lessons from studies of gene–environment interaction. Journal of Neurobiology 54 (1), pp. 283–311.
 [31] (2017) Efficient distributed learning with sparsity. In International Conference on Machine Learning, pp. 3636–3645.
 [32] (2019) Differentially private iterative gradient hard thresholding for sparse learning. In 28th International Joint Conference on Artificial Intelligence.
 [33] (2018) Efficient stochastic gradient hard thresholding. In Advances in Neural Information Processing Systems, pp. 1988–1997.
Appendix A Distributed IHT Algorithm
Appendix B More Experiment Details
In more detail, the experiments for simulation I and the real E2006-tfidf dataset are done with sparse linear regression; the experiments for simulation II and the real RCV1 dataset are done with sparse logistic regression; and the last experiment uses the MNIST data with a multiclass softmax regression problem.