1 Introduction
Many tasks in computer vision, natural language processing and recommendation systems require learning complex prediction rules from large datasets. As the scale of the datasets in these learning tasks continues to grow, it is crucial to utilize the power of distributed computing and storage. In such large-scale distributed systems, robustness and security issues have become a major concern. In particular, individual computing units—known as worker machines—may exhibit abnormal behavior due to crashes, faulty hardware, stalled computation, or unreliable communication channels. Security issues are only exacerbated in the so-called
Federated Learning setting, a modern distributed learning paradigm that is more decentralized, and that uses the data owners' devices (such as mobile phones and personal computers) as worker machines (McMahan and Ramage, 2017, Konečný et al., 2016). Such machines are often more unpredictable, and in particular may be susceptible to malicious and coordinated attacks. Due to the inherent unpredictability of this abnormal (sometimes adversarial) behavior, it is typically modeled as Byzantine failure (Lamport et al., 1982)
, meaning that some worker machines may behave completely arbitrarily and can send any message to the master machine that maintains and updates an estimate of the parameter vector to be learned. Byzantine failures can incur major degradation in learning performance. It is well-known that standard learning algorithms based on naive aggregation of the workers' messages can be arbitrarily skewed by a single Byzantine-faulty machine. Even when the messages from Byzantine machines take only moderate values—and hence are difficult to detect—and when the number of such machines is small, the performance loss can still be significant. We demonstrate such an example in our experiments in Section
7. In this paper, we aim to develop distributed statistical learning algorithms that are provably robust against Byzantine failures. While this objective is considered in a few recent works (Feng et al., 2014, Blanchard et al., 2017, Chen et al., 2017), a fundamental problem remains poorly understood, namely the optimal statistical performance of a robust learning algorithm. A learning scheme in which the master machine always outputs zero regardless of the workers' messages is certainly not affected by Byzantine failures, but it will not return anything statistically useful either. On the other hand, many standard distributed algorithms that achieve good statistical performance in the absence of Byzantine failures become completely unreliable otherwise. Therefore, a main goal of this work is to understand the following questions: what is the best achievable statistical performance while being Byzantine-robust, and what algorithms achieve this performance?
To formalize this question, we consider a standard statistical setting of empirical risk minimization (ERM). Here $nm$ data points are sampled independently from some distribution and distributed evenly among $m$ machines, $\alpha m$ of which are Byzantine. The goal is to learn a parametric model by minimizing some loss function defined by the data. In this statistical setting, one expects that the error in learning the parameter, measured in an appropriate metric, should decrease when the amount of data
becomes larger and the fraction $\alpha$ of Byzantine machines becomes smaller. In fact, we can show that, at least for strongly convex problems, no algorithm can achieve an error lower than $\widetilde{\Omega}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}}\big)$ regardless of communication costs;^{1} see Observation 1 in Section 6. (^{1}Throughout the paper, unless otherwise stated, $O(\cdot)$ and $\Omega(\cdot)$ hide universal multiplicative constants, while $\widetilde{O}(\cdot)$ and $\widetilde{\Omega}(\cdot)$ further hide terms that are independent of, or logarithmic in, the problem parameters.) Intuitively, the above error rate is the optimal rate that one should target: $\frac{1}{\sqrt{n}}$ is the effective standard deviation for each machine with $n$ data points, $\alpha$ is the bias effect of the Byzantine machines, and $\frac{1}{\sqrt{m}}$ is the averaging effect of the $m$ normal machines. When there are no or few Byzantine machines, we see the usual scaling $\frac{1}{\sqrt{nm}}$ with the total number of data points; when some machines are Byzantine, their influence remains bounded, and moreover is proportional to $\alpha$. If an algorithm is guaranteed to attain this bound, we are assured that we do not sacrifice the quality of learning when trying to guard against Byzantine failures: we pay a price that is unavoidable, but otherwise we achieve the best possible statistical accuracy in the presence of Byzantine failures.

Another important consideration for us is communication efficiency. As communication between machines is costly, one cannot simply send all data to the master machine. This constraint precludes direct application of standard robust learning algorithms (such as M-estimators (Huber, 2011)), which assume access to all data. Instead, a desirable algorithm should involve a small number of communication rounds as well as a small amount of data communicated per round. We consider a setting where in each round a worker or master machine can only communicate a vector of size $O(d)$, where $d$ is the dimension of the parameter to be learned. In this case, the total communication cost is proportional to the number of communication rounds.
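To make the trade-off between the bias term and the averaging term concrete, here is a tiny numeric sketch of the two-term error floor $\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}}$ discussed above (constants and logarithmic factors dropped; the function name and the sample values of $\alpha$, $n$, $m$ are ours, for illustration only):

```python
import math

def error_floor(alpha, n, m):
    """Order-level error floor alpha/sqrt(n) + 1/sqrt(n*m):
    the first term is the Byzantine bias, the second the usual
    averaging over all n*m data points (constants dropped)."""
    return alpha / math.sqrt(n) + 1.0 / math.sqrt(n * m)

# With n = 1,000 points per machine and m = 100 machines, raising the
# Byzantine fraction from 0 to 0.1 roughly doubles the achievable error:
clean = error_floor(0.0, 1000, 100)
byzantine = error_floor(0.1, 1000, 100)
```

With these illustrative values, the averaging term $1/\sqrt{nm}$ and the bias term $\alpha/\sqrt{n}$ happen to be equal, so neither effect can be neglected.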
To summarize, we aim to develop distributed learning algorithms that simultaneously achieve two objectives:

Statistical optimality: attain an $\widetilde{O}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}}\big)$ statistical error rate.

Communication efficiency: $O(d)$ communication per round, with as few rounds as possible.
To the best of our knowledge, no existing algorithm achieves these two goals simultaneously. In particular, previous robust algorithms either have unclear or suboptimal statistical guarantees, or incur a high communication cost and hence are not applicable in a distributed setting—we discuss related work in more detail in Section 2.
1.1 Our Contributions
We propose two robust distributed gradient descent (GD) algorithms, one based on the coordinate-wise median, and the other on the coordinate-wise trimmed mean. We establish their statistical error rates for strongly convex, non-strongly convex, and non-convex population loss functions. For strongly convex losses, we show that these algorithms achieve order-optimal statistical rates under mild conditions. We further propose a median-based robust algorithm that only requires one communication round, and show that it also achieves the optimal rate for strongly convex quadratic losses. The statistical error rates of these three algorithms are summarized as follows.

Median-based GD: error rate $\widetilde{O}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}} + \frac{1}{n}\big)$, order-optimal for strongly convex loss if $n \gtrsim m$.

Trimmed-mean-based GD: error rate $\widetilde{O}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}}\big)$, order-optimal for strongly convex loss.

Median-based one-round algorithm: error rate $\widetilde{O}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}} + \frac{1}{n}\big)$, order-optimal for strongly convex quadratic loss if $n \gtrsim m$.
A major technical challenge in our statistical setting is the following: the data points are sampled once and fixed, and each worker machine has access to a fixed set of data throughout the learning process. This creates complicated probabilistic dependencies across the iterations of the algorithms. Worse yet, the Byzantine machines, which have complete knowledge of the data and the learning algorithm used, may create further unspecified probabilistic dependencies. We overcome this difficulty by proving certain uniform bounds via careful covering arguments. Furthermore, for the analysis of the median-based algorithms, we cannot simply adapt standard techniques (such as those in Minsker et al. (2015)), which can only show that the output of the master machine is as accurate as that of a single normal machine, leading to a suboptimal $\widetilde{O}\big(\frac{1}{\sqrt{n}}\big)$ rate even without Byzantine failures. Instead, we make use of a more delicate argument based on normal approximation and Berry-Esseen-type inequalities, which allows us to achieve the better $\frac{1}{\sqrt{nm}}$ scaling when $\alpha$ is small while remaining robust for a nonzero $\alpha$.
Above we have omitted the dependence on the parameter dimension $d$; see our main theorems for the precise results. In some settings the rates in these results may not have the optimal dependence on $d$. Understanding the fundamental limits of robust distributed learning in high dimensions, as well as developing algorithms with optimal dimension dependence, is an interesting and important future direction.
1.2 Notation
We denote vectors by boldface lowercase letters such as $\mathbf{v}$, and the elements in a vector are denoted by italic letters with subscripts, such as $v_k$. Matrices are denoted by boldface uppercase letters such as $\mathbf{A}$. For any positive integer $N$, we denote the set $\{1, 2, \ldots, N\}$ by $[N]$. For vectors, we denote the $\ell_2$ norm and the $\ell_\infty$ norm by $\|\cdot\|_2$ and $\|\cdot\|_\infty$, respectively. For matrices, we denote the operator norm and the Frobenius norm by $\|\cdot\|_2$ and $\|\cdot\|_F$, respectively. We denote by $\Phi(\cdot)$ the cumulative distribution function (CDF) of the standard Gaussian distribution. For any differentiable function $h(\cdot)$, we denote its partial derivative with respect to the $k$-th argument by $\partial_k h$.

2 Related Work
Outlier-robust estimation in non-distributed settings is a classical topic in statistics (Huber, 2011). Particularly relevant to us is the so-called median-of-means method, in which one partitions the data into subsets, computes an estimate from each subset, and finally takes the median of these estimates. This idea is studied in Nemirovskii et al. (1983), Jerrum et al. (1986), Alon et al. (1999), Lerasle and Oliveira (2011), Minsker et al. (2015), and has been applied to bandit and least squares regression problems (Bubeck et al., 2013, Lugosi and Mendelson, 2016, Kogler and Traxler, 2016) as well as problems involving heavy-tailed distributions (Hsu and Sabato, 2016, Lugosi and Mendelson, 2017). In a very recent work, Minsker and Strawn (2017) provide a new analysis of median-of-means using a normal approximation. We borrow some techniques from that paper, but need to address a significantly harder problem: 1) we deal with the Byzantine setting with arbitrary/adversarial outliers, which is not considered in their paper; 2) we study iterative algorithms for general multi-dimensional problems with convex and non-convex losses, while they mainly focus on one-shot algorithms for mean-estimation-type problems.
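To fix ideas, the median-of-means scheme described above can be sketched in a few lines for a scalar mean (an illustration of ours, not code from any of the cited works; the group count and sample values are arbitrary):

```python
import numpy as np

def median_of_means(samples, k):
    """Median-of-means estimate of E[X]: partition the samples into k
    groups, average each group, and return the median of the k group
    means.  A few wildly corrupted samples can ruin at most a few group
    means, which the median then ignores."""
    groups = np.array_split(np.asarray(samples, dtype=float), k)
    return float(np.median([g.mean() for g in groups]))
```

A single huge outlier corrupts only the one group mean containing it, so the median of the group means is unaffected, whereas a plain sample mean would be skewed arbitrarily.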
The median-of-means method is used in the context of Byzantine-robust distributed learning in two recent papers. In particular, the work of Feng et al. (2014) considers a simple one-shot application of median-of-means, and only proves a suboptimal error rate, as mentioned. The work of Chen et al. (2017) considers only strongly convex losses, and seeks to circumvent the above issue by grouping the worker machines into mini-batches; however, their rate still falls short of being optimal, and in particular their algorithm fails even when there is only one Byzantine machine in each mini-batch.
Other methods have been proposed for Byzantinerobust distributed learning and optimization; e.g., Su and Vaidya (2016a, b). These works consider optimizing fixed functions and do not provide guarantees on statistical error rates. Most relevant is the work by Blanchard et al. (2017)
, who propose to aggregate the gradients from worker machines using a robust procedure. Their optimization setting—which is at the level of stochastic gradient descent and assumes unlimited, independent access to a strong stochastic gradient oracle—is fundamentally different from ours; in particular, they do not provide a characterization of the statistical errors given a fixed number of data points.
Communication efficiency has been studied extensively in non-Byzantine distributed settings (McMahan et al., 2016, Yin et al., 2017). An important class of algorithms is based on one-round aggregation methods (Zhang et al., 2012, 2015, Rosenblatt and Nadler, 2016). More sophisticated algorithms have been proposed in order to achieve better accuracy than the one-round approach while maintaining lower communication costs; examples include DANE (Shamir et al., 2014), DiSCO (Zhang and Lin, 2015), distributed SVRG (Lee et al., 2015) and their variants (Reddi et al., 2016, Wang et al., 2017). Developing Byzantine-robust versions of these algorithms is an interesting future direction.
For outlier-robust estimation in non-distributed settings, much progress has been made recently in terms of improved performance in high-dimensional problems (Diakonikolas et al., 2016, Lai et al., 2016, Bhatia et al., 2015) as well as developing list-decodable and semi-verified learning schemes when a majority of the data points are adversarial (Charikar et al., 2017). These results are not directly applicable to our distributed setting with general loss functions, but it is nevertheless an interesting future problem to investigate their potential extension to our problem.
3 Problem Setup
In this section, we formally set up our problem and introduce a few concepts key to our algorithm design and analysis. Suppose that training data points are sampled from some unknown distribution $\mathcal{D}$ on the sample space $\mathcal{Z}$. Let $f(w; z)$ be a loss function of a parameter vector $w \in \mathcal{W} \subseteq \mathbb{R}^d$ associated with the data point $z$, where $\mathcal{W}$ is the parameter space, and let $F(w) := \mathbb{E}_{z \sim \mathcal{D}}[f(w; z)]$ be the corresponding population loss function. Our goal is to learn a model defined by the parameter that minimizes the population loss:

$w^* = \arg\min_{w \in \mathcal{W}} F(w).$    (1)
The parameter space $\mathcal{W}$ is assumed to be convex and compact with diameter $D$, i.e., $\|w - w'\|_2 \le D$ for all $w, w' \in \mathcal{W}$. We consider a distributed computation model with one master machine and $m$ worker machines. Each worker machine stores $n$ data points, each of which is sampled independently from $\mathcal{D}$. Denote by $z^{i,j}$ the $j$-th data point on the $i$-th worker machine, and by $F_i(w) := \frac{1}{n}\sum_{j=1}^{n} f(w; z^{i,j})$ the empirical risk function for the $i$-th worker. We assume that an $\alpha$ fraction of the worker machines are Byzantine, and the remaining $1 - \alpha$ fraction are normal. We index the set of worker machines by $[m]$, and denote the set of Byzantine machines by $\mathcal{B} \subset [m]$ (thus $|\mathcal{B}| = \alpha m$). The master machine communicates with the worker machines using some predefined protocol. The Byzantine machines need not obey this protocol and can send arbitrary messages to the master; in particular, they may have complete knowledge of the system and learning algorithms, and can collude with each other.
We introduce the coordinate-wise median and trimmed mean operations, which serve as building blocks for our algorithms.
Definition 1 (Coordinate-wise median).
For vectors $x^1, x^2, \ldots, x^m \in \mathbb{R}^d$, the coordinate-wise median $g := \mathrm{med}\{x^1, x^2, \ldots, x^m\}$ is a vector with its $k$-th coordinate being $g_k = \mathrm{med}\{x^1_k, x^2_k, \ldots, x^m_k\}$ for each $k \in [d]$, where $\mathrm{med}$ is the usual (one-dimensional) median.
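A minimal NumPy sketch of this definition (our own illustration, not part of the paper's algorithm specification):

```python
import numpy as np

def coordinatewise_median(vectors):
    """Coordinate-wise median (Definition 1): the k-th coordinate of
    the output is the one-dimensional median of the k-th coordinates
    of the input vectors."""
    return np.median(np.stack(vectors), axis=0)
```

Note that one wildly corrupted vector moves each output coordinate by at most one order statistic, unlike the coordinate-wise mean.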
Definition 2 (Coordinate-wise trimmed mean).
For $\beta \in [0, \frac{1}{2})$ and vectors $x^1, x^2, \ldots, x^m \in \mathbb{R}^d$, the coordinate-wise $\beta$-trimmed mean $g := \mathrm{trmean}_{\beta}\{x^1, x^2, \ldots, x^m\}$ is a vector with its $k$-th coordinate being $g_k = \frac{1}{(1-2\beta)m}\sum_{x \in U_k} x$ for each $k \in [d]$. Here $U_k$ is a subset of $\{x^1_k, x^2_k, \ldots, x^m_k\}$ obtained by removing the largest $\beta$ fraction and the smallest $\beta$ fraction of its elements.
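A corresponding sketch of the trimmed mean (ours; the simple `int(beta * m)` rounding of the trimmed count is an implementation choice, not specified by the definition):

```python
import numpy as np

def coordinatewise_trimmed_mean(vectors, beta):
    """Coordinate-wise beta-trimmed mean (Definition 2): for each
    coordinate, discard the largest and the smallest beta-fraction of
    the m values and average the remaining ones."""
    x = np.sort(np.stack(vectors), axis=0)  # sort each coordinate independently
    m = x.shape[0]
    k = int(beta * m)                       # number of values trimmed from each end
    return x[k:m - k].mean(axis=0)
```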
For the analysis, we need several standard definitions concerning random variables/vectors.
Definition 3 (Variance of random vectors).
For a random vector $x$, we define its variance as $\mathrm{Var}(x) := \mathbb{E}\big[\|x - \mathbb{E}[x]\|_2^2\big]$.
Definition 4 (Absolute skewness).
For a one-dimensional random variable $X$, define its absolute skewness^{2} as $\gamma(X) := \mathbb{E}\big[|X - \mathbb{E}X|^3\big] / \mathrm{Var}(X)^{3/2}$. (^{2}Note the difference from the usual skewness $\mathbb{E}\big[(X - \mathbb{E}X)^3\big] / \mathrm{Var}(X)^{3/2}$.) For a $d$-dimensional random vector $x$, we define its absolute skewness as the vector of the absolute skewness of each coordinate of $x$, i.e., $\gamma(x) := [\gamma(x_1), \gamma(x_2), \ldots, \gamma(x_d)]^\top$.
Definition 5 (Sub-exponential random variables).
A random variable $X$ with $\mathbb{E}[X] = \mu$ is called $v$-sub-exponential if $\mathbb{E}\big[e^{\lambda(X-\mu)}\big] \le e^{\frac{1}{2} v^2 \lambda^2}$ for all $|\lambda| \le \frac{1}{v}$.
Finally, we need several standard concepts from convex analysis regarding a differentiable function $h : \mathcal{W} \to \mathbb{R}$.
Definition 6 (Lipschitz).
$h$ is $L$-Lipschitz if $|h(w) - h(w')| \le L\|w - w'\|_2$ for all $w, w' \in \mathcal{W}$.
Definition 7 (Smoothness).
$h$ is $L'$-smooth if $\|\nabla h(w) - \nabla h(w')\|_2 \le L'\|w - w'\|_2$ for all $w, w' \in \mathcal{W}$.
Definition 8 (Strong convexity).
$h$ is $\lambda$-strongly convex if $h(w') \ge h(w) + \langle \nabla h(w), w' - w \rangle + \frac{\lambda}{2}\|w' - w\|_2^2$ for all $w, w' \in \mathcal{W}$.
4 Robust Distributed Gradient Descent
We describe two robust distributed gradient descent algorithms, one based on the coordinate-wise median and the other on the coordinate-wise trimmed mean. These two algorithms are formally given in Algorithm 1 as Option I and Option II, respectively, where the symbol $*$ represents an arbitrary vector.
In each parallel iteration of the algorithms, the master machine broadcasts the current model parameter to all worker machines. The normal worker machines compute the gradients of their local loss functions and then send the gradients back to the master machine. The Byzantine machines may send any messages of their choice. The master machine then performs a gradient descent update on the model parameter with step-size $\eta$, using either the coordinate-wise median or the trimmed mean of the received gradients. The Euclidean projection ensures that the model parameter stays in the parameter space $\mathcal{W}$.
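The iteration just described can be sketched as a serial simulation (a simplified illustration: the projection step is omitted, and the toy problem, worker behaviors, and all names below are ours, not the paper's experimental setup):

```python
import numpy as np

def robust_gd(workers, w0, eta, num_iters, aggregate):
    """Each round the master broadcasts w, collects one message per
    worker (a local gradient from normal workers, anything at all from
    Byzantine ones), robustly aggregates the messages, and takes a
    gradient step.  The Euclidean projection is omitted for brevity."""
    w = np.asarray(w0, dtype=float)
    for _ in range(num_iters):
        msgs = [send(w) for send in workers]
        w = w - eta * aggregate(msgs)
    return w

# Toy instance: estimate a mean.  Normal workers report the gradient of
# 0.5 * (w - 1)^2; one Byzantine worker always reports a huge value.
normal = [lambda w: w - 1.0 for _ in range(4)]
byzantine = [lambda w: np.array([1e6])]
median_agg = lambda msgs: np.median(np.stack(msgs), axis=0)

w_hat = robust_gd(normal + byzantine, [0.0], eta=0.5, num_iters=60,
                  aggregate=median_agg)
```

With the median aggregator the Byzantine message is simply an extreme order statistic that never becomes the median, so the iterates contract toward the true minimizer; replacing `median_agg` with a plain mean would let the single bad worker dominate every update.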
Below we provide statistical guarantees on the error rates of these algorithms, and compare their performance. Throughout, we assume that each loss function $f(\cdot; z)$ and the population loss function $F(\cdot)$ are smooth:
Assumption 1 (Smoothness of and ).
For any $z \in \mathcal{Z}$, the partial derivative of $f(\cdot; z)$ with respect to the $k$-th coordinate of its first argument, denoted by $\partial_k f(\cdot; z)$, is $L_k$-Lipschitz for each $k \in [d]$, and the function $f(\cdot; z)$ is $L$-smooth. Let $\widehat{L} := \big(\sum_{k=1}^{d} L_k^2\big)^{1/2}$. Also assume that the population loss function $F(\cdot)$ is $L_F$-smooth.
It is easy to see that $L_F \le L \le \widehat{L}$. When the dimension $d$ is high, the quantity $\widehat{L}$ may be large. However, we will soon see that $\widehat{L}$ only appears in the logarithmic factors in our bounds and thus does not have a significant impact.
4.1 Guarantees for Median-based Gradient Descent
We first consider our median-based algorithm, namely Algorithm 1 with Option I. We impose the assumptions that the gradient of the loss function has bounded variance, and that each coordinate of the gradient has bounded absolute skewness:
Assumption 2 (Bounded variance of gradient).
For any $w \in \mathcal{W}$, $\mathrm{Var}(\nabla f(w; z)) \le V^2$.
Assumption 3 (Bounded skewness of gradient).
For any $w \in \mathcal{W}$, $\|\gamma(\nabla f(w; z))\|_\infty \le S$.
These assumptions are satisfied in many learning problems with small values of $V$ and $S$. Below we provide a concrete example in terms of a linear regression problem.
Proposition 1.
Suppose that each data point $z = (x, y)$ is generated by $y = x^\top w^* + \xi$ with some $w^* \in \mathcal{W}$. Assume that the elements of the feature vector $x$ are independent and uniformly distributed on a bounded symmetric support, and that the noise $\xi$ is independent of $x$. With the quadratic loss function $f(w; (x, y)) = \frac{1}{2}(x^\top w - y)^2$, Assumptions 2 and 3 are satisfied with finite $V$ and $S$.

We prove Proposition 1 in Appendix A.1. In this example, the upper bound on $V$ depends on the dimension $d$ and the diameter $D$ of the parameter space. If the diameter is a constant, we have $V = O(\sqrt{d})$. Moreover, the gradient skewness $S$ is bounded by a universal constant regardless of the size of the parameter space. In Appendix A.2, we provide another example showing that when the features in $x$ are i.i.d. Gaussian distributed, the coordinate-wise skewness can also be upper bounded by a universal constant.
We now state our main technical results on the median-based algorithm, namely statistical error guarantees for strongly convex, non-strongly convex, and smooth non-convex population loss functions $F(\cdot)$. In the first two cases with a convex $F(\cdot)$, we assume that $w^*$, the minimizer of $F(\cdot)$ in $\mathcal{W}$, is also the minimizer of $F(\cdot)$ in $\mathbb{R}^d$, i.e., $\nabla F(w^*) = \mathbf{0}$.
Strongly Convex Losses:
We first consider the case where the population loss function $F(\cdot)$ is $\lambda_F$-strongly convex. Note that we do not require strong convexity of the individual loss functions $f(\cdot; z)$.
Theorem 1.
Consider Option I in Algorithm 1. Suppose that Assumptions 1, 2, and 3 hold, $F(\cdot)$ is $\lambda_F$-strongly convex, and the fraction $\alpha$ of Byzantine machines satisfies
(2) 
for some $\epsilon > 0$. Choose step-size $\eta = 1/L_F$. Then, with high probability, after $T$ parallel iterations, we have

$\|w_T - w^*\|_2 \le \big(1 - \tfrac{\lambda_F}{L_F + \lambda_F}\big)^T \|w_0 - w^*\|_2 + \tfrac{2}{\lambda_F}\Delta,$

where
(3) 
and the factor $C_\epsilon$ is defined as
(4) 
with $\Phi^{-1}(\cdot)$ being the inverse of the cumulative distribution function $\Phi(\cdot)$ of the standard Gaussian distribution.
We prove Theorem 1 in Appendix B. In (3), we hide universal constants and higher-order terms, and the factor $C_\epsilon$ defined in (4) depends only on $\epsilon$; it is bounded by a constant whenever $\epsilon$ is bounded away from zero. Theorem 1, together with the elementary inequality $1 - x \le e^{-x}$, guarantees that after running $T$ parallel iterations, with $T$ logarithmic in the desired accuracy, with high probability we can obtain a solution with error $\widetilde{O}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}} + \frac{1}{n}\big)$.
Here we achieve an error rate, defined as the distance between the algorithm's output and the optimal solution $w^*$, of the form $\widetilde{O}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}} + \frac{1}{n}\big)$. In Section 6, we provide a lower bound showing that the error rate of any algorithm is $\widetilde{\Omega}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}}\big)$. Therefore the first two terms in the upper bound cannot be improved. The third term, $\frac{1}{n}$, is due to the dependence of the median on the skewness of the gradients. When each worker machine has a sufficient amount of data, more specifically $n \gtrsim m$, we achieve an order-optimal error rate up to logarithmic factors.
Non-strongly Convex Losses:
We next consider the case where the population risk function $F(\cdot)$ is convex, but not necessarily strongly convex. In this case, we need a mild technical assumption on the size of the parameter space $\mathcal{W}$.
Assumption 4 (Size of ).
The parameter space $\mathcal{W}$ contains a sufficiently large $\ell_2$ ball centered at $w^*$.
We then have the following result on the convergence rate in terms of the value of the population risk function.
Theorem 2.
Non-convex Losses:
When $F(\cdot)$ is non-convex but smooth, we need a somewhat different technical assumption on the size of $\mathcal{W}$.
Assumption 5 (Size of $\mathcal{W}$).
We assume that $\mathcal{W}$ contains a sufficiently large ball centered at the initial iterate $w_0$, with radius determined by $\Delta$, where $\Delta$ is defined as in (3).
We have the following guarantees on the rate of convergence to a critical point of the population loss $F(\cdot)$.
Theorem 3.
4.2 Guarantees for Trimmed-mean-based Gradient Descent
We next analyze the robust distributed gradient descent algorithm based on the coordinate-wise trimmed mean, namely Option II in Algorithm 1. Here we need stronger assumptions on the tail behavior of the partial derivatives of the loss functions; in particular, sub-exponentiality.
Assumption 6 (Sub-exponential gradients).
We assume that for all $k \in [d]$ and $w \in \mathcal{W}$, the partial derivative $\partial_k f(w; z)$ of $f(w; z)$ with respect to the $k$-th coordinate of $w$ is $v$-sub-exponential.
The sub-exponential property implies that all the moments of the derivatives are bounded. This is a stronger assumption than the bounded absolute skewness (hence bounded third moments) required by the median-based GD algorithm.
We use the same example as in Proposition 1 and show that the derivatives of the loss are indeed sub-exponential.
Proposition 2.
Consider the regression problem in Proposition 1. For all $k \in [d]$ and $w \in \mathcal{W}$, the partial derivative $\partial_k f(w; (x, y))$ is sub-exponential.
Proposition 2 is proved in Appendix A.3. We now proceed to establish the statistical guarantees of the trimmed-mean-based algorithm for different loss function classes. When the population loss $F(\cdot)$ is convex, we again assume that the minimizer $w^*$ of $F(\cdot)$ in $\mathcal{W}$ is also its minimizer in $\mathbb{R}^d$. The next three theorems are analogues of Theorems 1–3 for the median-based GD algorithm.
Strongly Convex Losses:
We have the following result.
Theorem 4.
We prove Theorem 4 in Appendix E. In (5), we hide universal constants and higher-order terms. By running $T$ parallel iterations, with $T$ logarithmic in the desired accuracy, we can obtain a solution satisfying $\|w_T - w^*\|_2 = \widetilde{O}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}}\big)$. Note that one needs to choose the trimming parameter $\beta$ to satisfy $\alpha \le \beta \le \frac{1}{2} - \epsilon$. If we set $\beta = c\alpha$ for some universal constant $c \ge 1$, we can achieve an order-optimal error rate $\widetilde{O}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}}\big)$.
Non-strongly Convex Losses:
Again imposing Assumption 4 on the size of $\mathcal{W}$, we have the following guarantee.
Theorem 5.
Non-convex Losses:
In this case, imposing a version of Assumption 5 on the size of $\mathcal{W}$, we have the following.
Theorem 6.
4.3 Comparisons
We compare the performance guarantees of the above two robust distributed GD algorithms. The trimmed-mean-based algorithm achieves the statistical error rate $\widetilde{O}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}}\big)$, which is order-optimal for strongly convex loss. In comparison, the rate of the median-based algorithm is $\widetilde{O}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}} + \frac{1}{n}\big)$, which has an additional $\frac{1}{n}$ term and is only order-optimal when $n \gtrsim m$. In particular, the trimmed-mean-based algorithm has better rates when each worker machine has a small local sample size; the rates are meaningful even in the extreme case $n = 1$. On the other hand, the median-based algorithm requires milder tail/moment assumptions on the loss derivatives (bounded skewness) than its trimmed-mean counterpart (sub-exponentiality). Finally, the trimmed-mean operation requires an additional parameter $\beta$, which must be an upper bound on the fraction $\alpha$ of Byzantine machines in order to guarantee robustness. Using an overly large $\beta$ may lead to a looser bound and suboptimal performance. In contrast, median-based GD does not require knowledge of $\alpha$. We summarize these observations in Table 1. We see that the two algorithms are complementary to each other, and our experimental results corroborate this point.
  median GD  trimmed mean GD

Statistical error rate  $\widetilde{O}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}} + \frac{1}{n}\big)$  $\widetilde{O}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}}\big)$
Distribution of $\partial_k f(w; z)$  Bounded skewness  Sub-exponential
$\alpha$ known?  No  Yes
5 Robust One-round Algorithm
As mentioned, in our distributed computing framework, the communication cost is proportional to the number of parallel iterations. The above two GD algorithms both require a number of iterations that depends on the desired accuracy. Can we further reduce the communication cost while keeping the algorithm Byzantine-robust and statistically optimal?
A natural candidate is the so-called one-round algorithm. Previous work has considered a standard one-round scheme where each local machine computes the empirical risk minimizer (ERM) using its local data, and the master machine receives all workers' ERMs and computes their average (Zhang et al., 2012). Clearly, a single Byzantine machine can arbitrarily skew the output of this algorithm. We instead consider a Byzantine-robust one-round algorithm. As detailed in Algorithm 2, we employ the coordinate-wise median operation to aggregate all the ERMs.
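The aggregation step of this one-round scheme is a single median over the received local solutions (a sketch of ours; the local ERM computation on each worker is omitted, and one local solution below is made adversarial for illustration):

```python
import numpy as np

def robust_one_round(local_minimizers):
    """One-round aggregation sketch: each worker sends the minimizer of
    its local empirical risk; the master replaces the usual average with
    the coordinate-wise median, so a few arbitrary vectors cannot drag
    the output away from the bulk of the honest solutions."""
    return np.median(np.stack(local_minimizers), axis=0)
```

The whole protocol uses one communication round of $O(d)$ per worker, in contrast to the iterative GD algorithms above.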
Our main result is a characterization of the error rate of Algorithm 2 in the presence of Byzantine failures. We are only able to establish such a guarantee when the loss functions are quadratic and $\mathcal{W} = \mathbb{R}^d$. However, one can implement this algorithm in problems with other loss functions.
Definition 9 (Quadratic loss function).
The loss function $f(w; z)$ is quadratic if it can be written as

$f(w; z) = \tfrac{1}{2} w^\top A w + b^\top w + c,$

where $z = (A, b, c)$ with $A \in \mathbb{R}^{d \times d}$, $b \in \mathbb{R}^d$, and $c \in \mathbb{R}$, and $A$, $b$, and $c$ are drawn from the distributions $\mathcal{D}_A$, $\mathcal{D}_b$, and $\mathcal{D}_c$, respectively.
Denote by $\bar{A}$, $\bar{b}$, and $\bar{c}$ the expectations of $A$, $b$, and $c$, respectively. Thus the population risk function takes the form $F(w) = \frac{1}{2} w^\top \bar{A} w + \bar{b}^\top w + \bar{c}$.
We need a technical assumption which guarantees that each normal worker machine has a unique ERM.
Assumption 7 (Strong convexity of $F_i$).
With probability 1, the empirical risk function $F_i(\cdot)$ on each normal machine is strongly convex.
Note that this assumption is imposed on $F_i(\cdot)$, rather than on the individual loss associated with a single data point. This assumption is satisfied, for example, when all the $f(\cdot; z)$'s are strongly convex, or in linear regression problems with the features drawn from some continuous distribution (e.g., isotropic Gaussian) and $n \ge d$. We have the following guarantee for the robust one-round algorithm.
Theorem 7.
Suppose that $\mathcal{W} = \mathbb{R}^d$, the loss function $f(\cdot; z)$ is convex and quadratic, $F(\cdot)$ is strongly convex, and Assumption 7 holds. Assume that $\alpha$ satisfies
for some $\epsilon > 0$, where the right-hand side of the condition is a quantity that depends on the problem parameters and is monotonically decreasing in $n$. Then, with high probability, the output of the robust one-round algorithm satisfies
where $C_\epsilon$ is defined as in (4) and
with $A$ and $b$ drawn from $\mathcal{D}_A$ and $\mathcal{D}_b$, respectively.
We prove Theorem 7 and provide an explicit expression for the error bound in Appendix F. In terms of the dependence on $\alpha$, $n$, and $m$, the robust one-round algorithm achieves the same error rate as the robust gradient descent algorithm based on the coordinate-wise median, i.e., $\widetilde{O}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}} + \frac{1}{n}\big)$, for quadratic problems. Again, this rate is optimal when $n \gtrsim m$. Therefore, at least for quadratic loss functions, the robust one-round algorithm has similar theoretical performance to the robust gradient descent algorithm, with significantly lower communication cost. Our experiments show that the one-round algorithm has good empirical performance for other losses as well.
6 Lower Bound
In this section, we provide a lower bound on the error rate for strongly convex losses, which implies that the $\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}}$ term is unimprovable. This lower bound is derived using a mean estimation problem, and is an extension of the lower bounds in the robust mean estimation literature such as Chen et al. (2015), Lai et al. (2016).
We consider the problem of estimating the mean of some random vector $z \sim \mathcal{D}$, which is equivalent to solving the following minimization problem:

$\min_{w \in \mathbb{R}^d} \; \mathbb{E}_{z \sim \mathcal{D}}\big[\tfrac{1}{2}\|w - z\|_2^2\big].$    (6)
Note that this is a special case of the general learning problem (1). We consider the same distributed setting as in Section 4, with a minor technical difference regarding the Byzantine machines. We assume that each of the $m$ worker machines is Byzantine with probability $\alpha$, independently of the others. The parameter $\alpha$ is therefore the expected fraction of Byzantine machines. This setting makes the analysis slightly easier, and we believe the result can be extended to the original setting.
In this setting we have the following lower bound.
Observation 1.
Consider the distributed mean estimation problem in (6) with Byzantine failure probability $\alpha$, and suppose that $\mathcal{D}$ is a Gaussian distribution with mean $\mu$ and covariance matrix $\sigma^2 I$. Then, any algorithm that computes an estimate $\widehat{\mu}$ of the mean from the data has a constant probability of incurring an error $\Omega\big(\frac{\alpha\sigma}{\sqrt{n}} + \frac{\sigma}{\sqrt{nm}}\big)$.
7 Experiments
We conduct experiments to show the effectiveness of the median and trimmed mean operations. Our experiments are implemented with TensorFlow (Abadi et al., 2016) on the Microsoft Azure system. We use the MNIST (LeCun et al., 1998) dataset and randomly partition the 60,000 training data points into $m$ subsamples of equal size, which represent the data on the $m$ worker machines.

In the first experiment, we compare the performance of distributed gradient descent algorithms in the following four settings: 1) $\alpha = 0$ (no Byzantine machines), using vanilla distributed gradient descent (aggregating the gradients by taking the mean), 2) $\alpha > 0$, using vanilla distributed gradient descent, 3) $\alpha > 0$, using the median-based algorithm, and 4) $\alpha > 0$, using the trimmed-mean-based algorithm. We generate the Byzantine machines in the following way: we replace every training label $y$ on these machines with $9 - y$, e.g., $0$ is replaced with $9$, $1$ is replaced with $8$, etc., and the Byzantine machines simply compute gradients based on these data. We also note that when generating the Byzantine machines, we do not simply add extreme values to the features or gradients; instead, the Byzantine machines send messages with moderate values to the master machine.
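The label corruption for the simulated Byzantine machines can be written as a one-liner (a sketch under our reading that labels are flipped as $y \mapsto 9 - y$; the function name is ours):

```python
def corrupt_labels(labels):
    """Deterministic label corruption for a simulated Byzantine machine:
    each MNIST label y in {0,...,9} is replaced by 9 - y, so 0 becomes 9,
    1 becomes 8, and so on.  A machine holding these relabeled data then
    computes ordinary gradients, which are moderate in magnitude and
    hence hard to detect, yet systematically misleading."""
    return [9 - y for y in labels]
```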
We train a multi-class logistic regression model and a convolutional neural network model using distributed gradient descent, and for each model we compare the test accuracies in the aforementioned four settings. For the convolutional neural network model, we use the stochastic version of the distributed gradient descent algorithm; more specifically, in every iteration, each worker machine computes the gradient using a subsample of its local data. We periodically check the test errors, and the convergence performances are shown in Figure 1. The final test accuracies are presented in Tables 2 and 3.

$\alpha$  0  0.05
Algorithm  mean  mean  median  trimmed mean
Test accuracy (%)  88.0  76.8  87.2  86.9

$\alpha$  0  0.1
Algorithm  mean  mean  median  trimmed mean
Test accuracy (%)  94.3  77.3  87.4  90.7
As we can see, in the adversarial settings the vanilla distributed gradient descent algorithm suffers from severe performance loss, while using the median and trimmed mean operations yields a significant improvement in test accuracy. This shows that these two operations can indeed defend against Byzantine failures.
In the second experiment, we compare the performance of distributed one-round algorithms in the following three settings: 1) $\alpha = 0$, mean aggregation, 2) $\alpha = 0.1$, mean aggregation, and 3) $\alpha = 0.1$, median aggregation. In this experiment, the training labels on the Byzantine machines are i.i.d. uniformly sampled from $\{0, 1, \ldots, 9\}$, and these machines train models using the faulty data. We choose the multi-class logistic regression model, and the test accuracies are presented in Table 4.
$\alpha$  0  0.1
Algorithm  mean  mean  median
Test accuracy (%)  91.8  83.7  89.0
As we can see, although the theoretical guarantee for the one-round algorithm is only proved for quadratic losses, in practice the median-based one-round algorithm still improves the test accuracy in problems with other loss functions, such as the logistic loss here.
8 Conclusions
In this paper, we study Byzantine-robust distributed statistical learning algorithms with a focus on statistical optimality. We analyze two robust distributed gradient descent algorithms, one based on the coordinate-wise median and the other on the coordinate-wise trimmed mean. We show that the trimmed-mean-based algorithm can achieve the order-optimal error rate $\widetilde{O}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}}\big)$, whereas the median-based algorithm can achieve $\widetilde{O}\big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}} + \frac{1}{n}\big)$ under weaker assumptions. We further study learning algorithms that have better communication efficiency. We propose a simple one-round algorithm that aggregates local solutions using the coordinate-wise median. We show that for strongly convex quadratic problems, this algorithm can achieve the same error rate as the median-based gradient descent algorithm. Our experiments validate the effectiveness of the median and trimmed mean operations in the adversarial setting.
Acknowledgements
D. Yin is partially supported by Berkeley DeepDrive Industry Consortium. Y. Chen is partially supported by NSF CRII award 1657420 and grant 1704828. K. Ramchandran is partially supported by NSF CIF award 1703678 and Gift award from Huawei. P. Bartlett is partially supported by NSF grant IIS1619362. Cloud computing resources are provided by a Microsoft Azure for Research award.
References

Abadi et al. (2016) M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. TensorFlow: A system for large-scale machine learning. In OSDI, volume 16, pages 265–283, 2016.
 Alon et al. (1999) N. Alon, Y. Matias, and M. Szegedy. The space complexity of approximating the frequency moments. Journal of Computer and System Sciences, 58(1):137–147, 1999.
 Berry (1941) A. C. Berry. The accuracy of the Gaussian approximation to the sum of independent variates. Transactions of the American Mathematical Society, 49(1):122–136, 1941.
 Bhatia et al. (2015) K. Bhatia, P. Jain, and P. Kar. Robust regression via hard thresholding. In Advances in Neural Information Processing Systems, pages 721–729, 2015.
 Blanchard et al. (2017) P. Blanchard, E. M. E. Mhamdi, R. Guerraoui, and J. Stainer. Byzantine-tolerant machine learning. arXiv preprint arXiv:1703.02757, 2017.
 Bubeck et al. (2013) S. Bubeck, N. Cesa-Bianchi, and G. Lugosi. Bandits with heavy tail. IEEE Transactions on Information Theory, 59(11):7711–7717, 2013.
 Bubeck et al. (2015) S. Bubeck et al. Convex optimization: Algorithms and complexity. Foundations and Trends® in Machine Learning, 8(3–4):231–357, 2015.

Charikar et al. (2017) M. Charikar, J. Steinhardt, and G. Valiant. Learning from untrusted data. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 47–60. ACM, 2017.
 Chen et al. (2015) M. Chen, C. Gao, and Z. Ren. Robust covariance matrix estimation via matrix depth. arXiv preprint arXiv:1506.00691, 2015.
 Chen et al. (2017) Y. Chen, L. Su, and J. Xu. Distributed statistical machine learning in adversarial settings: Byzantine gradient descent. arXiv preprint arXiv:1705.05491, 2017.
 Diakonikolas et al. (2016) I. Diakonikolas, G. Kamath, D. M. Kane, J. Li, A. Moitra, and A. Stewart. Robust estimators in high dimensions without the computational intractability. In Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on, pages 655–664. IEEE, 2016.
 Esseen (1942) C.-G. Esseen. On the Liapounoff limit of error in the theory of probability. Almqvist & Wiksell, 1942.
 Feng et al. (2014) J. Feng, H. Xu, and S. Mannor. Distributed robust learning. arXiv preprint arXiv:1409.5937, 2014.
 Hsu and Sabato (2016) D. Hsu and S. Sabato. Loss minimization and parameter estimation with heavy tails. The Journal of Machine Learning Research, 17(1):543–582, 2016.
 Huber (2011) P. J. Huber. Robust statistics. In International Encyclopedia of Statistical Science, pages 1248–1251. Springer, 2011.
 Jerrum et al. (1986) M. R. Jerrum, L. G. Valiant, and V. V. Vazirani. Random generation of combinatorial structures from a uniform distribution. Theoretical Computer Science, 43:169–188, 1986.
 Kogler and Traxler (2016) A. Kogler and P. Traxler. Efficient and robust median-of-means algorithms for location and regression. In Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), 2016 18th International Symposium on, pages 206–213. IEEE, 2016.
 Konečnỳ et al. (2016) J. Konečnỳ, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492, 2016.
 Lai et al. (2016) K. A. Lai, A. B. Rao, and S. Vempala. Agnostic estimation of mean and covariance. In Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on, pages 665–674. IEEE, 2016.
 Lamport et al. (1982) L. Lamport, R. Shostak, and M. Pease. The Byzantine generals problem. ACM Transactions on Programming Languages and Systems (TOPLAS), 4(3):382–401, 1982.
 LeCun et al. (1998) Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 Lee et al. (2015) J. D. Lee, Q. Lin, T. Ma, and T. Yang. Distributed stochastic variance reduced gradient methods and a lower bound for communication complexity. arXiv preprint arXiv:1507.07595, 2015.
 Lerasle and Oliveira (2011) M. Lerasle and R. I. Oliveira. Robust empirical mean estimators. arXiv preprint arXiv:1112.3914, 2011.
 Lugosi and Mendelson (2016) G. Lugosi and S. Mendelson. Risk minimization by median-of-means tournaments. arXiv preprint arXiv:1608.00757, 2016.
 Lugosi and Mendelson (2017) G. Lugosi and S. Mendelson. Sub-Gaussian estimators of the mean of a random vector. arXiv preprint arXiv:1702.00482, 2017.
 McMahan and Ramage (2017) B. McMahan and D. Ramage. Federated learning: Collaborative machine learning without centralized training data. https://research.googleblog.com/2017/04/federatedlearningcollaborative.html, 2017.
 McMahan et al. (2016) H. B. McMahan, E. Moore, D. Ramage, S. Hampson, et al. Communication-efficient learning of deep networks from decentralized data. arXiv preprint arXiv:1602.05629, 2016.
 Minsker and Strawn (2017) S. Minsker and N. Strawn. Distributed statistical estimation and rates of convergence in normal approximation. arXiv preprint arXiv:1704.02658, 2017.
 Minsker et al. (2015) S. Minsker et al. Geometric median and robust estimation in Banach spaces. Bernoulli, 21(4):2308–2335, 2015.
 Nemirovskii et al. (1983) A. Nemirovskii, D. B. Yudin, and E. R. Dawson. Problem complexity and method efficiency in optimization. 1983.
 Pinelis and Molzon (2016) I. Pinelis and R. Molzon. Optimalorder bounds on the rate of convergence to normality in the multivariate delta method. Electronic Journal of Statistics, 10(1):1001–1063, 2016.
 Reddi et al. (2016) S. J. Reddi, J. Konečnỳ, P. Richtárik, B. Póczós, and A. Smola. AIDE: Fast and communication efficient distributed optimization. arXiv preprint arXiv:1608.06879, 2016.
 Rosenblatt and Nadler (2016) J. D. Rosenblatt and B. Nadler. On the optimality of averaging in distributed statistical learning. Information and Inference: A Journal of the IMA, 5(4):379–404, 2016.
 Shamir et al. (2014) O. Shamir, N. Srebro, and T. Zhang. Communication-efficient distributed optimization using an approximate Newton-type method. In International Conference on Machine Learning, pages 1000–1008, 2014.
 Shevtsova (2014) I. Shevtsova. On the absolute constants in the Berry–Esseen-type inequalities. In Doklady Mathematics, volume 89, pages 378–381. Springer, 2014.
 Su and Vaidya (2016a) L. Su and N. H. Vaidya. Fault-tolerant multi-agent optimization: optimal iterative distributed algorithms. In Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing, pages 425–434. ACM, 2016a.
 Su and Vaidya (2016b) L. Su and N. H. Vaidya. Non-Bayesian learning in the presence of Byzantine agents. In International Symposium on Distributed Computing, pages 414–427. Springer, 2016b.
 Vershynin (2010) R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
 Wang et al. (2017) S. Wang, F. Roosta-Khorasani, P. Xu, and M. W. Mahoney. GIANT: Globally improved approximate Newton method for distributed optimization. arXiv preprint arXiv:1709.03528, 2017.

Wu (2017) Y. Wu. Lecture notes for ECE598YW: Information-theoretic methods for high-dimensional statistics. http://www.stat.yale.edu/~yw562/teaching/itstats.pdf, 2017.
 Yin et al. (2017) D. Yin, A. Pananjady, M. Lam, D. Papailiopoulos, K. Ramchandran, and P. Bartlett. Gradient diversity: a key ingredient for scalable distributed learning. arXiv preprint arXiv:1706.05699, 2017.
 Zhang and Lin (2015) Y. Zhang and X. Lin. DiSCO: Distributed optimization for self-concordant empirical loss. In International Conference on Machine Learning, pages 362–370, 2015.
 Zhang et al. (2012) Y. Zhang, M. J. Wainwright, and J. C. Duchi. Communicationefficient algorithms for statistical optimization. In Advances in Neural Information Processing Systems, pages 1502–1510, 2012.

Zhang et al. (2015) Y. Zhang, J. Duchi, and M. Wainwright. Divide and conquer kernel ridge regression: A distributed algorithm with minimax optimal rates. The Journal of Machine Learning Research, 16(1):3299–3340, 2015.
Appendix
Appendix A Variance, Skewness, and Subexponential Property
a.1 Proof of Proposition 1
We use the simplified notation . One can directly compute the gradients:
and thus
Define with its th element being . We now compute the variance and absolute skewness of .
We can see that
(7) 
Thus,
(8) 
which yields
Then we proceed to bound . By Jensen’s inequality, we know that
(9) 
We first find a lower bound for . According to (8), we know that
Define the following three quantities.
(10)  
(11)  
(12) 
By simple algebra, one can check that
(13) 
and thus
(14) 
Then, we find an upper bound on . According to (7) and Hölder’s inequality, we know that
(15) 
where in the last inequality we use the moments of Gaussian random variables. Then, we compute the first term in (15). By algebra, one can obtain
(16) 
Combining (15) and (16), we get