Distributed Newton Can Communicate Less and Resist Byzantine Workers

06/15/2020
by Avishek Ghosh, et al.

We develop a distributed second order optimization algorithm that is communication-efficient as well as robust against Byzantine failures of the worker machines. We propose COMRADE (COMmunication-efficient and Robust Approximate Distributed nEwton), an iterative second order algorithm in which the worker machines communicate only once per iteration with the center machine. This is in sharp contrast with state-of-the-art distributed second order algorithms such as GIANT [34] and DINGO [7], where the worker machines send (functions of) the local gradient and Hessian sequentially, and thus end up communicating twice with the center machine per iteration. Moreover, we show that the worker machines can further compress the local information before sending it to the center. In addition, we employ a simple norm based thresholding rule to filter out the Byzantine worker machines. We establish the linear-quadratic rate of convergence of COMRADE and show that the communication savings and Byzantine resilience incur only a small statistical error rate for arbitrary convex loss functions. To the best of our knowledge, this is the first work that addresses the issue of Byzantine resilience in second order distributed optimization. Furthermore, we validate our theoretical results with extensive experiments on synthetic and benchmark LIBSVM [5] data-sets and demonstrate convergence guarantees.
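To make the filtering step concrete, below is a minimal Python sketch of a norm based thresholding aggregator of the kind the abstract describes: each worker sends a single message per round (its local approximate Newton direction), and the center ranks the received directions by norm, discards the largest ones, and averages the rest. The function names, the `byzantine_fraction` parameter, and the update rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def filter_by_norm(directions, byzantine_fraction=0.2):
    """Norm-based thresholding: drop the updates with the largest norms.

    `directions` is a list of local approximate Newton directions, one per
    worker; `byzantine_fraction` is an assumed upper bound on the fraction
    of Byzantine workers (an illustrative parameter, not from the paper).
    """
    norms = np.array([np.linalg.norm(d) for d in directions])
    m = len(directions)
    keep = m - int(np.ceil(byzantine_fraction * m))  # number of workers retained
    kept_idx = np.argsort(norms)[:keep]              # smallest-norm updates survive
    return np.mean([directions[i] for i in kept_idx], axis=0)

def center_round(worker_directions, w, step_size=1.0):
    """One round at the center (sketch): filter the single per-worker
    messages by norm, average the survivors, and take a Newton-type step."""
    update = filter_by_norm(worker_directions)
    return w - step_size * update
```

Because each worker communicates only this one direction per round, the center never needs a separate gradient-aggregation round, which is the source of the communication savings claimed above.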
