DONE: Distributed Newton-type Method for Federated Edge Learning

12/10/2020
by Canh T. Dinh, et al.

There is growing interest in applying distributed machine learning to edge computing, forming federated edge learning. Compared with conventional distributed machine learning in a datacenter, federated edge learning faces non-independent and identically distributed (non-i.i.d.) and heterogeneous data, and communication between edge workers, which may span distant locations over unstable wireless networks, is more costly than their local computation. In this work, we propose a distributed Newton-type algorithm (DONE) with a fast convergence rate for communication-efficient federated edge learning. First, for strongly convex and smooth loss functions, we show that DONE can approximate the Newton direction in a distributed manner by running the classical Richardson iteration on each edge worker. Second, we prove that DONE has linear-quadratic convergence and analyze its computation and communication complexities. Finally, experimental results with non-i.i.d. and heterogeneous data show that DONE matches the performance of Newton's method. Notably, DONE requires considerably fewer communication iterations than distributed gradient descent and outperforms DANE, a state-of-the-art algorithm, in the case of non-quadratic loss functions.
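To illustrate the core idea mentioned in the abstract, the sketch below shows how the classical Richardson iteration can approximate a Newton direction d = H⁻¹g without explicitly inverting the Hessian H. This is only a single-machine toy sketch of the Richardson step, not the DONE algorithm itself (which runs such iterations across edge workers using their local data); the function name, step size choice, and toy problem are all assumptions for illustration.

```python
import numpy as np

def richardson_newton_direction(H, g, alpha, num_iters=200):
    """Approximate the Newton direction d = H^{-1} g via Richardson iteration:
        d_{k+1} = d_k + alpha * (g - H d_k).
    For a symmetric positive definite H (strongly convex loss), this converges
    whenever 0 < alpha < 2 / lambda_max(H).
    """
    d = np.zeros_like(g)
    for _ in range(num_iters):
        d = d + alpha * (g - H @ d)  # residual correction step
    return d

# Toy example: an SPD Hessian from a strongly convex quadratic (hypothetical data).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T + 5.0 * np.eye(5)   # strong convexity => H is SPD
g = rng.standard_normal(5)

alpha = 1.0 / np.linalg.eigvalsh(H).max()  # safe step size below 2/lambda_max
d = richardson_newton_direction(H, g, alpha)
```

Each iteration only requires a Hessian-vector product, which is why this scheme can be carried out locally by each worker and averaged, trading a few extra local computations for far fewer communication rounds than exchanging full Hessians.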


Related research

10/14/2021
Resource-constrained Federated Edge Learning with Heterogeneous Data: Formulation and Analysis
Efficient collaboration between collaborative machine learning and wirel...

05/13/2023
Network-GIANT: Fully distributed Newton-type optimization via harmonic Hessian consensus
This paper considers the problem of distributed multi-agent learning, wh...

09/11/2017
GIANT: Globally Improved Approximate Newton Method for Distributed Optimization
For distributed computing environments, we consider the canonical machin...

08/06/2019
On Convergence of Distributed Approximate Newton Methods: Globalization, Sharper Bounds and Beyond
The DANE algorithm is an approximate Newton method popularly used for co...

07/26/2021
Accelerated Gradient Descent Learning over Multiple Access Fading Channels
We consider a distributed learning problem in a wireless network, consis...

08/30/2020
SEEC: Semantic Vector Federation across Edge Computing Environments
Semantic vector embedding techniques have proven useful in learning sema...

02/14/2021
Communication-Efficient Distributed Optimization with Quantized Preconditioners
We investigate fast and communication-efficient algorithms for the class...
