Coded Computing for Low-Latency Federated Learning over Wireless Edge Networks

11/12/2020
by Saurav Prakash, et al.

Federated learning enables training a global model from data located at the client nodes, without sharing or moving client data to a centralized server. In a multi-access edge computing (MEC) network, federated learning converges slowly due to heterogeneity and stochastic fluctuations in compute power and communication link quality across clients. We propose a novel coded computing framework, CodedFedL, that injects structured coding redundancy into federated learning to mitigate stragglers and speed up the training procedure. CodedFedL enables coded computing for non-linear federated learning by efficiently exploiting a distributed kernel embedding via random Fourier features, which transforms the training task into computationally favourable distributed linear regression. Furthermore, clients generate local parity datasets by coding over their local datasets, while the server combines them to obtain the global parity dataset. The gradient from the global parity dataset compensates for straggling gradients during training, thereby speeding up convergence. For minimizing the epoch deadline time at the MEC server, we provide a tractable approach for finding the amount of coding redundancy and the number of local data points that a client processes during training, by exploiting the statistical properties of compute and communication delays. We also characterize the leakage in data privacy when clients share their local parity datasets with the server. We analyze the convergence rate and iteration complexity of CodedFedL under simplifying assumptions, treating CodedFedL as a stochastic gradient descent algorithm. Finally, we conduct numerical experiments using practical network parameters and benchmark datasets, in which CodedFedL speeds up the overall training time by up to 15x compared to the benchmark schemes.
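The two core ingredients of the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: it shows (i) a random Fourier feature (RFF) embedding that approximates an RBF kernel so the learning task reduces to linear regression, and (ii) one client forming a local parity dataset by coding over its embedded data. The Gaussian encoding matrix G, the dataset sizes, and all variable names are illustrative assumptions; the paper designs the encoding and the amount of redundancy from the delay statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_embed(X, num_features, gamma, rng):
    """Random Fourier features approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2) (Rahimi & Recht, 2007),
    so that Z @ Z.T ~ K and kernel learning reduces to linear
    regression on the embedded features Z."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

# Toy local dataset for one client (sizes are illustrative).
n, d, s = 120, 10, 64                 # local points, raw dim, RFF dim
X_local = rng.normal(size=(n, d))
y_local = rng.normal(size=(n, 1))
Z_local = rff_embed(X_local, s, gamma=0.5, rng=rng)

# Local parity dataset: code over the embedded local data.
# A Gaussian G is a stand-in here, chosen so that E[G.T @ G] = I.
c = 60                                # number of parity rows
G = rng.normal(size=(c, n)) / np.sqrt(c)
Z_parity = G @ Z_local                # coded features (sent to server)
y_parity = G @ y_local                # coded labels

# For squared loss, the parity gradient Z_p.T @ (Z_p w - y_p) equals
# Z.T @ (G.T @ G) @ (Z w - y), an unbiased estimate of the local
# gradient whose variance shrinks as c grows; the server can use the
# aggregated parity gradient in place of straggling clients' gradients.
w = np.zeros((s, 1))
g_local = Z_local.T @ (Z_local @ w - y_local)
g_parity = Z_parity.T @ (Z_parity @ w - y_parity)
print("relative error:",
      np.linalg.norm(g_parity - g_local) / np.linalg.norm(g_local))
```

Because E[G.T @ G] = I under this normalization, the parity gradient is an unbiased surrogate for the local gradient, which is what allows the global parity gradient to compensate for stragglers; on top of this, CodedFedL chooses each client's load and the parity redundancy by optimizing over the compute and communication delay statistics.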


Related research:

07/07/2020  Coded Computing for Federated Learning at the Edge
  Federated Learning (FL) is an exciting new paradigm that enables trainin...

02/21/2020  Coded Federated Learning
  Federated learning is a method of training a global model from decentral...

02/17/2022  Federated Stochastic Gradient Descent Begets Self-Induced Momentum
  Federated learning (FL) is an emerging machine learning method that can ...

01/09/2023  Federated Coded Matrix Inversion
  Federated learning (FL) is a decentralized model for training data distr...

05/31/2022  Secure Federated Clustering
  We consider a foundational unsupervised learning task of k-means data cl...

06/22/2020  Exact Support Recovery in Federated Regression with One-shot Communication
  Federated learning provides a framework to address the challenges of dis...

06/15/2022  Global Convergence of Federated Learning for Mixed Regression
  This paper studies the problem of model training under Federated Learnin...
