
Communication Optimization in Large Scale Federated Learning using Autoencoder Compressed Weight Updates

08/12/2021
by Srikanth Chandar, et al.
Intel

Federated Learning (FL) addresses many of this decade's concerns around data privacy and distributed computation. FL ensures that no raw data leaves its source, since the model is trained where the data resides. However, FL comes with its own set of challenges: communicating model weight updates in this distributed environment incurs significant network bandwidth costs. In this context, we propose a mechanism for compressing the weight updates using Autoencoders (AEs), which learn the data features of the weight updates and subsequently perform compression. The encoder is set up on each node where training is performed, while the decoder is set up on the node where the weights are aggregated. The encoder compresses the updates before transmission, and the decoder reconstructs the weights at the end of every communication round. This paper shows that this dynamic AE-based weight compression technique, which is orthogonal to existing compression methods, can serve as an advantageous alternative (or an add-on) in large-scale FL: it not only achieves compression ratios ranging from 500x to 1720x and beyond, but can also be tuned to the accuracy requirements, computational capacity, and other constraints of a given FL setup.
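To make the mechanism concrete, below is a minimal sketch of the scheme the abstract describes, written in PyTorch. The paper does not publish code here, so the class name WeightUpdateAE, the layer sizes, and the training loop are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed architecture): a dense autoencoder that
# compresses a flattened weight-update vector. The encoder runs on each
# training node; the decoder runs on the aggregating node.
import torch
import torch.nn as nn

class WeightUpdateAE(nn.Module):  # hypothetical name
    def __init__(self, update_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(update_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, update_dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

# 100,000-element updates compressed to 100-element codes: a 1000x
# reduction in transmitted elements, in the range of the 500x-1720x
# ratios the abstract reports (sizes here are illustrative).
update_dim, latent_dim = 100_000, 100
ae = WeightUpdateAE(update_dim, latent_dim)

# The AE is first trained to reconstruct weight updates (MSE objective);
# random tensors stand in for a corpus of real weight updates.
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
sample_updates = torch.randn(8, update_dim)
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(ae(sample_updates), sample_updates)
    loss.backward()
    opt.step()

# In an FL communication round: a node transmits only the latent code,
# and the aggregator reconstructs the full update with the decoder.
with torch.no_grad():
    delta_w = torch.randn(1, update_dim)   # a node's weight update
    code = ae.encoder(delta_w)             # sent over the network
    delta_w_hat = ae.decoder(code)         # reconstructed at aggregator
```

The compression ratio in this sketch is simply update_dim / latent_dim, so shrinking the latent code trades reconstruction fidelity for bandwidth, which mirrors the abstract's point that the technique can be adjusted to the accuracy requirements and computational capacity of a given FL setup.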

