
Communication Optimization in Large Scale Federated Learning using Autoencoder Compressed Weight Updates

by Srikanth Chandar, et al.

Federated Learning (FL) addresses many of this decade's data-privacy and computation challenges: no raw data leaves its source, because the model is trained where the data resides. However, FL brings its own set of challenges; in particular, communicating model weight updates across this distributed environment incurs significant network bandwidth costs. In this context, we propose a mechanism for compressing the weight updates using Autoencoders (AE), which learn the features of the weight updates and subsequently compress them. An encoder runs on each node where training is performed, while the decoder runs on the node where the weights are aggregated; the encoder compresses each update, and the decoder reconstructs the weights at the end of every communication round. This paper shows that this dynamic, orthogonal AE-based weight-compression technique can serve as an advantageous alternative (or an add-on) in large-scale FL: it not only achieves compression ratios ranging from 500x to 1720x and beyond, but can also be tuned to the accuracy requirements, computational capacity, and other constraints of a given FL setup.
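The encoder-on-client / decoder-on-aggregator protocol can be sketched in a few lines. The sketch below is illustrative only and not the paper's implementation: the dimensions (512-dim updates, 8-dim codes, so a 64x reduction rather than the 500x-1720x reported on real models) are hypothetical, the simulated updates are given low-rank structure so they are compressible, and for brevity the linear autoencoder is fit in closed form via SVD rather than trained by gradient descent as a nonlinear AE would be. The protocol shape — only the small code crosses the network each round — is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: each flattened weight update is 512-dim, the code 8-dim,
# i.e. a 64x reduction in communicated floats (the paper reports 500x-1720x
# on much larger models).
IN_DIM, CODE_DIM = 512, 8

# Simulated past weight updates sharing low-rank structure; this shared
# structure is what lets an AE learn the updates' features.
basis = rng.normal(size=(4, IN_DIM))
history = rng.normal(size=(256, 4)) @ basis

# Fit a linear autoencoder in closed form via SVD (PCA-style); a real
# deployment would train a nonlinear AE, but the round-trip is identical.
_, _, Vt = np.linalg.svd(history, full_matrices=False)
W = Vt[:CODE_DIM]                # (CODE_DIM, IN_DIM) orthonormal rows

def encode(update):              # runs on each training node
    return W @ update            # only CODE_DIM floats go over the network

def decode(code):                # runs on the aggregation node
    return W.T @ code            # reconstructed IN_DIM update

# One communication round for one client:
update = rng.normal(size=4) @ basis
code = encode(update)
recon = decode(code)
rel_err = np.linalg.norm(recon - update) / np.linalg.norm(update)
print(f"ratio {IN_DIM // CODE_DIM}x, relative reconstruction error {rel_err:.2e}")
```

Because the simulated updates lie in a 4-dimensional subspace and the code is 8-dimensional, reconstruction here is near-exact; with real weight updates the code size trades off against reconstruction error, which is the accuracy/compression knob the abstract refers to.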



