CatFedAvg: Optimising Communication-efficiency and Classification Accuracy in Federated Learning

11/14/2020
by   Dipankar Sarkar, et al.

Federated learning (FL) has enabled the training of statistical models on remote devices without transferring raw client data. In practice, training over heterogeneous and large networks introduces novel challenges in areas such as network load, quality of client data, security, and privacy. Recent works in FL have addressed communication efficiency and uneven client data distribution independently, but none have provided a unified solution for both challenges. We introduce a new family of federated learning algorithms called CatFedAvg which not only improves communication efficiency but also improves the quality of learning using a category coverage maximization strategy. We use the FedAvg framework and introduce a simple and efficient step in every epoch to collect metadata about the structure of each client's training data, which the central server uses to request a subset of weight updates. We explore two distinct variations which allow us to further examine the trade-offs between communication efficiency and model accuracy. Our experiments on a vision classification task show an increase of 10% in accuracy on the MNIST dataset with 70% lower network transfer over FedAvg. We also run similar experiments with Fashion-MNIST, KMNIST-10, KMNIST-49, and EMNIST-47. Further, under extreme data imbalance, both globally and for individual clients, the model performs better than FedAvg. An ablation study further explores its behaviour under varying data and client parameter conditions, showcasing the robustness of the proposed approach.
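The selection-and-aggregation idea described in the abstract can be sketched in a few lines: clients report label metadata, the server greedily picks a subset whose labels jointly cover the most categories, and only those clients' updates are averaged FedAvg-style. This is a minimal illustrative sketch under assumed names (`coverage_select`, `fedavg_aggregate`) and a simple greedy heuristic, not the paper's exact procedure.

```python
def coverage_select(client_labels, num_categories, max_clients):
    """Greedily pick clients whose reported label sets together cover
    the most categories (illustrative coverage-maximization heuristic)."""
    covered, selected = set(), []
    remaining = dict(client_labels)  # client id -> list of label ids
    while remaining and len(selected) < max_clients:
        # pick the client contributing the most not-yet-covered categories
        best = max(remaining, key=lambda c: len(set(remaining[c]) - covered))
        if not (set(remaining[best]) - covered) and len(covered) == num_categories:
            break  # all categories already covered; stop requesting updates
        selected.append(best)
        covered |= set(remaining.pop(best))
    return selected, covered

def fedavg_aggregate(updates, sizes):
    """Plain FedAvg step: average selected clients' weight vectors,
    weighted by their local dataset sizes."""
    total = sum(sizes)
    dim = len(updates[0])
    return [sum(u[i] * s for u, s in zip(updates, sizes)) / total
            for i in range(dim)]
```

For example, with clients `{"c1": [0, 1], "c2": [1, 2], "c3": [0, 1, 2, 3]}` and four categories, the greedy step selects `c3` alone, since it already covers every category, and the server would then request weight updates only from that subset.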


