Comfetch: Federated Learning of Large Networks on Memory-Constrained Clients via Sketching

09/17/2021
by Tahseen Rabbani, et al.

A popular application of federated learning is using many clients to train a deep neural network, the parameters of which are maintained on a central server. While recent efforts have focused on reducing communication complexity, existing algorithms assume that each participating client can download the full set of current parameters, which may not be a practical assumption given the memory constraints of clients such as mobile devices. In this work, we propose a novel algorithm, Comfetch, which allows clients to train large networks using compressed versions of the global architecture via the Count Sketch, thereby reducing both communication and local memory costs. We provide a theoretical convergence guarantee and experimentally demonstrate that it is possible to learn large networks, such as a deep convolutional network and an LSTM, through federated agents training on their sketched counterparts. The resulting global models exhibit competitive test accuracy compared against the state-of-the-art FetchSGD and the classical FedAvg, both of which require clients to download the full architecture.
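The Count Sketch underlying this compression can be illustrated in a few lines. Below is a minimal, generic single-row sketch of a parameter vector (the paper's actual layer-wise compression and hash choices may differ); `d`, `w`, and the hash arrays are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

d, w = 10_000, 1_000                # original dimension, sketch width (10x compression)
h = rng.integers(0, w, size=d)      # bucket hash: coordinate i -> bucket h[i]
s = rng.choice([-1, 1], size=d)     # random sign for each coordinate

def sketch(x):
    """Compress a length-d vector into a length-w Count Sketch."""
    c = np.zeros(w)
    np.add.at(c, h, s * x)          # signed accumulation into buckets
    return c

def unsketch(c):
    """Unbiased estimate of the original vector from its sketch."""
    return s * c[h]

x = rng.standard_normal(d)
x_hat = unsketch(sketch(x))         # noisy but unbiased reconstruction
```

The sketch is linear, so clients can compute gradients against the compressed representation and the server can aggregate sketches directly; in practice, several independent hash rows with a median estimator are used to control the variance of the reconstruction.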

