CoCo-FL: Communication- and Computation-Aware Federated Learning via Partial NN Freezing and Quantization

03/10/2022
by   Kilian Pfeiffer, et al.

Devices participating in federated learning (FL) typically have heterogeneous communication and computation resources. However, in synchronous FL, as considered in this paper, all devices must finish training by the same deadline dictated by the server. Reducing the complexity of the trained neural network (NN) on constrained devices, e.g., by dropping neurons/filters, is insufficient, as it tightly couples the reductions in communication and computation requirements, wasting resources. Quantization has proven effective at accelerating inference, but quantized training suffers from accuracy losses. We present a novel mechanism that quantizes parts of the NN during training to reduce the computation requirements, freezes them to reduce the communication and computation requirements, and trains the remaining parts in full precision to maintain a high convergence speed and final accuracy. Using this mechanism, we present CoCo-FL, the first FL technique that independently optimizes for specific communication and computation constraints. We show that CoCo-FL reaches a much higher convergence speed than the state of the art and a significantly higher final accuracy.
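To make the freezing side of the mechanism concrete, the following PyTorch snippet is a minimal sketch of one FL client, not the authors' implementation: the toy model, the set of frozen block names, and the update-extraction step are illustrative assumptions. It shows how frozen blocks produce no gradients and contribute nothing to the upload, while the remaining blocks are trained in full precision; in CoCo-FL the frozen parts would additionally be executed in quantized (e.g., int8) arithmetic, which is omitted here for brevity.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                      # toy stand-in for the client NN
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

frozen = {"0", "1", "2", "3"}               # assumed choice: freeze the early blocks

for name, module in model.named_children():
    if name in frozen:
        for p in module.parameters():
            p.requires_grad_(False)         # frozen: no gradients, nothing to upload
            # in CoCo-FL these parts would also run in quantized arithmetic

# only the still-trainable tail is optimized and later sent to the server
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.1
)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()                             # backprop stops accumulating in frozen blocks
optimizer.step()

# communication saving: the client transmits only the unfrozen parameters
update = {n: p.detach() for n, p in model.named_parameters() if p.requires_grad}
print(sorted(update))                       # e.g. ['4.bias', '4.weight']
```

Because frozen blocks need no backward pass and no parameter upload, the choice of which blocks to freeze lets a client trade computation and communication independently, which is the optimization CoCo-FL performs per device.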


Related research

10/19/2021
FedHe: Heterogeneous Models and Communication-Efficient Federated Learning
Federated learning (FL) is able to manage edge devices to cooperatively ...

12/16/2021
DISTREAL: Distributed Resource-Aware Learning in Heterogeneous Systems
We study the problem of distributed training of neural networks (NNs) on...

03/11/2022
Wireless Quantized Federated Learning: A Joint Computation and Communication Design
Recently, federated learning (FL) has sparked widespread attention as a ...

05/26/2023
Aggregating Capacity in FL through Successive Layer Training for Computationally-Constrained Devices
Federated learning (FL) is usually performed on resource-constrained edg...

04/21/2020
Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning
Federated learning (FL) enables a neural network (NN) to be trained usin...

04/11/2023
Communication Efficient DNN Partitioning-based Federated Learning
Efficiently running federated learning (FL) on resource-constrained devi...

08/05/2021
Multi-task Federated Edge Learning (MtFEEL) in Wireless Networks
Federated Learning (FL) has evolved as a promising technique to handle d...
