Aggregating Capacity in FL through Successive Layer Training for Computationally-Constrained Devices

05/26/2023
by   Kilian Pfeiffer, et al.

Federated learning (FL) is usually performed on resource-constrained edge devices, e.g., devices with limited memory available for computation. If the memory required to train a model exceeds this limit, the device is excluded from training. This lowers accuracy, as valuable data and computation resources are left out of training, and can also introduce bias and unfairness; the FL training process should therefore be adjusted to such constraints. State-of-the-art techniques address this by training only subsets of the FL model on constrained devices, which reduces their resource requirements for training. However, these techniques severely limit co-adaptation among the model's parameters and are highly inefficient, as we show: it is actually better for the system to train a smaller (less accurate) model that all devices can train end-to-end than to apply such techniques. We propose a new method that successively freezes and trains parameters of the FL model on the devices, reducing the training's resource requirements on the devices while still allowing sufficient co-adaptation between parameters. We show through extensive experimental evaluation that our technique greatly improves the accuracy of the trained model (by 52.4 p.p.) compared with the state of the art, efficiently aggregating the computation capacity available on distributed devices.
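To make the idea of successive freezing and training more concrete, the sketch below shows one way a client-side local update could train only one block of layers per stage while keeping the rest frozen, which reduces peak training memory. It is a minimal illustration assuming a PyTorch-style CNN split into blocks; the model, stage schedule, and helper names (SmallCNN, set_trainable_stage, local_update) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the paper's exact protocol): at each training stage,
# only one block of layers plus the classifier head is trainable. Frozen blocks
# need no gradient buffers, and autograd does not retain activations for them,
# which lowers the memory footprint of local training on a constrained device.

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)),
        ])
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return self.head(x.flatten(1))


def set_trainable_stage(model: SmallCNN, stage: int) -> list:
    """Freeze all blocks except the one selected for this stage; the head stays trainable."""
    trainable = []
    for i, block in enumerate(model.blocks):
        requires = (i == stage)
        for p in block.parameters():
            p.requires_grad = requires
        if requires:
            trainable += list(block.parameters())
    for p in model.head.parameters():
        p.requires_grad = True
    return trainable + list(model.head.parameters())


def local_update(model: SmallCNN, loader, stage: int, lr: float = 0.01, epochs: int = 1):
    """One client's local round: train only the current stage's block and the head."""
    trainable = set_trainable_stage(model, stage)
    opt = torch.optim.SGD(trainable, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    # The server would aggregate (e.g., average) the updated parameters across
    # clients; here we simply return the client's state for illustration.
    return model.state_dict()
```

In such a scheme, the stage index advances over FL rounds so that, over time, every block is trained while the per-round memory needed on each device stays bounded by a single block plus the head.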


Related research

10/19/2021 · FedHe: Heterogeneous Models and Communication-Efficient Federated Learning
07/21/2023 · MAS: Towards Resource-Efficient Federated Multiple-Task Learning
03/10/2022 · CoCo-FL: Communication- and Computation-Aware Federated Learning via Partial NN Freezing and Quantization
09/08/2022 · FADE: Enabling Large-Scale Federated Adversarial Training on Resource-Constrained Edge Devices
12/16/2021 · DISTREAL: Distributed Resource-Aware Learning in Heterogeneous Systems
07/28/2020 · Group Knowledge Transfer: Collaborative Training of Large CNNs on the Edge
12/05/2022 · FedTiny: Pruned Federated Learning Towards Specialized Tiny Models
