FedYolo: Augmenting Federated Learning with Pretrained Transformers

07/10/2023
by   Xuechen Zhang, et al.

The growth and diversity of machine learning applications motivate a rethinking of learning with mobile and edge devices. How can we address diverse client goals and learn with scarce, heterogeneous data? While federated learning aims to address these issues, it has challenges that hinder a unified solution. Large transformer models have been shown to work across a variety of tasks, achieving remarkable few-shot adaptation. This raises the question: Can clients use a single general-purpose model, rather than custom models for each task, while obeying device and network constraints? In this work, we investigate pretrained transformers (PTFs) to achieve these on-device learning goals and thoroughly explore the roles of model size and modularity, where the latter refers to adaptation through modules such as prompts or adapters. Focusing on federated learning, we demonstrate that: (1) Larger scale shrinks the accuracy gaps between alternative approaches and improves heterogeneity robustness. Scale allows clients to run more local SGD epochs, which can significantly reduce the number of communication rounds. At the extreme, clients can achieve respectable accuracy locally, highlighting the potential of fully-local learning. (2) Modularity, by design, enables more than 100× less communication in bits. Surprisingly, it also boosts the generalization capability of local adaptation methods and the robustness of smaller PTFs. Finally, it enables clients to solve multiple unrelated tasks simultaneously using a single PTF, whereas full updates are prone to catastrophic forgetting. These insights on scale and modularity motivate a new federated learning approach we call "You Only Load Once" (FedYolo): clients load a full PTF model once, and all future updates are accomplished through communication-efficient modules with limited catastrophic forgetting, where each task is assigned to its own module.
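The core mechanism the abstract describes — a frozen pretrained backbone loaded once, with only small task modules trained locally and aggregated by the server — can be illustrated with a toy sketch. This is not the paper's implementation; the `Client` class, dimensions, and `fedavg_modules` helper are hypothetical, and local training is stood in for by a fixed gradient step.

```python
import numpy as np

# Illustrative sketch (not the paper's code): each client keeps the same
# frozen pretrained backbone and trains only a small adapter-style module.
# Only the module parameters are ever communicated.

BACKBONE_DIM = 768   # frozen pretrained weights, loaded once per client
MODULE_DIM = 16      # small trainable module, the only part sent per round

class Client:
    def __init__(self, seed):
        rng = np.random.default_rng(seed)
        self.module = np.zeros(MODULE_DIM)          # trainable module params
        self._grad = rng.normal(size=MODULE_DIM)    # stand-in local gradient

    def local_update(self, global_module, lr=0.1, epochs=5):
        # Load the aggregated module, then run local "SGD" epochs on it;
        # the backbone is never touched or transmitted.
        self.module = global_module.copy()
        for _ in range(epochs):
            self.module -= lr * self._grad
        return self.module  # only MODULE_DIM floats go over the network

def fedavg_modules(modules):
    # Plain FedAvg over module parameters only.
    return np.mean(modules, axis=0)

clients = [Client(seed) for seed in range(4)]
global_module = np.zeros(MODULE_DIM)
for _round in range(3):
    updates = [c.local_update(global_module) for c in clients]
    global_module = fedavg_modules(updates)

# Communication per round: MODULE_DIM vs BACKBONE_DIM parameters.
print(f"{BACKBONE_DIM / MODULE_DIM:.0f}x fewer parameters communicated")
```

In this toy setting the savings are 48×; with a real multi-hundred-million-parameter PTF and a module of a few thousand parameters, the same structure yields the >100× communication reduction the abstract reports, and assigning a separate module per task avoids overwriting the shared backbone.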
