Scaling Federated Learning for Fine-tuning of Large Language Models

02/01/2021
by Agrin Hilmkil, et al.

Federated learning (FL) is a promising approach to distributing both computation and data, and it provides a degree of privacy as well as easier compliance with legal frameworks. This makes FL attractive for both consumer and healthcare applications. While the area is actively being explored, few studies have examined FL in the context of larger language models, and there is a lack of comprehensive reviews of robustness across tasks, architectures, numbers of clients, and other relevant factors. In this paper, we explore the fine-tuning of Transformer-based language models in a federated learning setting. We evaluate three popular BERT variants of different sizes (BERT, ALBERT, and DistilBERT) on a number of text classification tasks such as sentiment analysis and author identification. We perform an extensive sweep over the number of clients, ranging up to 32, to evaluate the impact of distributed compute on task performance in the federated averaging setting. While our findings suggest that the large sizes of the evaluated models are not generally prohibitive to federated training, we found that the different models handle federated averaging to varying degrees. Most notably, DistilBERT converges significantly more slowly with larger numbers of clients and, under some circumstances, even collapses to chance-level performance. Investigating this issue presents an interesting direction for future research.
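
The aggregation scheme evaluated here is federated averaging (FedAvg): each round, clients fine-tune local copies of the global model on their own data, and the server replaces the global weights with a data-size-weighted average of the client weights. Below is a minimal PyTorch sketch of that loop; the toy classifier, the simulated client partitions, and all hyperparameters are illustrative placeholders standing in for the BERT variants and tasks studied in the paper, not the authors' implementation.

# Minimal FedAvg sketch (illustrative, not the authors' code): clients
# fine-tune copies of the global model locally; the server averages the
# resulting weights, weighted by each client's local data size.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, epochs=1, lr=1e-3):
    """Client-side step: fine-tune a copy of the global model on local data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fed_avg(client_states, client_sizes):
    """Server-side step: data-size-weighted average of client weights."""
    total = sum(client_sizes)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return avg

# Simulated run: 4 clients, random batches standing in for text features.
torch.manual_seed(0)
global_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
clients = [
    [(torch.randn(8, 16), torch.randint(0, 2, (8,))) for _ in range(5)]
    for _ in range(4)
]
for rnd in range(3):  # communication rounds
    states = [local_update(global_model, data) for data in clients]
    sizes = [sum(x.size(0) for x, _ in data) for data in clients]
    global_model.load_state_dict(fed_avg(states, sizes))
    print(f"round {rnd}: global model averaged over {len(clients)} clients")

In practice, the toy model would be swapped for one of the evaluated Transformers (e.g., a DistilBERT classifier) and each client's loader would hold a disjoint shard of the training set.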


Related research

- SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models (08/12/2023). Transfer learning via fine-tuning pre-trained transformer models has gai...
- FedJudge: Federated Legal Large Language Model (09/15/2023). Large Language Models (LLMs) have gained prominence in the field of Lega...
- Low-Parameter Federated Learning with Large Language Models (07/26/2023). We study few-shot Natural Language Understanding (NLU) tasks with Large ...
- NeFL: Nested Federated Learning for Heterogeneous Clients (08/15/2023). Federated learning (FL) is a promising approach in distributed learning ...
- FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning (09/01/2023). LLMs have demonstrated great capabilities in various NLP tasks. Differen...
- Hierarchical Over-the-Air FedGradNorm (12/14/2022). Multi-task learning (MTL) is a learning paradigm to learn multiple relat...
- Federated Unlearning via Class-Discriminative Pruning (10/22/2021). We explore the problem of selectively forgetting categories from trained...
