When Federated Learning Meets Pre-trained Language Models' Parameter-Efficient Tuning Methods

12/20/2022
by   Zhuo Zhang, et al.

With increasing privacy concerns over data, recent studies have made significant progress applying federated learning (FL) to privacy-sensitive natural language processing (NLP) tasks. Much of the literature suggests that fully fine-tuning pre-trained language models (PLMs) in the FL paradigm can mitigate the data heterogeneity problem and close the performance gap with centralized training. However, large PLMs impose prohibitive communication overhead and local model adaptation costs on the FL system. To this end, we introduce various parameter-efficient tuning (PETuning) methods into federated learning. Specifically, we provide a holistic empirical study of representative PLM tuning methods in FL. The experimental results cover the analysis of data heterogeneity levels, data scales, and different FL scenarios. We find that overall communication overhead can be significantly reduced by locally tuning and globally aggregating lightweight model parameters, while maintaining acceptable performance in various FL settings. To facilitate research on PETuning in FL, we also develop a federated tuning framework, FedPETuning, which allows practitioners to conveniently exploit different PETuning methods under the FL training paradigm. The source code is available at <https://github.com/iezhuozhuo/FedETuning/tree/deltaTuning>.
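The core idea of combining PETuning with FL can be illustrated with a minimal sketch: each client trains only a small set of "delta" parameters (e.g. adapter or LoRA weights) on top of a frozen PLM backbone, and the server aggregates just those lightweight parameters with FedAvg. The names and helper functions below are illustrative assumptions, not the actual FedPETuning API.

```python
# Hypothetical sketch of federated parameter-efficient tuning (not FedPETuning itself):
# only the lightweight "delta" parameters are trained locally and sent to the server;
# the frozen PLM backbone never crosses the network.

def fedavg(client_deltas, client_sizes):
    """Dataset-size-weighted average of clients' lightweight parameter dicts."""
    total = sum(client_sizes)
    keys = client_deltas[0].keys()
    return {
        k: sum(d[k] * n for d, n in zip(client_deltas, client_sizes)) / total
        for k in keys
    }

def local_update(global_delta, local_grad, lr=0.1):
    """One illustrative local SGD step on the delta parameters only."""
    return {k: v - lr * local_grad[k] for k, v in global_delta.items()}

# Toy round with two clients sharing a two-parameter adapter.
global_delta = {"adapter.w": 0.0, "adapter.b": 0.0}
grads = [
    {"adapter.w": 1.0, "adapter.b": -1.0},  # client 0's local gradient
    {"adapter.w": 3.0, "adapter.b": 1.0},   # client 1's local gradient
]
sizes = [100, 300]  # local dataset sizes used as aggregation weights

deltas = [local_update(global_delta, g) for g in grads]
global_delta = fedavg(deltas, sizes)
print(global_delta)  # only these few floats are ever communicated
```

Because only the adapter parameters are exchanged, the per-round communication cost scales with the size of the delta module rather than the full PLM, which is the source of the overhead reduction the paper reports.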


Related research

- 08/25/2022 · Reduce Communication Costs and Preserve Privacy: Prompt Tuning Method in Federated Learning
- 08/12/2023 · SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models
- 09/01/2023 · FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning
- 07/26/2023 · Low-Parameter Federated Learning with Large Language Models
- 09/15/2023 · FedJudge: Federated Legal Large Language Model
- 12/12/2022 · Collaborating Heterogeneous Natural Language Processing Tasks via Federated Learning
- 05/07/2023 · MrTF: Model Refinery for Transductive Federated Learning
