FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning

09/01/2023
by Weirui Kuang, et al.

LLMs have demonstrated great capabilities in various NLP tasks. Different entities can further improve the performance of these LLMs on their specific downstream tasks by fine-tuning them. When several entities have similar tasks of interest but their data cannot be shared because of privacy regulations, federated learning (FL) is a mainstream solution for leveraging the data of these entities. However, fine-tuning LLMs in FL settings still lacks adequate support from existing FL frameworks, because it requires optimizing the consumption of significant communication and computational resources, preparing data for different tasks, and meeting distinct information protection demands. This paper first discusses these challenges of federated fine-tuning of LLMs and then introduces our package FS-LLM as the main contribution, which consists of the following components: (1) an end-to-end benchmarking pipeline that automates dataset preprocessing, federated fine-tuning execution, and performance evaluation for federated LLM fine-tuning; (2) comprehensive implementations of federated parameter-efficient fine-tuning (PEFT) algorithms and versatile programming interfaces for future extension, enabling FL scenarios with low communication and computation costs, even without access to the full model; (3) several accelerating and resource-efficient operators for fine-tuning LLMs with limited resources, together with flexible pluggable sub-routines for interdisciplinary study. We conduct extensive experiments to validate the effectiveness of FS-LLM and benchmark advanced LLMs with state-of-the-art PEFT algorithms in FL settings, which also yields valuable insights into federated fine-tuning of LLMs for the research community. To facilitate further research and adoption, we release FS-LLM at https://github.com/alibaba/FederatedScope/tree/llm.
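To make the idea behind component (2) concrete, the following is a minimal, self-contained sketch of federated parameter-efficient fine-tuning: each client trains only low-rank LoRA adapters on top of a frozen base layer, and the server averages just those adapter weights (FedAvg), so only a small fraction of the parameters is communicated. This is an illustrative sketch, not the FS-LLM API; all names (LoRALinear, local_update, fedavg) and the toy model/data are hypothetical.

```python
# Illustrative sketch of federated PEFT (LoRA + FedAvg over adapters only).
# Not the FS-LLM API; names and toy data are hypothetical.
import copy
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update."""

    def __init__(self, in_dim: int, out_dim: int, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        self.base.weight.requires_grad_(False)   # pre-trained weights stay frozen
        self.base.bias.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_dim, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


def adapter_state(model: nn.Module) -> dict:
    """Only the LoRA parameters are exchanged with the server."""
    return {k: v.detach().clone() for k, v in model.named_parameters() if "lora_" in k}


def local_update(model: nn.Module, data, epochs: int = 1, lr: float = 1e-3) -> dict:
    """One client's local fine-tuning pass over its private data."""
    opt = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return adapter_state(model)


def fedavg(states: list) -> dict:
    """Server-side FedAvg over the clients' adapter weights only."""
    return {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}


# Toy federation: 3 clients with synthetic private datasets.
global_model = LoRALinear(16, 4)
clients = [[(torch.randn(8, 16), torch.randn(8, 4)) for _ in range(5)] for _ in range(3)]

for round_id in range(3):
    local_states = []
    for data in clients:
        client_model = copy.deepcopy(global_model)       # broadcast current global adapters
        local_states.append(local_update(client_model, data))
    # Aggregate only the adapter weights; the frozen base model never leaves the clients.
    global_model.load_state_dict(fedavg(local_states), strict=False)
    print(f"round {round_id}: aggregated {len(local_states)} adapter updates")
```

The design point this sketch illustrates is that the communication payload per round is the adapter tensors alone (here, two small matrices per layer), which is what makes federated fine-tuning of LLMs tractable under tight communication budgets.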


