Reduce Communication Costs and Preserve Privacy: Prompt Tuning Method in Federated Learning

08/25/2022
by Haodong Zhao, et al.

Federated learning (FL) has enabled global model training on decentralized data in a privacy-preserving way by aggregating model updates. However, for many natural language processing (NLP) tasks that utilize pre-trained language models (PLMs) with large numbers of parameters, there are considerable communication costs associated with FL. Recently, prompt tuning, which tunes only some soft prompts without modifying PLMs, has achieved excellent performance as a new learning paradigm. We therefore combine the two methods and explore the effect of prompt tuning under FL. In this paper, we propose "FedPrompt", the first work to study prompt tuning in a model split aggregation way using FL, and show that split aggregation greatly reduces the communication cost, transmitting only 0.01% of the PLM's parameters, with little decrease in accuracy on both IID and Non-IID data distributions. This improves the efficiency of the FL method while also protecting data privacy in prompt tuning. In addition, like PLMs, prompts are uploaded and downloaded between public platforms and personal users, so we investigate whether there is still a backdoor threat when using only soft prompts in FL scenarios. We further conduct backdoor attacks by data poisoning on FedPrompt. Our experiments show that a normal backdoor attack cannot achieve a high attack success rate, demonstrating the robustness of FedPrompt. We hope this work can promote the application of prompts in FL and raise awareness of the possible security threats.
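The mechanism behind the communication savings is that each client keeps its PLM frozen, tunes only a small soft-prompt tensor on local data, and the server aggregates just those prompt parameters. The following is a minimal sketch of such a FedAvg-style round restricted to prompts, not the paper's implementation: the shapes, function names, and NumPy-based averaging are illustrative assumptions.

```python
import numpy as np

# Illustrative sizes: 20 soft-prompt tokens with 768-dim embeddings
# (roughly RoBERTa-base scale); both values are assumptions, not the paper's.
PROMPT_TOKENS, EMBED_DIM = 20, 768

def local_prompt_update(global_prompt: np.ndarray, client_data: np.ndarray) -> np.ndarray:
    """One client's round: the PLM stays frozen; only the soft prompt is tuned.
    A real client would run gradient steps through the frozen PLM here;
    this stand-in just perturbs the prompt so the example is runnable."""
    return global_prompt + 0.01 * np.random.randn(*global_prompt.shape)

def fedprompt_round(global_prompt: np.ndarray, clients: list) -> np.ndarray:
    """Server side: clients upload only their tuned prompts, and the server
    averages them (FedAvg restricted to the prompt parameters)."""
    updates = [local_prompt_update(global_prompt, data) for data in clients]
    return np.mean(updates, axis=0)

clients = [np.empty(0) for _ in range(5)]  # placeholder client datasets
global_prompt = np.random.normal(scale=0.02, size=(PROMPT_TOKENS, EMBED_DIM))
global_prompt = fedprompt_round(global_prompt, clients)

# Each round moves PROMPT_TOKENS * EMBED_DIM = 15,360 floats per client,
# versus roughly 125M parameters for a full RoBERTa-base update: about 0.01%.
print(global_prompt.shape)  # (20, 768)
```

Because only the prompt tensor crosses the network, the aggregation step is identical to standard FedAvg but operates over a parameter set roughly four orders of magnitude smaller, which is where the 0.01% figure in the abstract comes from.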

Related research

12/20/2022
When Federated Learning Meets Pre-trained Language Models' Parameter-Efficient Tuning Methods
With increasing privacy concerns on data, recent studies have made signi...

07/26/2023
Low-Parameter Federated Learning with Large Language Models
We study few-shot Natural Language Understanding (NLU) tasks with Large ...

07/01/2022
Visual Transformer Meets CutMix for Improved Accuracy, Communication Efficiency, and Data Privacy in Split Learning
This article seeks for a distributed learning solution for the visual tr...

11/15/2022
FedTune: A Deep Dive into Efficient Federated Fine-Tuning with Pre-trained Transformers
Federated Learning (FL) is an emerging paradigm that enables distributed...

06/06/2022
Pretrained Models for Multilingual Federated Learning
Since the advent of Federated Learning (FL), research has applied these ...

12/22/2022
Model Segmentation for Storage Efficient Private Federated Learning with Top r Sparsification
In federated learning (FL) with top r sparsification, millions of users ...

06/12/2022
Neurotoxin: Durable Backdoors in Federated Learning
Due to their decentralized nature, federated learning (FL) systems have ...
