Differentially Private Bias-Term only Fine-tuning of Foundation Models

09/30/2022
by Zhiqi Bu, et al.

We study the problem of differentially private (DP) fine-tuning of large pre-trained models, a recent privacy-preserving approach to solving downstream tasks with sensitive data. Existing work has demonstrated that high accuracy is possible under strong privacy constraints, yet requires significant computational overhead or modifications to the network architecture. We propose differentially private bias-term fine-tuning (DP-BiTFiT), which matches the state-of-the-art accuracy of DP algorithms and the efficiency of standard BiTFiT. DP-BiTFiT is model agnostic (it does not modify the network architecture), parameter efficient (training only about 0.1% of the parameters), and computation efficient (almost entirely removing the overhead caused by DP, in both time and space complexity). On a wide range of tasks, DP-BiTFiT is 2∼30× faster and uses 2∼8× less memory than DP full fine-tuning, and is even faster than standard full fine-tuning. This efficiency enables us to conduct DP fine-tuning on language and vision tasks with long-sequence texts and high-resolution images, which were computationally difficult for existing methods.
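To make the recipe concrete, the sketch below shows bias-term-only DP fine-tuning: freeze every parameter except the biases, then train the remaining parameters with DP-SGD (per-sample gradient clipping plus Gaussian noise). This is a minimal illustration assuming PyTorch and Opacus as a generic DP-SGD engine, not the paper's own optimized implementation; the toy model, synthetic data, and hyperparameters (noise_multiplier, max_grad_norm, learning rate) are illustrative assumptions.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy stand-in for a pre-trained model; in practice this would be a
# large transformer or CNN loaded from a checkpoint.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2)
)

# BiTFiT: freeze everything except the bias terms (in large models,
# roughly 0.1% of all parameters).
for name, param in model.named_parameters():
    param.requires_grad = name.endswith("bias")

# The optimizer sees only the trainable (bias) parameters.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.1
)

# Synthetic "sensitive" data, purely for illustration.
x = torch.randn(256, 16)
y = torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(x, y), batch_size=32)

# Opacus wraps model/optimizer/loader so each step performs per-sample
# gradient clipping and Gaussian noise addition (DP-SGD), here applied
# only to the bias gradients.
engine = PrivacyEngine()
model, optimizer, loader = engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,  # sigma, chosen to meet a target (epsilon, delta)
    max_grad_norm=1.0,     # per-sample clipping norm C
)

criterion = torch.nn.CrossEntropyLoss()
for xb, yb in loader:
    optimizer.zero_grad()
    criterion(model(xb), yb).backward()
    optimizer.step()
```

Because only bias gradients need per-sample clipping, the usual DP overhead in time and memory largely disappears; the spent privacy budget can then be queried from the accountant, e.g. engine.get_epsilon(delta=1e-5).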

Related research

09/12/2023 · Exploring the Benefits of Differentially Private Pre-training and Parameter-Efficient Fine-tuning for Table Transformers
For machine learning with tabular data, Table Transformer (TabTransforme...

10/13/2021 · Differentially Private Fine-tuning of Language Models
We give simpler, sparser, and faster algorithms for differentially priva...

05/19/2023 · Differentially Private Adapters for Parameter Efficient Acoustic Modeling
In this work, we devise a parameter-efficient solution to bring differen...

10/02/2019 · Improving Differentially Private Models with Active Learning
Broad adoption of machine learning techniques has increased privacy conc...

09/30/2022 · Differentially Private Optimization on Large Model at Small Cost
Differentially private (DP) optimization is the standard paradigm to lea...

05/21/2022 · Scalable and Efficient Training of Large Convolutional Neural Networks with Differential Privacy
Large convolutional neural networks (CNN) can be difficult to train in t...

10/26/2022 · EW-Tune: A Framework for Privately Fine-Tuning Large Language Models with Differential Privacy
Pre-trained Large Language Models (LLMs) are an integral part of modern ...
