Fine-Tuning with Differential Privacy Necessitates an Additional Hyperparameter Search

10/05/2022
by   Yannis Cattan, et al.

Models need to be trained with privacy-preserving learning algorithms to prevent leakage of possibly sensitive information contained in their training data. However, canonical algorithms like differentially private stochastic gradient descent (DP-SGD) do not benefit from model scale in the same way as non-private learning. This manifests as unappealing tradeoffs between privacy and utility (accuracy) when using DP-SGD on complex tasks. To remedy this tension, a paradigm is emerging: fine-tuning with differential privacy from a model pretrained on public (i.e., non-sensitive) training data. In this work, we identify an oversight of existing approaches for differentially private fine-tuning: they do not tailor the fine-tuning approach to the specifics of learning with privacy. Our main result is to show how carefully selecting the layers being fine-tuned in the pretrained neural network allows us to establish new state-of-the-art tradeoffs between privacy and accuracy. For instance, we achieve 77.9% accuracy at (ε, δ) = (2, 10^-5) on CIFAR-100 for a model pretrained on ImageNet. Our work calls for additional hyperparameter search to configure the differentially private fine-tuning procedure itself.
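To make the core mechanism concrete, here is a minimal pure-Python sketch of a single DP-SGD step (per-example gradient clipping followed by Gaussian noise). This is an illustration of the general algorithm only, not the authors' implementation; the function name and parameters are hypothetical, and real training would use a DP library (e.g., Opacus) with layers selected for fine-tuning kept trainable and the rest frozen.

```python
import math
import random

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One illustrative DP-SGD step on a flat parameter vector.

    params: list of floats (current parameters)
    per_example_grads: list of per-example gradients, each a list of floats
    """
    rng = rng or random.Random(0)
    n = len(per_example_grads)

    # 1. Clip each example's gradient to L2 norm <= clip_norm,
    #    bounding any single example's influence (its "sensitivity").
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append([x * scale for x in g])

    # 2. Sum the clipped gradients, add Gaussian noise calibrated to the
    #    clipping norm, then average over the batch.
    noisy_avg = []
    for i in range(len(params)):
        total = sum(g[i] for g in clipped)
        total += rng.gauss(0.0, noise_multiplier * clip_norm)
        noisy_avg.append(total / n)

    # 3. Standard gradient-descent update with the privatized gradient.
    return [p - lr * g for p, g in zip(params, noisy_avg)]
```

With `noise_multiplier=0` the step reduces to clipped SGD, which makes the clipping behavior easy to check: a gradient of norm 10 is scaled down to norm 1 before the update. The paper's point is that, on top of these DP-SGD hyperparameters (clipping norm, noise multiplier), the choice of *which* pretrained layers to fine-tune should itself be searched over.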


Related research

10/07/2021 · Hyperparameter Tuning with Renyi Differential Privacy
For many differentially private algorithms, such as the prominent noisy ...

09/13/2020 · Differentially Private Language Models Benefit from Public Pre-training
Language modeling is a keystone task in natural language processing. Whe...

10/02/2019 · Improving Differentially Private Models with Active Learning
Broad adoption of machine learning techniques has increased privacy conc...

10/12/2021 · Large Language Models Can Be Strong Differentially Private Learners
Differentially Private (DP) learning has seen limited success for buildi...

07/14/2021 · An Efficient DP-SGD Mechanism for Large Scale NLP Models
Recent advances in deep learning have drastically improved performance o...

02/27/2023 · Differentially Private Diffusion Models Generate Useful Synthetic Images
The ability to generate privacy-preserving synthetic versions of sensiti...

12/07/2022 · A Study on Extracting Named Entities from Fine-tuned vs. Differentially Private Fine-tuned BERT Models
Privacy preserving deep learning is an emerging field in machine learnin...
