Mixed Differential Privacy in Computer Vision

03/22/2022
by Aditya Golatkar, et al.

We introduce AdaMix, an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data. While pre-training language models on large public datasets has enabled strong differential privacy (DP) guarantees with only minor loss of accuracy, the same practice yields punishing trade-offs in vision tasks: a few-shot or even zero-shot learning baseline that ignores the private data entirely can outperform fine-tuning on a large private dataset. AdaMix improves this trade-off by performing few-shot training, or cross-modal zero-shot learning, on public data prior to private fine-tuning. Averaged across 6 datasets, AdaMix reduces the error increase relative to the non-private upper bound from the baseline's 167-311% to 68-92%, depending on the privacy level selected by the user. AdaMix tackles a tension specific to visual classification: the most privacy-sensitive data, corresponding to isolated points in representation space, are also the most critical for high classification accuracy. In addition, AdaMix comes with strong theoretical privacy guarantees and a convergence analysis.
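The private fine-tuning stage the abstract refers to is typically built on the standard DP-SGD recipe: clip each example's gradient and add calibrated Gaussian noise before the update. The sketch below is not AdaMix itself, just a minimal illustration of that recipe for a logistic-regression layer, with the zero-initialized weights standing in for a publicly pre-trained starting point; all function names and hyperparameter values are illustrative.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One differentially private SGD step for logistic regression.

    Per-example gradients are clipped to L2 norm <= clip_norm, summed,
    and perturbed with Gaussian noise of scale noise_mult * clip_norm,
    the standard DP-SGD recipe.
    """
    rng = rng or np.random.default_rng(0)
    n = len(X)
    # Per-example logistic-loss gradients, shape (n, d).
    probs = 1.0 / (1.0 + np.exp(-(X @ w)))
    grads = (probs - y)[:, None] * X
    # Clip each example's gradient.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Sum, add calibrated Gaussian noise, average, and step.
    noisy = grads.sum(axis=0) + rng.normal(
        scale=noise_mult * clip_norm, size=w.shape)
    return w - lr * noisy / n

# Toy usage: a (hypothetical) publicly pre-trained init, refined on private data.
rng = np.random.default_rng(1)
X_priv = rng.normal(size=(64, 5))
y_priv = (X_priv[:, 0] > 0).astype(float)
w = np.zeros(5)  # stand-in for weights obtained from public pre-training
for _ in range(50):
    w = dp_sgd_step(w, X_priv, y_priv, rng=rng)
```

The noise multiplier controls the privacy/accuracy trade-off the abstract quantifies: larger noise gives a stronger DP guarantee at the cost of noisier updates, which is precisely where a good public initialization helps, since fewer private steps are needed.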


