Unlocking Accuracy and Fairness in Differentially Private Image Classification

08/21/2023
by Leonard Berrada et al.

Privacy-preserving machine learning aims to train models on private data without leaking sensitive information. Differential privacy (DP) is considered the gold-standard framework for privacy-preserving training, as it provides formal privacy guarantees. However, models trained with DP often have significantly reduced accuracy compared to their non-private counterparts. Private classifiers are also believed to exhibit larger performance disparities across subpopulations, raising fairness concerns. The poor performance of classifiers trained with DP has prevented the widespread adoption of privacy-preserving machine learning in industry. Here we show that pre-trained foundation models fine-tuned with DP can achieve accuracy similar to that of non-private classifiers, even in the presence of significant distribution shifts between the pre-training data and the downstream tasks. We achieve private accuracies within a few percent of the non-private state of the art across four datasets, including two medical imaging benchmarks. Furthermore, our private medical classifiers do not exhibit larger performance disparities across demographic groups than non-private models. This milestone towards making DP training a practical and reliable technology has the potential to enable machine learning practitioners to train safely on sensitive datasets while protecting individuals' privacy.
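To make the recipe concrete, here is a minimal sketch, not the authors' implementation, of DP fine-tuning in the style the abstract describes: a pre-trained backbone is kept frozen and used once to extract image embeddings, and only a linear classification head is trained with DP-SGD. It uses PyTorch with the Opacus library; the embedding dimension, class count, and privacy budget (epsilon = 8, delta = 1e-5) are illustrative placeholders, not values from the paper.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Placeholder embeddings standing in for features extracted once by a
# frozen pre-trained foundation model, plus placeholder class labels.
features = torch.randn(10_000, 768)
labels = torch.randint(0, 10, (10_000,))
train_loader = DataLoader(TensorDataset(features, labels),
                          batch_size=512, shuffle=True)

# Only the linear head is trained privately.
classifier = nn.Linear(768, 10)
optimizer = optim.SGD(classifier.parameters(), lr=0.5, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# Wrap model/optimizer/loader so every step clips per-example gradients
# and adds Gaussian noise calibrated to the chosen (epsilon, delta).
privacy_engine = PrivacyEngine()
classifier, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
    module=classifier,
    optimizer=optimizer,
    data_loader=train_loader,
    epochs=20,
    target_epsilon=8.0,   # illustrative privacy budget
    target_delta=1e-5,
    max_grad_norm=1.0,    # per-example gradient clipping bound
)

classifier.train()
for epoch in range(20):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(classifier(x), y)
        loss.backward()
        optimizer.step()

print(f"spent epsilon: {privacy_engine.get_epsilon(delta=1e-5):.2f}")
```

Per-example gradient clipping bounds any single image's influence on each update, and the calibrated Gaussian noise then yields the formal (epsilon, delta) guarantee. Fine-tuning more of the backbone follows the same pattern, but layers that mix statistics across examples, such as batch normalization, must be avoided or replaced.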


Related research

10/13/2020
Chasing Your Long Tails: Differentially Private Prediction in Health Care Settings
Machine learning models in health care are often deployed in settings wh...

02/03/2023
Private, fair and accurate: Training large-scale, privacy-preserving AI models in radiology
Artificial intelligence (AI) models are increasingly used in the medical...

10/02/2019
Improving Differentially Private Models with Active Learning
Broad adoption of machine learning techniques has increased privacy conc...

09/21/2023
Privacy-Preserving In-Context Learning with Differentially Private Few-Shot Generation
We study the problem of in-context learning (ICL) with large language mo...

06/15/2023
ViP: A Differentially Private Foundation Model for Computer Vision
Artificial intelligence (AI) has seen a tremendous surge in capabilities...

05/22/2023
EXACT: Extensive Attack for Split Learning
Privacy-Preserving machine learning (PPML) can help us train and deploy ...

06/13/2023
Safeguarding Data in Multimodal AI: A Differentially Private Approach to CLIP Training
The surge in multimodal AI's success has sparked concerns over data priv...
