ViP: A Differentially Private Foundation Model for Computer Vision

06/15/2023
by Yaodong Yu, et al.

Artificial intelligence (AI) has seen a tremendous surge in capabilities thanks to the use of foundation models trained on internet-scale data. On the flip side, the uncurated nature of internet-scale data also poses significant privacy and legal risks, as such data often contains personal information or copyrighted material that should not be trained on without permission. In this work, we propose, as a mitigation measure, a recipe for training foundation vision models with a differential privacy (DP) guarantee. We identify masked autoencoders as a suitable learning algorithm that aligns well with DP-SGD, and train ViP – a Vision transformer with differential Privacy – under a strict privacy budget of ϵ=8 on the LAION400M dataset. We evaluate the quality of the representations learned by ViP on standard downstream vision tasks; in particular, ViP achieves a (non-private) linear probing accuracy of 55.7% on ImageNet, comparable to that of an end-to-end trained AlexNet (trained and evaluated on ImageNet). Our result suggests that scaling to internet-scale data can be practical for private learning. Code is available at <https://github.com/facebookresearch/ViP-MAE>.
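
The recipe pairs masked-autoencoder pretraining with DP-SGD: each example's gradient is clipped and Gaussian noise is added before the parameter update, so no single training image can overly influence the model. The PyTorch sketch below illustrates one such step; the `dp_sgd_step` function, its hyperparameter defaults, and the `loss_fn` hook are illustrative assumptions rather than the paper's actual training code (see the linked repository for that).

```python
# A minimal DP-SGD sketch (an assumption for illustration, not the
# authors' implementation). Per-example gradients are clipped to L2 norm
# at most `clip_norm`, summed, and perturbed with Gaussian noise of scale
# `sigma * clip_norm` before the averaged update. `loss_fn(model, example)`
# is a hypothetical hook returning a scalar loss, e.g. an MAE pixel
# reconstruction loss.
import torch

def dp_sgd_step(model, loss_fn, batch, lr=0.1, clip_norm=1.0, sigma=0.5):
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for example in batch:  # microbatches of size 1 give per-example grads
        model.zero_grad()
        loss_fn(model, example).backward()
        grads = [p.grad.detach().clone() if p.grad is not None
                 else torch.zeros_like(p) for p in params]
        # Rescale so this example's total gradient norm is <= clip_norm.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    with torch.no_grad():
        for p, s in zip(params, summed):
            # Noise calibrated to the clipping norm (the sensitivity).
            noise = torch.randn_like(s) * sigma * clip_norm
            p.add_(-(lr / len(batch)) * (s + noise))
```

Note that the noise multiplier `sigma` is not arbitrary: to meet a target budget such as ϵ=8, it must be calibrated with a privacy accountant over the number of training steps and the sampling rate. Libraries such as Opacus automate both the per-example clipping and this accounting.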
