Differentially Private Image Classification by Learning Priors from Random Processes
In privacy-preserving machine learning, differentially private stochastic gradient descent (DP-SGD) performs worse than SGD due to per-sample gradient clipping and noise addition. A recent focus in private learning research is improving the performance of DP-SGD on private data by incorporating priors that are learned on real-world public data. In this work, we explore how we can improve the privacy-utility tradeoff of DP-SGD by learning priors from images generated by random processes and transferring these priors to private data. We propose DP-RandP, a three-phase approach. We attain new state-of-the-art accuracy when training from scratch on CIFAR10, CIFAR100, and MedMNIST for a range of privacy budgets ε ∈ [1, 8]. In particular, we improve the previous best reported accuracy on CIFAR10 from 60.6% to 72.3% for ε = 1. Our code is available at https://github.com/inspire-group/DP-RandP.
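To make the per-sample clipping and noise addition mentioned above concrete, the following is a minimal illustrative sketch of a single DP-SGD step in plain PyTorch. It is not the authors' DP-RandP implementation (see the repository linked above); the function and parameter names (`dp_sgd_step`, `clip_norm`, `noise_multiplier`) are assumptions for illustration, and production code would typically use a vectorized library such as Opacus rather than a per-example loop.

```python
# Hypothetical sketch of one DP-SGD update step (not the DP-RandP code).
import torch


def dp_sgd_step(model, loss_fn, xs, ys, optimizer,
                clip_norm=1.0, noise_multiplier=1.0):
    """Clip each per-sample gradient to L2 norm <= clip_norm, average,
    and add Gaussian noise scaled by noise_multiplier * clip_norm."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Per-sample gradients, computed one example at a time for clarity;
    # vectorized implementations are much faster in practice.
    for x, y in zip(xs, ys):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        # Clip the full per-sample gradient to L2 norm <= clip_norm.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    batch_size = len(xs)
    for p, s in zip(params, summed):
        # Gaussian noise calibrated to the clipping norm, then average.
        noise = torch.randn_like(s) * (noise_multiplier * clip_norm)
        p.grad = (s + noise) / batch_size

    optimizer.step()
```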