Multi-Epoch Matrix Factorization Mechanisms for Private Machine Learning

We introduce new differentially private (DP) mechanisms for gradient-based machine learning (ML) training involving multiple passes (epochs) over a dataset, substantially improving the achievable privacy-utility-computation tradeoffs. Our key contribution is an extension of the online matrix factorization DP mechanism to multiple participations, substantially generalizing the approach of DMRST2022. We first give conditions under which the problem with per-iteration vector contributions can be reduced to the simpler one of scalar contributions. Using this, we formulate the construction of optimal (in total squared error at each iterate) matrix mechanisms for SGD variants as a convex program, and we propose an efficient optimization algorithm via a closed-form solution to the dual function. While tractable, both solving the convex problem offline and computing the necessary noise masks during training can become prohibitively expensive when many training steps are required. To address this, we design a Fourier-transform-based mechanism with significantly less computation and only a minor utility decrease. Extensive empirical evaluation on two tasks, example-level DP for image classification and user-level DP for language modeling, demonstrates substantial improvements over the previous state of the art. Though our primary application is to ML, our main DP results apply to arbitrary linear queries and hence may have much broader applicability.
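To make the underlying primitive concrete, the sketch below shows the generic matrix factorization mechanism for linear queries in the single-participation setting: factor the workload A = B C, privatize C x with Gaussian noise calibrated to the sensitivity of C, and map the result back through B. The function name, the trivial factorization B = A, C = I, and the toy dimensions are illustrative assumptions for this sketch; they are not the paper's optimized multi-epoch or Fourier-based constructions.

```python
# Minimal sketch of the matrix factorization mechanism for linear queries A x
# under DP (single participation per step, unit-norm per-step contributions).
# All names and the trivial factorization here are illustrative assumptions.
import numpy as np

def matrix_mechanism(A, B, C, x, noise_multiplier, rng=None):
    """Privately estimate A @ x using the factorization A = B @ C.

    Gaussian noise scaled to the L2 sensitivity of C is added to C @ x and
    mapped back through B, so per-query error is governed by the rows of B
    rather than by A directly.
    """
    rng = np.random.default_rng() if rng is None else rng
    assert np.allclose(A, B @ C), "factorization must reproduce the workload"
    # For one participation with unit-norm per-step data, the L2 sensitivity
    # of C is its maximum column norm.
    sens_C = np.linalg.norm(C, axis=0).max()
    z = rng.normal(scale=noise_multiplier * sens_C,
                   size=(C.shape[0],) + x.shape[1:])
    return B @ (C @ x + z)

# Example: prefix sums, the workload underlying SGD, with the trivial
# factorization B = A, C = I. Better factorizations trade error between
# the two factors.
n = 8
A = np.tril(np.ones((n, n)))   # lower-triangular prefix-sum workload
B, C = A.copy(), np.eye(n)
x = np.ones((n, 4))            # n per-step "gradients" in R^4
noisy_prefix_sums = matrix_mechanism(A, B, C, x, noise_multiplier=1.0)
```

Choosing B and C to minimize the total squared error of the reconstructed noise B z, subject to the sensitivity constraint on C, is the convex program referred to in the abstract; the multi-epoch setting changes the sensitivity constraint, since a single example may contribute to many rows of x.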

