Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation

07/09/2021
by Alexander Ziller et al.

In recent years, formal methods of privacy protection such as differential privacy (DP) have emerged that can be deployed to data-driven tasks such as machine learning (ML). Reconciling large-scale ML with the closed-form reasoning required for a principled analysis of individual privacy loss calls for new tools for automatic sensitivity analysis and for tracking an individual's data, and the features derived from it, through the flow of computation. For this purpose, we introduce a novel hybrid automatic differentiation (AD) system which combines the efficiency of reverse-mode AD with the ability to obtain a closed-form expression for any quantity in the computational graph. This enables modelling the sensitivity of arbitrary compositions of differentiable functions, such as the training of neural networks on private data. We demonstrate our approach by analysing the individual DP guarantees of statistical database queries, and we investigate its application to the training of DP neural networks. Our approach enables principled reasoning about privacy loss in data-processing settings and furthers the development of automatic sensitivity analysis and privacy budgeting systems.
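To make the core idea concrete, the following is a minimal sketch, not the authors' implementation, of how a closed-form (symbolic) expression for a database query lets one read off its sensitivity and calibrate DP noise. It uses sympy for the symbolic part; the mean query, the [0, 1] record bounds, and all variable names are assumptions made for this illustration, and the sketch covers only the closed-form half of the proposed hybrid (the reverse-mode efficiency component is omitted).

```python
# Toy illustration (not the paper's system): derive a closed-form expression
# for a statistical query, read off its sensitivity, and calibrate DP noise.
import sympy as sp

n = 5                                   # number of individuals in the toy database
xs = sp.symbols(f"x0:{n}", real=True)   # one symbolic variable per individual's record

query = sp.Add(*xs) / n                 # the mean query, as a symbolic expression

# Per-individual influence as the partial derivative of the query with respect
# to that individual's record (exact here because the query is linear).
grads = [sp.diff(query, x) for x in xs]
print(grads)                            # [1/5, 1/5, 1/5, 1/5, 1/5]

# For records clipped to [0, 1], the global L1 sensitivity of the mean is the
# largest change one record can induce: |1/n| * (1 - 0) = 1/n.
sensitivity = max(abs(g) for g in grads) * (1 - 0)
print(sensitivity)                      # 1/5

# Calibrate Laplace noise for epsilon-DP using the derived sensitivity:
# the Laplace mechanism uses scale b = sensitivity / epsilon.
epsilon = sp.Rational(1, 2)
scale = sensitivity / epsilon
print(scale)                            # 2/5
```

In the hybrid system described in the abstract, closed-form expressions of this kind would be available for any quantity in the computational graph of an ML model, so that, for example, the sensitivity of a gradient computation can be bounded and noise calibrated per individual.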


