A General Framework for Auditing Differentially Private Machine Learning

10/16/2022
by Fred Lu, et al.

We present a framework to statistically audit the privacy guarantee conferred by a differentially private machine learner in practice. While previous works have taken steps toward evaluating privacy loss through poisoning attacks or membership inference, they have been tailored to specific models or have demonstrated low statistical power. Our work develops a general methodology to empirically evaluate the privacy of differentially private machine learning implementations, combining improved privacy search and verification methods with a toolkit of influence-based poisoning attacks. We demonstrate significantly improved auditing power over previous approaches on a variety of models including logistic regression, Naive Bayes, and random forest. Our method can be used to detect privacy violations due to implementation errors or misuse. When violations are not present, it can aid in understanding the amount of information that can be leaked from a given dataset, algorithm, and privacy specification.
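As a rough illustration of the verification step such audits rely on, the sketch below shows the standard statistical test used in empirical DP auditing: run a distinguishing attack many times on two adjacent datasets, then convert the observed error rates into a high-confidence lower bound on epsilon via Clopper-Pearson intervals. This is a minimal sketch, not the authors' implementation or toolkit; the function names, the choice of attack, and the example counts are hypothetical.

```python
# Minimal sketch (assumed, not the paper's code) of converting attack outcomes
# into an empirical lower bound on epsilon. Any (eps, delta)-DP mechanism must
# satisfy FPR + e^eps * FNR >= 1 - delta (and symmetrically), so upper-bounding
# the attack's error rates lower-bounds eps.

import numpy as np
from scipy.stats import beta


def clopper_pearson_upper(successes: int, trials: int, alpha: float) -> float:
    """One-sided Clopper-Pearson upper bound on a Bernoulli rate at level 1 - alpha."""
    if successes >= trials:
        return 1.0
    return beta.ppf(1 - alpha, successes + 1, trials - successes)


def empirical_epsilon_lower_bound(fp: int, fn: int, trials: int,
                                  delta: float = 0.0,
                                  alpha: float = 0.05) -> float:
    """Lower-bound epsilon from observed false positives / false negatives
    of a distinguishing attack run `trials` times on each adjacent dataset."""
    fpr_hi = clopper_pearson_upper(fp, trials, alpha / 2)
    fnr_hi = clopper_pearson_upper(fn, trials, alpha / 2)
    bounds = []
    # Check both directions of the DP hypothesis-testing inequality.
    for a, b in [(fpr_hi, fnr_hi), (fnr_hi, fpr_hi)]:
        if b > 0 and 1 - delta - a > 0:
            bounds.append(np.log((1 - delta - a) / b))
    return max(bounds, default=0.0)


# Hypothetical outcome: over 1000 trials per world, the attack wrongly flags
# the unpoisoned dataset 20 times and misses the poisoned one 150 times.
print(empirical_epsilon_lower_bound(fp=20, fn=150, trials=1000, delta=1e-5))
```

A stronger attack (for example, the influence-based poisoning attacks described above) drives the error rates down and therefore raises the audited lower bound; a large gap between this bound and the claimed epsilon indicates either a weak attack or a loose analysis, while a bound exceeding the claimed epsilon signals a privacy violation.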


research · 06/13/2020
Auditing Differentially Private Machine Learning: How Private is Private SGD?
We investigate whether Differentially Private SGD offers better privacy ...

research · 03/16/2021
The Influence of Dropout on Membership Inference in Differentially Private Models
Differentially private models seek to protect the privacy of data the mo...

research · 11/23/2020
Differentially Private Learning Needs Better Features (or Much More Data)
We demonstrate that differentially private machine learning has not yet ...

research · 05/10/2021
Differentially Private Transfer Learning with Conditionally Deep Autoencoders
This paper considers the problem of differentially private semi-supervis...

research · 03/22/2021
d3p – A Python Package for Differentially-Private Probabilistic Programming
We present d3p, a software package designed to help fielding runtime eff...

research · 09/04/2023
Revealing the True Cost of Local Privacy: An Auditing Perspective
This paper introduces the LDP-Auditor framework for empirically estimati...

research · 10/27/2020
A Members First Approach to Enabling LinkedIn's Labor Market Insights at Scale
We describe the privatization method used in reporting labor market insi...
