Quantifying Membership Privacy via Information Leakage

10/12/2020
by Sara Saeidian, et al.

Machine learning models are known to memorize the unique properties of individual data points in their training set. This memorization can be exploited by several types of attacks to infer information about the training data, most notably membership inference attacks. In this paper, we propose an approach based on information leakage for guaranteeing membership privacy. Specifically, we propose to use a conditional form of maximal leakage to quantify the information leaking about individual data entries in a dataset, i.e., the entrywise information leakage. We apply our privacy analysis to the Private Aggregation of Teacher Ensembles (PATE) framework for privacy-preserving classification of sensitive data and prove that the entrywise information leakage of its aggregation mechanism is Schur-concave when the injected noise has a log-concave probability density. The Schur-concavity of this leakage implies that increased consensus among teachers in labeling a query reduces its associated privacy cost. Finally, we derive upper bounds on the entrywise information leakage when the aggregation mechanism uses Laplace-distributed noise.
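As background, the PATE aggregation mechanism analyzed here answers each query by a noisy plurality vote: every teacher model labels the query, the vote histogram is perturbed with independent Laplace noise, and the argmax is released as the label. The Python sketch below is a minimal illustration of that mechanism under these assumptions, not the authors' implementation; the function name pate_laplace_aggregate and the parameter scale are hypothetical names introduced here.

import numpy as np

def pate_laplace_aggregate(teacher_votes, num_classes, scale, rng=None):
    # Noisy-max aggregation: histogram the teacher votes, perturb each
    # class count with independent Laplace(0, scale) noise, and release
    # the index of the largest noisy count.
    rng = np.random.default_rng() if rng is None else rng
    counts = np.bincount(teacher_votes, minlength=num_classes)
    noisy_counts = counts + rng.laplace(loc=0.0, scale=scale, size=num_classes)
    return int(np.argmax(noisy_counts))

# Example: strong consensus among 10 teachers on class 2. Per the paper's
# Schur-concavity result, a more concentrated vote histogram incurs a lower
# privacy cost, and the noisy argmax almost always matches the plurality.
votes = np.array([2, 2, 2, 2, 1, 2, 0, 2, 2, 2])
print(pate_laplace_aggregate(votes, num_classes=3, scale=2.0))

A larger scale injects more noise, lowering the leakage about any individual teacher's training data at the cost of labeling accuracy; the paper's upper bounds quantify this trade-off for Laplace noise.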


