Bounding Information Leakage in Machine Learning

05/09/2021
by   Ganesh Del Grosso, et al.

Machine Learning services are being deployed in a wide range of applications, making it possible for an adversary with access to the algorithm and/or the model to extract sensitive information about the training data. This paper investigates fundamental bounds on such information leakage. First, we identify and bound the success rate of the worst-case membership inference attack, connecting it to the generalization error of the target model. Second, we study how much sensitive information the algorithm stores about its training set, deriving bounds on the mutual information between the sensitive attributes and the model parameters. Although our contributions are mostly theoretical, the bounds and the concepts involved are of practical relevance. Inspired by the theoretical analysis, we study linear regression and DNN models and illustrate how these bounds can be used to assess the privacy guarantees of ML models.
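To give a concrete feel for the first result, the sketch below simulates a loss-threshold membership inference attack and compares its advantage to the train/test loss gap. This is an illustrative toy, not the paper's actual bound: the per-example losses are synthetic Gaussian samples standing in for an overfit model, and the threshold sweep approximates the worst-case attack over this family.

```python
import random

random.seed(0)

# Hypothetical per-example losses: members (training points) tend to have
# lower loss than non-members, mimicking an overfit target model.
members = [random.gauss(0.2, 0.1) for _ in range(1000)]
non_members = [random.gauss(0.5, 0.1) for _ in range(1000)]

def attack_advantage(threshold):
    """Loss-threshold attack: guess 'member' when loss < threshold.

    Advantage = P(guess member | member) - P(guess member | non-member),
    i.e. true-positive rate minus false-positive rate.
    """
    tpr = sum(l < threshold for l in members) / len(members)
    fpr = sum(l < threshold for l in non_members) / len(non_members)
    return tpr - fpr

# Sweep thresholds to approximate the worst-case (strongest) attack
# within this threshold family.
best_adv = max(attack_advantage(t / 100) for t in range(0, 101))

# Generalization gap in expected loss: the quantity the paper's bound
# ties the attack success to -- a larger gap leaves more room for
# membership inference.
gen_gap = (sum(non_members) / len(non_members)
           - sum(members) / len(members))

print(f"best attack advantage ~ {best_adv:.2f}, loss gap ~ {gen_gap:.2f}")
```

Shrinking the gap between member and non-member loss distributions (i.e. reducing overfitting) drives the best achievable advantage toward zero, which is the intuition behind connecting attack success to generalization error.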


Related research:

- 01/29/2020, Modelling and Quantifying Membership Information Leakage in Machine Learning: "Machine learning models have been shown to be vulnerable to membership i..."
- 02/17/2020, Data and Model Dependencies of Membership Inference Attack: "Machine Learning (ML) techniques are used by most data-driven organisati..."
- 02/23/2021, Measuring Data Leakage in Machine-Learning Models with Fisher Information: "Machine-learning models contain information about the data they were tra..."
- 10/12/2020, Quantifying Membership Privacy via Information Leakage: "Machine learning models are known to memorize the unique properties of i..."
- 05/29/2019, Ultimate Power of Inference Attacks: Privacy Risks of High-Dimensional Models: "Models leak information about their training data. This enables attacker..."
- 12/14/2021, Generalization Bounds for Stochastic Gradient Langevin Dynamics: A Unified View via Information Leakage Analysis: "Recently, generalization bounds of the non-convex empirical risk minimiz..."
- 11/08/2019, Theoretical Guarantees for Model Auditing with Finite Adversaries: "Privacy concerns have led to the development of privacy-preserving appro..."
