On Primes, Log-Loss Scores and (No) Privacy

09/17/2020
by Abhinav Aggarwal, et al.

Membership inference attacks exploit the vulnerability created when models trained on customer data are exposed to queries from an adversary. A recently proposed auditing tool for measuring privacy leakage from sensitive datasets exposes more refined aggregates, such as log-loss scores, both to simulate inference attacks and to assess the total privacy leakage based on the adversary's predictions. In this paper, we prove that this additional information enables the adversary to infer the membership of any number of datapoints with full accuracy in a single query, causing a complete breach of membership privacy. Our approach requires no attack-model training and no side knowledge on the adversary's part. Moreover, our algorithms are agnostic to the model under attack and therefore enable perfect membership inference even against models that do not memorize or overfit. In particular, our observations shed light on how much information statistical aggregates leak and how that leakage can be exploited.
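The single-query attack rests on a number-theoretic trick hinted at by the title: if the adversary submits confidence scores of the form q/(q+1) for distinct primes q, then the exposed aggregate log-loss determines the integer prod_i q_i^{y_i}, where y_i are the hidden 0/1 membership bits, and the fundamental theorem of arithmetic recovers every bit from that one number. Below is a minimal Python sketch of this idea, assuming an auditor that returns the exact aggregate log-loss against the hidden labels; the oracle setup, function names, and small-n precision caveat are illustrative, not taken verbatim from the paper.

```python
import math
import random

def first_primes(k):
    """Return the first k primes by trial division (fine for small k)."""
    primes = []
    n = 2
    while len(primes) < k:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return primes

def log_loss(y, y_hat):
    """Aggregate log-loss that the auditing tool is assumed to expose."""
    return -sum(yi * math.log(p) + (1 - yi) * math.log(1 - p)
                for yi, p in zip(y, y_hat)) / len(y)

def single_query_attack(n_points):
    """Recover all membership bits from one exposed log-loss value.

    Hypothetical setup: the adversary submits one confidence score per
    datapoint; the auditor returns the aggregate log-loss against the
    hidden 0/1 membership labels.
    """
    primes = first_primes(n_points)

    # Crafted scores q/(q+1): with these, exp(-n * loss) equals
    # prod_i q_i^{y_i} / prod_i (q_i + 1), a ratio of integers whose
    # numerator has a unique prime factorization.
    y_hat = [q / (q + 1) for q in primes]

    # Hidden membership bits, known only to the auditor.
    y_true = [random.randint(0, 1) for _ in range(n_points)]
    loss = log_loss(y_true, y_hat)  # the single exposed aggregate

    # Clear the denominator and read each bit off the factorization.
    # (Float precision keeps this toy demo to small n; a real attack
    # would need to account for the precision of the reported score.)
    N = round(math.exp(-n_points * loss) * math.prod(q + 1 for q in primes))
    return [1 if N % q == 0 else 0 for q in primes], y_true

recovered, truth = single_query_attack(10)
assert recovered == truth  # perfect membership inference in one query
```

Note that the sketch never queries the model under attack: the crafted scores depend only on the primes, which is consistent with the abstract's claim that the attack is model-agnostic and needs no attack-model training.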


Related research

03/22/2023 · Do Backdoors Assist Membership Inference Attacks?
When an adversary provides poison samples to a machine learning model, p...

10/02/2020 · Quantifying Privacy Leakage in Graph Embedding
Graph embeddings have been proposed to map graph data to low dimensional...

10/12/2020 · Quantifying Membership Privacy via Information Leakage
Machine learning models are known to memorize the unique properties of i...

11/10/2022 · On the Privacy Risks of Algorithmic Recourse
As predictive models are increasingly being employed to make consequenti...

06/09/2020 · On the Effectiveness of Regularization Against Membership Inference Attacks
Deep learning models often raise privacy concerns as they leak informati...

09/18/2022 · Distribution inference risks: Identifying and mitigating sources of leakage
A large body of work shows that machine learning (ML) models can leak se...

08/01/2022 · On the Evaluation of User Privacy in Deep Neural Networks using Timing Side Channel
Recent Deep Learning (DL) advancements in solving complex real-world tas...
