Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity

04/27/2021
by   Mathias P. M. Parisot, et al.

The goal of a machine learning model is to make correct predictions for a specific task by learning important properties and patterns from the data. In doing so, the model may also learn properties that are unrelated to its primary task. Property inference attacks exploit this: they aim to infer, from a given model (the target model), properties of the training dataset that are seemingly unrelated to the model's primary goal. If the training data is sensitive, such an attack can lead to privacy leakage. This paper investigates the influence of the target model's complexity on the accuracy of this type of attack, focusing on convolutional neural network classifiers. We perform attacks on models trained on facial images to predict whether a person's mouth is open; the attacks' goal is to infer whether the training dataset is balanced gender-wise. Our findings reveal that the risk of a privacy breach is present independently of the target model's complexity: for all studied architectures, the attack's accuracy is clearly above the baseline. We discuss the implications of property inference on personal data in the light of data protection regulations and guidelines.
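To make the attack setting concrete, here is a minimal sketch of the standard meta-classifier approach to property inference (not the paper's exact pipeline). The adversary trains shadow models on datasets with and without the target property (e.g., gender balance), represents each shadow model as a feature vector (typically flattened parameters), and trains a meta-classifier to predict the property. Below, synthetic Gaussian vectors stand in for real shadow-model features so the example runs on its own; the `shadow_features` helper and its separation shift are illustrative assumptions, not measured values.

```python
# Sketch of a meta-classifier property inference attack.
# Assumption: synthetic vectors replace the flattened parameters of
# real shadow CNNs, which would be costly to train in an example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def shadow_features(balanced: bool, n: int, dim: int = 32) -> np.ndarray:
    # Placeholder for features extracted from shadow models trained on
    # datasets that do (balanced=True) or do not have the property.
    shift = 0.5 if balanced else 0.0
    return rng.normal(loc=shift, scale=1.0, size=(n, dim))

# Meta-training set: one feature vector per shadow model,
# labeled with whether its training data had the property.
X = np.vstack([shadow_features(True, 100), shadow_features(False, 100)])
y = np.array([1] * 100 + [0] * 100)

meta = LogisticRegression(max_iter=1000).fit(X, y)

# At attack time, the adversary extracts the same features from the
# target model and asks the meta-classifier about the property.
target = shadow_features(True, 1)
print("inferred balanced:", bool(meta.predict(target)[0]))
```

The key design point is that the meta-classifier never sees the target's training data: it only sees the trained model, which is why the paper frames a model learning such "unrelated" properties as a privacy risk.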


