Image Classifiers Leak Sensitive Attributes About Their Classes

03/16/2023
by   Lukas Struppek, et al.

Neural network-based image classifiers are powerful tools for computer vision tasks, but they can inadvertently reveal sensitive attribute information about their classes, raising privacy concerns. To investigate this privacy leakage, we introduce the first Class Attribute Inference Attack (Caia), which leverages recent advances in text-to-image synthesis to infer sensitive attributes of individual classes in a black-box setting, while remaining competitive with related white-box attacks. Our extensive experiments in the face recognition domain show that Caia can accurately infer undisclosed sensitive attributes, such as an individual's hair color, gender, and racial appearance, which are not part of the training labels. Interestingly, we demonstrate that adversarially robust models are even more vulnerable to such privacy leakage than standard models, indicating that a trade-off between robustness and privacy exists.
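The core idea of a class attribute inference attack can be illustrated with a minimal sketch: generate candidate images for each possible attribute value with a text-to-image model, query the target classifier on them in a black-box fashion, and report the value whose candidates receive the highest confidence for the target class. The function names, the stub generator, and the toy classifier below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a class attribute inference attack in the spirit of Caia.
# The generator and classifier are toy stand-ins: in the real attack, the
# generator would be a text-to-image model and the classifier a trained
# face-recognition network queried only through its outputs (black-box).

ATTRIBUTE_VALUES = ["black hair", "blond hair", "brown hair"]  # candidate values


def generate_images(prompt, n=4):
    """Stand-in for text-to-image synthesis: returns n synthetic 'images'."""
    return [f"{prompt} [sample {i}]" for i in range(n)]


def classifier_confidence(image, target_class):
    """Stand-in for a black-box query to the target classifier.

    Toy behavior (assumption for illustration): the classifier assigns high
    confidence to images whose prompt mentions 'blond hair'.
    """
    return 0.9 if "blond hair" in image else 0.1


def infer_attribute(target_class):
    """Infer the attribute value whose candidates score highest on average."""
    scores = {}
    for value in ATTRIBUTE_VALUES:
        images = generate_images(f"a photo of a person with {value}")
        scores[value] = sum(
            classifier_confidence(im, target_class) for im in images
        ) / len(images)
    return max(scores, key=scores.get)


print(infer_attribute(target_class=0))  # -> "blond hair" with this toy classifier
```

The design choice worth noting is that only classifier outputs are consumed, which is what makes the attack black-box; no gradients or internal features of the target model are needed.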


Related research

06/01/2023 · Does Black-box Attribute Inference Attacks on Graph Neural Networks Constitute Privacy Risk?
Graph neural networks (GNNs) have shown promising results on real-life d...

05/28/2019 · Overlearning Reveals Sensitive Attributes
`Overlearning' means that a model trained for a seemingly simple objecti...

05/25/2021 · Honest-but-Curious Nets: Sensitive Attributes of Private Inputs can be Secretly Coded into the Entropy of Classifiers' Outputs
It is known that deep neural networks, trained for the classification of...

01/28/2023 · Semantic Adversarial Attacks on Face Recognition through Significant Attributes
Face recognition is known to be vulnerable to adversarial face images. E...

06/12/2020 · Dataset-Level Attribute Leakage in Collaborative Learning
Multi-party machine learning allows several parties to build a joint mod...

06/05/2021 · Variational Leakage: The Role of Information Complexity in Privacy Leakage
We study the role of information complexity in privacy leakage about an ...

06/29/2020 · Reducing Risk of Model Inversion Using Privacy-Guided Training
Machine learning models often pose a threat to the privacy of individual...
