Membership Inference Attacks Against Semantic Segmentation Models

12/02/2022
by Tomáš Chobola, et al.

Membership inference attacks aim to infer whether a data record was used to train a target model by observing the model's predictions. In sensitive domains such as healthcare, a successful attack can constitute a severe privacy violation. In this work we address the gap in existing knowledge by conducting an exhaustive study of membership inference attacks and defences in the domain of semantic image segmentation. Our findings indicate that, under certain threat models, these learning settings can be considerably more vulnerable than the classification settings considered in prior work. We additionally investigate a threat model in which a dishonest adversary can poison the model to aid their inference, and we evaluate the effect of these adaptations on attack success. We quantitatively evaluate the attacks on a number of popular model architectures across a variety of semantic segmentation tasks, demonstrating that membership inference attacks in this domain can achieve a high success rate, and that defending against them may result in unfavourable privacy-utility trade-offs or increased computational costs.
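For context, the sketch below illustrates the general idea behind such attacks with a simple loss-threshold membership test applied to a segmentation model. This is a generic baseline rather than the specific attacks or the poisoning-assisted threat model studied in the paper; the function name, the assumption that the model returns per-pixel class logits of shape (1, num_classes, H, W), and the calibration threshold are hypothetical.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def loss_threshold_mia(model, image, mask, threshold):
    """Guess membership from how well the model fits one labelled example.

    A sample whose mean per-pixel loss falls below `threshold` is predicted
    to be a training-set member, since models typically fit their training
    data more closely than unseen data. (Illustrative baseline only.)

    image: float tensor of shape (1, C, H, W)
    mask:  long tensor of shape (1, H, W) with per-pixel class indices
    """
    model.eval()
    logits = model(image)                 # assumed output: (1, num_classes, H, W)
    loss = F.cross_entropy(logits, mask)  # mean per-pixel cross-entropy
    return loss.item() < threshold        # True => predicted "member"
```

In practice the threshold would have to be calibrated by the attacker, for example on the outputs of shadow models trained on data from a similar distribution.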

