Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation

12/20/2019
by Yang He, et al.

Today's success of state-of-the-art methods for semantic segmentation is driven by large datasets. Data is considered an important asset that needs to be protected, as the collection and annotation of such datasets comes at significant effort and cost. In addition, visual data may contain private or sensitive information, which makes it equally unsuited for public release. Unfortunately, recent work on membership inference in the broader area of adversarial machine learning and inference attacks on machine learning models has shown that even black-box classifiers leak information about the dataset they were trained on. We present the first attacks and defenses for complex, state-of-the-art models for semantic segmentation. To mitigate the associated risks, we also study a series of defenses against such membership inference attacks and find effective countermeasures. Finally, we extensively evaluate our attacks and defenses on a range of relevant real-world datasets: Cityscapes, BDD100K, and Mapillary Vistas.
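The abstract does not reproduce the paper's attack details, but the core intuition behind membership inference, namely that a model tends to behave more confidently on the data it was trained on, can be illustrated with a minimal, generic baseline. The sketch below is a hypothetical confidence-thresholding attack against a segmentation model's per-pixel softmax output; the `target_model` callable, the shape conventions, and the quantile-based calibration step are all illustrative assumptions, not the method proposed by the authors.

```python
import numpy as np

def membership_score(probs: np.ndarray) -> float:
    """Mean per-pixel confidence of a segmentation posterior.

    probs: softmax output of the target model with shape (C, H, W),
           i.e. per-pixel class probabilities over C classes.
    Models are typically more confident on images they were trained on,
    so training members tend to receive higher scores.
    """
    return float(probs.max(axis=0).mean())

def infer_membership(probs: np.ndarray, threshold: float) -> bool:
    """Flag an image as a training-set member if its average
    confidence exceeds a threshold calibrated on known non-members."""
    return membership_score(probs) > threshold

# Illustrative usage (hypothetical): calibrate the threshold on images
# known to lie outside the training set, then query a candidate image.
# `target_model(image)` is assumed to return (C, H, W) softmax probabilities.
#
# non_member_scores = [membership_score(target_model(x)) for x in holdout]
# threshold = np.quantile(non_member_scores, 0.95)  # ~5% false-positive rate
# is_member = infer_membership(target_model(candidate), threshold)
```

Stronger attacks on segmentation models generally exploit richer features of the structured, spatially correlated posterior (for example, per-pixel loss or confidence maps fed to a learned attack classifier) rather than a single scalar statistic, but the thresholding principle sketched here is the same.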
