Privacy Leakage Avoidance with Switching Ensembles

11/18/2019
by Rauf Izmailov, et al.

We consider membership inference attacks, one of the main privacy issues in machine learning. These recently developed attacks have been shown to determine, with confidence better than a random guess, whether a given sample belongs to the dataset on which the attacked machine learning model was trained. Several approaches have been developed to mitigate this privacy leakage, but the performance tradeoffs of these defensive mechanisms (i.e., the accuracy and utility of the defended machine learning model) have not yet been well studied. We propose a novel approach, privacy leakage avoidance with switching ensembles (PASE), which protects against current membership inference attacks with a very small accuracy penalty, while requiring an acceptable increase in training and inference time. We test our PASE method, along with the current state-of-the-art PATE approach, on three calibration image datasets and analyze their tradeoffs.
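The abstract does not spell out the switching mechanism, so the following is a minimal, hedged sketch of one plausible realization: the training set is split into K disjoint folds, one sub-model is trained on each fold's complement, and a query that matches a known training sample is routed to the sub-model that never saw it, so its prediction statistics resemble those of a non-member. The class name SwitchingEnsemble, the fold-based routing, and the logistic-regression base learner are all illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


class SwitchingEnsemble:
    """Illustrative switching ensemble (assumed mechanism, not the paper's)."""

    def __init__(self, n_folds=3, random_state=0):
        self.n_folds = n_folds
        self.random_state = random_state

    def fit(self, X, y):
        rng = np.random.default_rng(self.random_state)
        # Assign each training sample to one of K disjoint folds.
        self.fold_of = rng.integers(0, self.n_folds, size=len(X))
        self.models = []
        for k in range(self.n_folds):
            mask = self.fold_of != k  # sub-model k never sees fold k
            m = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
            self.models.append(m)
        # Remember which fold each training sample fell into, so it can
        # be routed to a sub-model that excluded it at inference time.
        self._train_fold = {tuple(row): f for row, f in zip(X, self.fold_of)}
        return self

    def predict_proba(self, X):
        # Assumes every class appears in each fold complement, so all
        # sub-models share the same class set.
        out = np.empty((len(X), self.models[0].classes_.size))
        for i, row in enumerate(X):
            # A known training sample is answered by the sub-model that
            # was NOT trained on it; any sub-model (here, model 0) can
            # answer an unseen query.
            k = self._train_fold.get(tuple(row), 0)
            out[i] = self.models[k].predict_proba(row.reshape(1, -1))[0]
        return out


if __name__ == "__main__":
    # Toy demonstration on synthetic data: member predictions come from
    # sub-models that never saw the queried sample, so mean confidence
    # on members and non-members should be close.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 5))
    y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)
    ens = SwitchingEnsemble(n_folds=3).fit(X[:200], y[:200])
    member_conf = ens.predict_proba(X[:200]).max(axis=1).mean()
    nonmember_conf = ens.predict_proba(X[200:]).max(axis=1).mean()
    print(f"mean max-confidence, members:     {member_conf:.3f}")
    print(f"mean max-confidence, non-members: {nonmember_conf:.3f}")
```

Under these assumptions, a simple confidence-threshold membership test (flag a sample as a member if the model's top predicted probability exceeds a cutoff) loses its signal, because member queries are answered by sub-models for which those samples are effectively held-out data.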

Related research:

- Can Membership Inferencing be Refuted? (03/07/2023)
  Membership inference (MI) attack is currently the most popular test for ...
- Accuracy-Privacy Trade-off in Deep Ensembles (05/12/2021)
  Deep ensemble learning has been shown to improve accuracy by training mu...
- Quantifying Membership Privacy via Information Leakage (10/12/2020)
  Machine learning models are known to memorize the unique properties of i...
- Membership Inference Attack for Beluga Whales Discrimination (02/28/2023)
  To efficiently monitor the growth and evolution of a particular wildlife...
- Modelling and Quantifying Membership Information Leakage in Machine Learning (01/29/2020)
  Machine learning models have been shown to be vulnerable to membership i...
- Leveraging Adversarial Examples to Quantify Membership Information Leakage (03/17/2022)
  The use of personal data for training machine learning systems comes wit...
- Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction (07/04/2023)
  Machine learning (ML) models are vulnerable to membership inference atta...
