Additive Logistic Mechanism for Privacy-Preserving Self-Supervised Learning

05/25/2022
by Parham Gohari, et al.

We study the privacy risks associated with training a neural network's weights with self-supervised learning algorithms. Through empirical evidence, we show that the fine-tuning stage, in which the network weights are updated with an informative and often private dataset, is vulnerable to privacy attacks. To address these vulnerabilities, we design a post-training privacy-protection algorithm that adds noise to the fine-tuned weights, and we propose a novel differential privacy mechanism that samples noise from the logistic distribution. Compared with the two conventional additive noise mechanisms, namely the Laplace and Gaussian mechanisms, the proposed mechanism uses a bell-shaped distribution that resembles the Gaussian distribution, while satisfying pure ϵ-differential privacy like the Laplace mechanism. We apply membership inference attacks to both unprotected and protected models to quantify the trade-off between privacy and performance. We show that the proposed protection algorithm can effectively reduce the attack accuracy to roughly 50%, equivalent to random guessing, while keeping the performance loss below 5%.
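The abstract describes a post-training step that perturbs the fine-tuned weights with noise drawn from a logistic distribution. A minimal sketch of such an additive mechanism is below; note that the scale calibration shown (sensitivity divided by ϵ, mirroring the Laplace mechanism) is an assumption for illustration only, since the paper derives the exact scale required for the logistic distribution to satisfy pure ϵ-differential privacy.

```python
import numpy as np

def logistic_mechanism(weights, sensitivity, epsilon, rng=None):
    """Perturb model weights with additive logistic noise (sketch).

    The scale is set to sensitivity / epsilon here, mirroring the
    Laplace mechanism's calibration; this is an assumed placeholder,
    not the paper's exact calibration for the logistic distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon  # assumed calibration, for illustration
    # The logistic distribution is bell-shaped like the Gaussian,
    # but has heavier (exponential) tails, which enables pure eps-DP.
    noise = rng.logistic(loc=0.0, scale=scale, size=weights.shape)
    return weights + noise
```

In a post-training protection pipeline, this would be applied once to the fine-tuned weight tensors before the model is released, trading a small accuracy loss for resistance to membership inference.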


Related research

12/19/2022 · Grafting Laplace and Gaussian distributions: A new noise mechanism for differential privacy
  The framework of Differential privacy protects an individual's privacy w...

02/07/2023 · Differential Privacy with Higher Utility through Non-identical Additive Noise
  Differential privacy is typically ensured by perturbation with additive ...

06/16/2022 · Introducing the Huber mechanism for differentially private low-rank matrix completion
  Performing low-rank matrix completion with sensitive user data calls for...

01/02/2018 · MVG Mechanism: Differential Privacy under Matrix-Valued Query
  Differential privacy mechanism design has traditionally been tailored fo...

01/19/2022 · Kantorovich Mechanism for Pufferfish Privacy
  Pufferfish privacy achieves ϵ-indistinguishability over a set of secret ...

03/02/2020 · Differential Privacy at Risk: Bridging Randomness and Privacy Budget
  The calibration of noise for a privacy-preserving mechanism depends on t...

06/15/2023 · Training generative models from privatized data
  Local differential privacy (LDP) is a powerful method for privacy-preser...
