
Additive Logistic Mechanism for Privacy-Preserving Self-Supervised Learning

05/25/2022
by Yunhao Yang, et al.

We study the privacy risks associated with training a neural network's weights with self-supervised learning algorithms. Through empirical evidence, we show that the fine-tuning stage, in which the network weights are updated with an informative and often private dataset, is vulnerable to privacy attacks. To address these vulnerabilities, we design a post-training privacy-protection algorithm that adds noise to the fine-tuned weights, and we propose a novel differential privacy mechanism that samples noise from the logistic distribution. Compared to the two conventional additive noise mechanisms, namely the Laplace and the Gaussian mechanisms, the proposed mechanism uses a bell-shaped distribution that resembles that of the Gaussian mechanism, and it satisfies pure ϵ-differential privacy similarly to the Laplace mechanism. We apply membership inference attacks to both unprotected and protected models to quantify the trade-off between model privacy and performance. We show that the proposed protection algorithm effectively reduces the attack accuracy to roughly 50%, equivalent to random guessing, while keeping the performance loss below 5%.
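The core idea of the post-training step, adding logistic noise to the fine-tuned weights, can be illustrated with a short Python sketch. The function name add_logistic_noise and the scale calibration sensitivity / epsilon (mirroring the Laplace mechanism) are assumptions made for illustration only; the paper's exact noise calibration and sensitivity analysis may differ.

```python
import numpy as np

def add_logistic_noise(weights: np.ndarray, sensitivity: float, epsilon: float) -> np.ndarray:
    """Perturb fine-tuned weights with i.i.d. logistic noise.

    A minimal sketch of a post-training protection step: each weight is
    perturbed by a sample from a zero-mean logistic distribution. The scale
    below (sensitivity / epsilon) is an assumed calibration, analogous to
    the Laplace mechanism, not the paper's exact formula.
    """
    scale = sensitivity / epsilon
    noise = np.random.logistic(loc=0.0, scale=scale, size=weights.shape)
    return weights + noise

# Hypothetical usage: perturb a flattened weight vector after fine-tuning.
fine_tuned_weights = np.random.randn(1000)  # stand-in for real model weights
protected_weights = add_logistic_noise(fine_tuned_weights, sensitivity=0.1, epsilon=1.0)
```

The logistic distribution is bell-shaped like the Gaussian but has heavier, exponentially decaying tails, which is what allows a pure ϵ-differential-privacy guarantee of the kind the abstract attributes to the Laplace mechanism.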

