
Additive Logistic Mechanism for Privacy-Preserving Self-Supervised Learning

by Yunhao Yang, et al.

We study the privacy risks associated with training a neural network's weights with self-supervised learning algorithms. Through empirical evidence, we show that the fine-tuning stage, in which the network weights are updated with an informative and often private dataset, is vulnerable to privacy attacks. To address these vulnerabilities, we design a post-training privacy-protection algorithm that adds noise to the fine-tuned weights and propose a novel differential privacy mechanism that samples noise from the logistic distribution. Compared to the two conventional additive noise mechanisms, namely the Laplace and the Gaussian mechanisms, the proposed mechanism uses a bell-shaped distribution that resembles the distribution of the Gaussian mechanism, and it satisfies pure ϵ-differential privacy like the Laplace mechanism. We apply membership inference attacks on both unprotected and protected models to quantify the trade-off between the models' privacy and performance. We show that the proposed protection algorithm can effectively reduce the attack accuracy to roughly 50% (equivalent to random guessing) while keeping the performance loss below 5%.
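The core idea, additive noise drawn from the logistic distribution, can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the paper's exact algorithm: the scale calibration `sensitivity / epsilon` mirrors the Laplace mechanism's calibration and is an assumption here, as is the helper name `logistic_mechanism`.

```python
import numpy as np

def logistic_mechanism(weights, sensitivity, epsilon, rng=None):
    """Perturb an array of fine-tuned weights with zero-mean logistic noise.

    Illustrative calibration (assumed, by analogy with the Laplace
    mechanism): scale = sensitivity / epsilon, so smaller epsilon
    (stronger privacy) yields heavier noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    # Logistic noise is bell-shaped like Gaussian noise but has
    # exponential (not squared-exponential) tails.
    noise = rng.logistic(loc=0.0, scale=scale, size=np.shape(weights))
    return np.asarray(weights) + noise

# Example: protect a vector of fine-tuned weights post-training.
w = np.array([0.3, -1.2, 0.8])
w_private = logistic_mechanism(w, sensitivity=1.0, epsilon=2.0)
```

Because the perturbation is applied once to the final weights, it fits the post-training setting described above: the training pipeline itself is unchanged, and only the released model is randomized.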



