Differentially Private Variational Autoencoders with Term-wise Gradient Aggregation

06/19/2020
by   Tsubasa Takahashi, et al.

This paper studies how to learn variational autoencoders (VAEs) with a variety of divergences under differential privacy constraints. A VAE is often built with a prior distribution chosen to describe the desired properties of the learned representations, and a divergence is introduced as a regularization term that pulls the representations toward that prior. With differentially private SGD (DP-SGD), which randomizes a stochastic gradient by injecting noise calibrated to the gradient's sensitivity, we can easily build a differentially private model. However, we reveal that attaching such divergences increases the sensitivity from O(1) to O(B) in the batch size B, which forces the injection of so much noise that learning becomes difficult. To solve this issue, we propose term-wise DP-SGD, which crafts randomized gradients in two different ways tailored to the compositions of the loss terms. Term-wise DP-SGD keeps the sensitivity at O(1) even when the divergence is attached, and therefore reduces the amount of noise that must be injected. In our experiments, we demonstrate that our method works well with two pairs of prior distribution and divergence.
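The core idea above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function names and the split of the loss into a per-example reconstruction term and a batch-level divergence term are assumptions for illustration. Per-example gradients of the reconstruction term are clipped individually, while the batch-level divergence gradient is clipped once as a single vector, so each term contributes O(1) sensitivity regardless of the batch size B:

```python
import numpy as np

def clip(g, clip_norm):
    """Scale a gradient vector so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(g)
    return g * min(1.0, clip_norm / (norm + 1e-12))

def termwise_dp_gradient(per_example_recon_grads, divergence_grad,
                         clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Term-wise DP-SGD sketch (illustrative, not the paper's code).

    per_example_recon_grads: (B, d) per-example gradients of the
        reconstruction term. Clipping each one bounds any single
        example's contribution by clip_norm, so the summed term has
        sensitivity O(1) in the batch size B.
    divergence_grad: (d,) gradient of a batch-level divergence term,
        clipped once as a whole so it also contributes O(1) sensitivity.
    """
    rng = rng or np.random.default_rng(0)
    B, d = per_example_recon_grads.shape
    # Per-example clipping of the reconstruction term.
    clipped = np.stack([clip(g, clip_norm) for g in per_example_recon_grads])
    recon_noisy = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, d)
    # The divergence is computed over the whole batch, so it is
    # clipped and noised as a single vector, not per example.
    div_noisy = clip(divergence_grad, clip_norm) + rng.normal(
        0.0, noise_multiplier * clip_norm, d)
    return (recon_noisy + div_noisy) / B
```

By contrast, clipping the combined per-example gradient after adding a batch-level divergence term would let every example's gradient carry the divergence contribution, inflating the sensitivity to O(B); handling the terms separately avoids this.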


Related research:

- NeuralDP: Differentially private neural networks by design (07/30/2021)
- Differentially Private SGD with Sparse Gradients (12/01/2021)
- DPVIm: Differentially Private Variational Inference Improved (10/28/2022)
- Differentially private training of neural networks with Langevin dynamics for calibrated predictive uncertainty (07/09/2021)
- Differentially Private Coordinate Descent for Composite Empirical Risk Minimization (10/22/2021)
- Learning Numeric Optimal Differentially Private Truncated Additive Mechanisms (07/27/2021)
- Differentially Private Decentralized Learning (06/14/2020)
