Differentially Private Generation of Small Images

by Justus T. C. Schwabedal, et al.

We explore the training of generative adversarial networks with differential privacy to anonymize image data sets. On MNIST, we numerically measure the privacy-utility trade-off using the parameters of ϵ-δ differential privacy and the inception score. Our experiments uncover a saturated training regime in which an increasing privacy budget adds little to the quality of generated images. We also explain analytically why differentially private Adam optimization is independent of the gradient clipping parameter. Furthermore, we highlight common errors that we uncovered in previous works on differentially private deep learning. Throughout, we hope to help prevent erroneous estimates of anonymity in the future.








1 Introduction

Differential privacy is the de-facto standard for the anonymized release of statistical measurements (Dwork et al., 2014). The method information-theoretically limits how much of any individual example in a data set leaks into the released statistics. Risks of privacy infringement therefore remain bounded independent of the infringement method. This independence is key as increasingly powerful expert systems are trained on cross-referenced statistics, which may lead to accidental, inconceivably intricate, and hard-to-detect infringements. In this work, we explore a method able to anonymize not only derived statistical measurements, but the underlying raw data set with the same differential-privacy guarantees. We follow others in training generative adversarial networks (GANs) with differential privacy.

Abadi et al. (2016) proposed a method of training neural networks with differential privacy in their seminal paper. The authors proposed to make the gradient computation of stochastic gradient descent (SGD) a randomized mechanism by clipping the L-2 norm of each example's parameter gradient, and by adding random Gaussian noise to the gradients. Using the same method, we show that it is possible to train generative adversarial networks (GANs) on high-dimensional images (for an overview of GANs, see Gui et al. (2020)). Their generator networks can then be used to synthesize image data that are of high utility for further processing, but come with guaranteed privacy for the original data source.

Privacy-preserving training of GANs with DP-SGD has been attempted before. Beaulieu-Jones et al. (2017) trained a generator for labeled blood-pressure trajectories to synthesize anonymous samples from the SPRINT trial using the AC-GAN approach. Zhang et al. (2018) devised a Wasserstein GAN with gradient penalty to train a generator for MNIST and CIFAR10; to improve training under privacy, they grouped parameters according to their gradients and adjusted the clipping bound for each group. Xie et al. (2018) use the original Wasserstein GAN procedure, wherein the critic's (discriminator's) parameters are clipped. This in turn ensures that gradients are bounded, thus fulfilling the criteria to compute privacy bounds from the noise variance added to the gradients.

In the remainder, we discuss the prior works cited above and point out some important fallacies. We then present our own analysis of differentially private synthetic data generated from the MNIST data set, and discuss how the DP parameters affect the quality of generated images, which we measure using the inception score.

2 Methods

2.1 Differentially private stochastic gradient descent

A randomized mechanism \(M\) satisfies \((\epsilon, \delta)\)-differential privacy if the following inequality holds for any adjacent pair of data sets \(D\) and \(D'\) and any set of outcomes \(S\):

\[ \Pr[M(D) \in S] \le e^{\epsilon} \Pr[M(D') \in S] + \delta. \tag{1} \]

The Gaussian randomized mechanism adds to a \(d\)-dimensional mechanism \(f\) random Gaussian noise with variance \(\sigma^2 S_f^2\), wherein \(S_f\) is the L-2 sensitivity of \(f\) across neighboring data sets, and \(\sigma\) is the noise multiplier. Drawing vector-valued independent Gaussian variates \(z \sim \mathcal{N}(0, \sigma^2 S_f^2 I_d)\), the mechanism is defined as

\[ M(D) = f(D) + z. \tag{2} \]

In the training of neural networks with stochastic gradient descent, the mechanism is the computation of the parameter gradient update \(g\), i.e. the mean over parameter gradients \(g_i\) from samples \(x_i\) across a mini-batch \(B\):

\[ g = \frac{1}{|B|} \sum_{i \in B} g_i, \qquad g_i = \nabla_\theta \mathcal{L}(\theta, x_i). \tag{3} \]

Only one gradient differs in Eq. (3) between adjacent data sets. Let us suppose we can limit the L-2 norm of individual gradients to \(C\). Then \(g\) has an L-2 sensitivity of \(C/|B|\) across adjacent data sets.

Based upon this theory, Abadi et al. (2016) formulated differentially private SGD, which we reproduce in Algo. (1). Note that the gradient's L-2 norm of each example is clipped individually (line 5), and that independent, vector-valued Gaussian variates are added (line 6). One may replace the simple descent step (line 7) with any higher-order or moment-based gradient descent algorithm, such as RMSprop or Adam (Tieleman et al., 2012, Kingma and Ba, 2015).

Input: Examples \(\{x_1, \dots, x_N\}\), neural-network parameters \(\theta\), loss function \(\mathcal{L}(\theta, x)\), learning rate \(\eta\), noise scale \(\sigma\), gradient norm bound \(C\), random noise \(z_t \sim \mathcal{N}(0, \sigma^2 C^2 I)\)
Output: Differentially private parameters \(\theta_T\)
1 for \(t = 1, \dots, T\) do
2       Sample a random minibatch \(B_t\)
3       for each example \(x_i \in B_t\) do
4             Compute \(g_t(x_i) = \nabla_\theta \mathcal{L}(\theta_t, x_i)\)
5             Clip: \(\bar{g}_t(x_i) = g_t(x_i) / \max\left(1, \|g_t(x_i)\|_2 / C\right)\)
6       Perturb: \(\tilde{g}_t = \frac{1}{|B_t|} \left( \sum_i \bar{g}_t(x_i) + z_t \right)\)
7       Descend: \(\theta_{t+1} = \theta_t - \eta \, \tilde{g}_t\)
Algorithm 1 Differentially private SGD (Abadi et al., 2016)
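The per-example clipping and noising of Algorithm 1 can be sketched in a few lines of plain Python. This is a minimal illustration with gradients represented as lists of floats; the helper names are ours, not taken from any published implementation:

```python
import math
import random

def l2_norm(v):
    return math.sqrt(sum(x * x for x in v))

def clip(g, C):
    """Scale gradient g so that its L2 norm is at most C (Algorithm 1, line 5)."""
    return [x / max(1.0, l2_norm(g) / C) for x in g]

def dp_sgd_step(theta, per_example_grads, lr, C, sigma, rng):
    """One DP-SGD update: clip each example's gradient individually,
    add per-coordinate Gaussian noise of std sigma * C to the sum,
    average over the batch, and take a descent step."""
    B = len(per_example_grads)
    clipped = [clip(g, C) for g in per_example_grads]
    noisy_mean = [
        (sum(g[j] for g in clipped) + rng.gauss(0.0, sigma * C)) / B
        for j in range(len(theta))
    ]
    return [t - lr * gj for t, gj in zip(theta, noisy_mean)]

rng = random.Random(0)
theta = [0.0, 0.0]
grads = [[3.0, 4.0], [0.3, 0.4]]  # per-example gradients with norms 5.0 and 0.5
theta = dp_sgd_step(theta, grads, lr=0.1, C=1.0, sigma=0.0, rng=rng)
# with sigma = 0 and C = 1, the first gradient is clipped to [0.6, 0.8],
# the second is left untouched, and theta becomes [-0.045, -0.06]
```

Note that clipping acts on each example's gradient before averaging; clipping the averaged gradient instead would not bound the per-example sensitivity, which is exactly the error discussed in Sec. 3.1.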

2.2 Differentially private generative adversarial training

Zhang et al. (2018) observed that, when training a generative adversarial network for data release, the Gaussian mechanism can be confined either to the generator or to the critic. They argue that applying the Gaussian mechanism only to the critic permits batch normalization techniques to be used in the generator. In Algo. (2), we reproduce their differentially private Wasserstein GAN with gradient penalty. Note the similarity between lines 9 and 10 in the critic step and lines 5 and 6 in Algo. (1): these lines implement the Gaussian randomized mechanism. Also note that the generator training step involves neither gradient clipping nor random perturbations. It also does not depend on the data set as long as the schedule updating the learning rate is differentially private; this includes early stopping.

Input: Examples \(\{x_1, \dots, x_N\}\), neural-network parameters \(w\) (critic) and \(\theta\) (generator), learning rate \(\eta\), noise scale \(\sigma\), gradient norm bound \(C\), random noise \(z_s \sim \mathcal{N}(0, \sigma^2 C^2 I)\)
Output: Differentially private generator parameters \(\theta\)
1 for \(t = 1, \dots, T\) do
2       for \(s = 1, \dots, n_c\) do
             /* differentially private critic step */
3             Sample a random minibatch \(B_s\)
4             for each example \(x_i \in B_s\) do
5                   Sample \(u_i\) from \(p(u)\), and \(\epsilon_i\) from \(U[0, 1]\)
6                   \(\hat{x}_i = \epsilon_i x_i + (1 - \epsilon_i) G_\theta(u_i)\)
7                   Compute \(g_s(x_i) = \nabla_w \left[ f_w(G_\theta(u_i)) - f_w(x_i) + \lambda \left( \|\nabla_{\hat{x}_i} f_w(\hat{x}_i)\|_2 - 1 \right)^2 \right]\)
8             end for
9             Clip: \(\bar{g}_s(x_i) = g_s(x_i) / \max\left(1, \|g_s(x_i)\|_2 / C\right)\)
10            Perturb: \(\tilde{g}_s = \frac{1}{|B_s|} \left( \sum_i \bar{g}_s(x_i) + z_s \right)\)
11            Update the critic: \(w \leftarrow w - \eta \, \tilde{g}_s\)
       /* non-private generator step */
12      Sample \(|B|\) instances of \(u_i\) from \(p(u)\)
13      Update the generator: \(\theta \leftarrow \theta - \eta \, \nabla_\theta \frac{1}{|B|} \sum_i \left[ -f_w(G_\theta(u_i)) \right]\)
Algorithm 2 Differentially private WGAN-GP (Zhang et al., 2018)

2.3 Quantifying the loss of privacy

Differentially private training of neural nets was formulated using the Gaussian mechanism, but its application is only useful if we can estimate tight upper bounds on the privacy lost. This loss is quantified by the parameters \(\epsilon\) and \(\delta\) from Eq. (1). Such an upper bound was derived by Mironov (2017) and Mironov et al. (2019) using the theory of Rényi differential privacy (RDP). Their analysis is technically involved, and we only sketch its elements here.

Rényi differential privacy (RDP) is formulated in terms of the Rényi divergence of order \(\alpha\) of two probability distributions \(P\) and \(Q\):

\[ D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1} \log \mathbb{E}_{x \sim Q} \left[ \left( \frac{P(x)}{Q(x)} \right)^{\alpha} \right]. \tag{4} \]

A mechanism \(M\) fulfills \((\alpha, \epsilon)\)-RDP if, for all neighboring data sets \(D\) and \(D'\), it obeys the inequality

\[ D_\alpha(M(D) \,\|\, M(D')) \le \epsilon. \tag{5} \]

Mironov (2017) also linked the two definitions of differential privacy (1) and (5): each mechanism satisfying \((\alpha, \epsilon)\)-RDP also satisfies \((\epsilon_{\mathrm{DP}}, \delta)\)-DP with

\[ \epsilon_{\mathrm{DP}} = \epsilon + \frac{\log(1/\delta)}{\alpha - 1}. \tag{6} \]

One is free to choose \(\delta\); one then typically chooses the \(\alpha\) that minimizes \(\epsilon_{\mathrm{DP}}\).

Mironov et al. (2019) give the details on how to compute \(\epsilon(\alpha)\) for one step of the sampled Gaussian randomized mechanism, i.e. one step of stochastic gradient descent. Across multiple steps, the epsilons add linearly. Adding the values of \(\epsilon(\alpha)\) for each optimization step, we compute the privacy budget in terms of \(\alpha\) and \(\epsilon\) and convert it to \(\epsilon_{\mathrm{DP}}\) and \(\delta\) using Eq. (6). In sum, this gives us an upper bound on \((\epsilon, \delta)\)-DP for our GAN training, which limits the privacy leaked from the data set into the generator.
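The composition and conversion steps can be illustrated for the plain (non-subsampled) Gaussian mechanism, for which \(\epsilon(\alpha) = \alpha / (2\sigma^2)\) (Mironov, 2017). The subsampled analysis of Mironov et al. (2019) yields much tighter bounds, so the sketch below should be read as a loose illustrative upper bound only; the function names and the grid of \(\alpha\) values are our own choices:

```python
import math

def rdp_gaussian(alpha, sigma):
    """Renyi-DP epsilon of one non-subsampled Gaussian release with
    noise multiplier sigma: epsilon(alpha) = alpha / (2 sigma^2)."""
    return alpha / (2.0 * sigma ** 2)

def dp_from_rdp(steps, sigma, delta, alphas=range(2, 256)):
    """Compose `steps` releases under RDP (the epsilons add linearly),
    then convert to (epsilon, delta)-DP via Eq. (6), minimized over alpha."""
    return min(
        steps * rdp_gaussian(a, sigma) + math.log(1.0 / delta) / (a - 1)
        for a in alphas
    )

eps = dp_from_rdp(steps=1000, sigma=4.0, delta=1e-5)
```

As expected, the resulting bound grows with the number of optimization steps and shrinks with the noise multiplier; without the subsampling amplification of Mironov et al. (2019), the absolute values are far larger than the budgets reported for DP-SGD in practice.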

2.4 MNIST data set

The MNIST dataset contains 70,000 labeled images of handwritten digits. We used the 60,000 examples of its training set to train GANs. To optimize the classifier used in the computation of inception scores, we also used the 10,000 examples of the test set as a validation set.

2.5 Measuring the quality of generated images

To assess the quality of generated images, we adopt the inception score (IS): a classifier with \(K\) classes generates a probability distribution \(p(y \mid x)\) when applied to examples \(x\) of a data set \(D\). The conditional probability is related to the marginal \(p(y) = \mathbb{E}_{x \sim D}[p(y \mid x)]\) using the Kullback-Leibler divergence:

\[ D_{\mathrm{KL}}\left( p(y \mid x) \,\|\, p(y) \right) = \sum_y p(y \mid x) \log \frac{p(y \mid x)}{p(y)}. \tag{7} \]

The inception score is defined as the exponential of the mean Kullback-Leibler divergence:

\[ \mathrm{IS} = \exp\left( \mathbb{E}_{x \sim D} \left[ D_{\mathrm{KL}}\left( p(y \mid x) \,\|\, p(y) \right) \right] \right). \tag{8} \]

Note that the IS framework requires a classifier of high quality. The score takes values between 1 and the number of classes \(K\). It has been argued that the IS correlates well with subjective image quality because of a subjective bias towards class-distinguishing image features (Salimans et al., 2016).
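A direct transcription of Eqs. (7) and (8) into plain Python might look as follows. This is a sketch operating on pre-computed classifier outputs; in practice, the probability vectors \(p(y \mid x)\) come from the trained MNIST classifier:

```python
import math

def inception_score(probs):
    """Exponential of the mean KL divergence between per-example class
    probabilities p(y|x) and their marginal p(y), Eqs. (7)-(8).
    `probs` is a list of probability vectors, one per generated image."""
    n = len(probs)
    k = len(probs[0])
    # marginal p(y) as the empirical mean of p(y|x) over the data set
    marginal = [sum(p[y] for p in probs) / n for y in range(k)]
    kl_mean = sum(
        sum(p[y] * math.log(p[y] / marginal[y]) for y in range(k) if p[y] > 0)
        for p in probs
    ) / n
    return math.exp(kl_mean)

# confident and diverse predictions reach the maximal score K (here K = 2):
assert abs(inception_score([[1.0, 0.0], [0.0, 1.0]]) - 2.0) < 1e-9
# identical predictions for every example give the minimal score of 1:
assert abs(inception_score([[0.5, 0.5], [0.5, 0.5]]) - 1.0) < 1e-9
```

The two assertions illustrate the range stated above: the score is 1 when the generator collapses to indistinguishable outputs and reaches \(K\) when the classifier is confident and the classes are uniformly covered.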

2.6 Architecture of the generative adversarial network

We train Wasserstein generative adversarial networks with gradient penalty using Adam optimization. The critic (discriminator) consists of three strided convolutional layers with leaky ReLU activation functions. The first layer has a number of filters that we refer to as the capacity; the number of filters doubles with each convolutional layer. The generator starts with a Gaussian latent space that is processed by three convolutions transposing the structure of the critic. Padding is chosen to match the 28-by-28 pixel images of MNIST. The network is trained in batches using Adam with \(\beta_1 = 0.9\) and \(\beta_2 = 0.999\) (the default values in PyTorch and TensorFlow).

3 Results

3.1 Review of prior work

Differentially private stochastic gradient descent has previously been used to train generative adversarial networks (Beaulieu-Jones et al., 2017, Zhang et al., 2018, Xie et al., 2018). Beaulieu-Jones et al. (2017) use the original GAN algorithm with a binary classifier as discriminator. They clip the gradients after averaging, but not the parameters (W-GAN step). In Methods, they write:

… we limit the maximum distance of any of these [optimization] steps and then add a small amount of random noise.

As we outlined in Sec. 2.1, limiting each step, i.e. clipping the average gradient, is not sufficient to grant differential privacy to each example: each contribution to the gradient needs to be clipped individually. At the time of writing, this error is also present in their published code (github.com/greenelab/SPRINT_gan). Results presented by Beaulieu-Jones et al. (2017) may still be correct, however, because the authors use a batch size of one throughout the paper, in which case clipping the average is equivalent to per-example clipping.

In the public code repository implementing the experiments reported by Zhang et al. (2018), we encountered a stray factor in the computation of the noise. At a batch size of 64, the noise is therefore a factor of eight too small to grant the differential-privacy guarantees computed from the reported clipping bound and noise multiplier. Furthermore, the clipping is performed in a non-standard way: the authors group sets of parameters, e.g. all biases, and clip each group's gradient L-2 norm separately. The difference to regular clipping is probably proportional to the number of groups; the clipped gradients are, therefore, about a factor of ten larger than when clipped in the standard way. These programming and conceptual errors render the applied privacy analysis inapplicable.

Xie et al. (2018) use the original W-GAN algorithm in which parameters are clipped. They go on to show that bounded network parameters, images, and classifications result in bounded parameter gradients, thus fulfilling the DP conditions. They compute this gradient bound and use it for their DP algorithm. Unfortunately, the authors neither describe how they scale the noise with this bound, nor do they publish their algorithm in the public repository (github.com/illidanlab/dpgan).

3.2 Empirical exploration of differentially private synthetic data generated from MNIST

To evaluate the inception score, we trained a classifier on the ten classes of the MNIST training data set. On the test set, our trained classifier achieved an accuracy of 99.6%, which is comparable to the state of the art for an individual neural-network classifier. We also computed the inception scores of the original training and test sets as reference values.

We used this classifier to compute the inception scores reported in the following sections. Specifically, we describe how the gradient clipping, the noise multiplier, and the network capacity affected the privacy-utility relationship between the privacy parameter \(\epsilon\) and the inception score IS. In the computation of \(\epsilon\), we fixed \(\delta\) to align our results with other publications that use the same value. Note that there is still no consensus on how to choose \(\delta\) optimally. In all figures, we counterposed the results with a non-anonymous GAN training, whose maximum inception score we mark by a horizontal dashed line. All experiments were done with the public code repository at github.com/jusjusjus/noise-in-dpsgd-2020/tree/v1.0.0.

3.2.1 Adam is almost independent of gradient clipping

For Adam optimization, "[…] the magnitudes of parameter updates are invariant to rescaling of the gradient" (Kingma and Ba, 2015). Specifically, Adam tracks running means \(m_t\) and \(v_t\) of the value and square of the incoming averaged gradients \(g_t\) at time step \(t\). The parameter update is normalized by these running means (some details, such as the bias correction, omitted for clarity):

\[ m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t, \qquad v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2, \qquad \theta_{t+1} = \theta_t - \eta \, \frac{m_t}{\sqrt{v_t}}. \]

Let us choose \(C\) smaller than all individual gradient L-2 norms throughout the whole training process, which is possible if we assume that the gradients are non-zero. Then we can rewrite \(g_t = C \hat{g}_t\), wherein \(\hat{g}_t\) is independent of \(C\): every clipped per-example gradient has norm exactly \(C\), and the added noise scales with \(C\) as well. After entering this expression for \(g_t\) into the equations above, \(C\) cancels if we replace \(m_t \to C \hat{m}_t\) and \(v_t \to C^2 \hat{v}_t\). For such small values of \(C\), the training becomes \(C\)-independent due to the normalization property of Adam optimization.
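This cancellation is easy to verify numerically: a scalar Adam iteration (bias correction and the numerical \(\epsilon\)-offset omitted, as above) produces identical parameter trajectories for a gradient sequence and a rescaled copy of it. This is a toy check of the rescaling invariance, not code from our experiments:

```python
import math

def adam(grads, lr=0.001, b1=0.9, b2=0.999):
    """Plain scalar Adam without bias correction: returns the parameter
    trajectory produced by a given sequence of averaged gradients."""
    theta, m, v = 0.0, 0.0, 0.0
    trajectory = []
    for g in grads:
        m = b1 * m + (1 - b1) * g          # running mean of the gradient
        v = b2 * v + (1 - b2) * g * g      # running mean of its square
        theta -= lr * m / math.sqrt(v)     # normalized update
        trajectory.append(theta)
    return trajectory

grads = [0.5, -1.2, 0.8, 2.0]
scaled = [4.0 * g for g in grads]  # rescaling all gradients by a constant C
a, b = adam(grads), adam(scaled)
assert all(abs(x - y) < 1e-12 for x, y in zip(a, b))
```

Because \(m_t\) scales linearly and \(\sqrt{v_t}\) scales by the same factor, the ratio \(m_t / \sqrt{v_t}\) is unchanged, so the two runs coincide, which is exactly why small values of \(C\) leave DP-Adam training unaffected.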

Figure 1: Privacy-utility plot for different gradient L2-norm clipping bounds \(C\). For small values of \(C\) we observed comparable IS, whereas for larger \(C\), IS as a function of \(\epsilon\) was systematically reduced.

Adam's rescaling property thus divides the L-2 clipping constant \(C\) into two regimes set by the smallest gradient norm in the data set: (i) for smaller values of \(C\), all gradients are clipped equally; (ii) for larger values, the individual gradient norm weights the gradient sum over the mini-batch.

Empirically, we found that regime (i), in which \(C\) is chosen arbitrarily small, showed the largest inception score; larger values of \(C\) led to smaller inception scores. In the following, we chose a value of \(C\) below the smallest per-example gradient norm observed in our experiments.

3.2.2 Critically large noise multipliers

The noise multiplier \(\sigma\) is inversely related to the signal-to-noise ratio in the gradients. One may therefore expect large \(\sigma\) to be detrimental to learning, as optimal parameter values are escaped by random perturbations.
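A back-of-envelope estimate of this signal-to-noise ratio (our own illustration, not from the cited works): after clipping, the averaged gradient has an L-2 norm of at most \(C\), while the added noise vector has an expected norm of about \(\sigma C \sqrt{d} / |B|\) for \(d\) parameters, so their ratio \(|B| / (\sigma \sqrt{d})\) is independent of \(C\) and degrades with the noise multiplier and the model size:

```python
import math

def gradient_snr(B, sigma, d, C=1.0):
    """Rough signal-to-noise ratio of one DP gradient update: the averaged
    clipped gradient has L2 norm at most C, while the added Gaussian noise
    has expected L2 norm of about sigma * C * sqrt(d) / B."""
    signal = C
    noise = sigma * C * math.sqrt(d) / B
    return signal / noise  # equals B / (sigma * sqrt(d)), independent of C

# the ratio degrades with the noise multiplier ...
assert gradient_snr(B=64, sigma=8.0, d=10_000) < gradient_snr(B=64, sigma=1.0, d=10_000)
# ... and with the number of parameters d, but not with C:
assert gradient_snr(B=64, sigma=1.0, d=1_000_000) < gradient_snr(B=64, sigma=1.0, d=10_000)
assert gradient_snr(B=64, sigma=1.0, d=10_000, C=0.1) == gradient_snr(B=64, sigma=1.0, d=10_000, C=10.0)
```

This crude estimate already anticipates the critical noise multiplier observed below, as well as the capacity effect discussed in Sec. 4.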

We trained generators with DP-Adam at a fixed clipping bound for a variety of values of \(\sigma\), while monitoring their inception score and \(\epsilon\) (Eq. (1)). Throughout training, the score initially increased and then approached a maximal level. We found that this maximal inception score showed little dependence on \(\sigma\) up to a critical value. Beyond it, the score showed a steep break-off, only reaching much lower levels at optimal values of the privacy budget (cf. Fig. 2).

Herein we observe the existence of a critical noise multiplier beyond which a privacy-utility trade-off will likely be sub-optimal, as we discuss further below.

Figure 2: Privacy-utility plot for different noise multipliers \(\sigma\). Below the critical noise multiplier, we observed a plateau of IS that was mostly shifted to larger \(\epsilon\) for smaller \(\sigma\). Above it, we observed another regime in which training led to much lower values of IS with shallow gains during continued training.

3.2.3 Optimal network capacity

We trained generators with DP-SGD at a variety of network capacities while monitoring their inception score and privacy loss \(\epsilon\) (Eq. (1)). The score approached a capacity-dependent maximum with an increasing number of training steps. We found that the maximal inception score was largest for an intermediate network capacity, while larger and smaller capacities reached lower levels.

Figure 3: Privacy-utility plot for different network capacities. We observed a consistent maximum for an intermediate network capacity compared with the other tested capacities.

4 Discussion

Generative anonymization of data through differential privacy promises a quantifiable trade-off between the protection of individual privacy and the ability to use raw data sets for machine learning. In this article, we explored the training of generative adversarial networks for image data under differential privacy constraints. We found that some of the previous articles that explored this option contain technical and conceptual flaws (Zhang et al., 2018, Beaulieu-Jones et al., 2017). In our introduction, we provided a detailed workup of differential privacy in deep learning, which we hope will further clarify this intricate mathematical theory for future researchers.

It is in the interest of a practitioner to maximize the utility of the generator while staying within a specific privacy budget. We explored this privacy-utility trade-off on the MNIST data set. (Our work reproduces experiments initially published by Zhang et al. (2018); we do not compare their results to ours because of the aforementioned errors.) We uncovered two modes of DP-GAN training with distinct characteristic dependencies between privacy loss and image utility: an optimal one, in which the utility steeply increased with spent privacy before eventually reaching a plateau, and a sub-optimal one, wherein the slope was shallow (cf. Fig. 1 at large \(C\) and Fig. 2 at large \(\sigma\)). Sub-optimal privacy-utility characteristics crossed through optimal ones at low levels of utility. On the other hand, the plateau in optimal characteristics did not seem to depend on the hyperparameters. In Fig. 2, for example, we observed a stable plateau over an order of magnitude in \(\epsilon\) and for different values of the noise multiplier. We also found that the break-off between optimal and sub-optimal characteristics was abrupt, wherein increases in \(C\) or \(\sigma\) led to a sudden change in the observed characteristics. We hypothesize that too small a signal-to-noise ratio in anonymized gradient updates makes the stochastic optimization process unable to uncover minima in the parameter space. The hypothesis is consistent with the break-offs upon increases in \(C\) and \(\sigma\), and with the large fluctuations in utility present in sub-optimal characteristics.

We also explained analytically why DP-GAN training with Adam becomes independent of gradient clipping for small values of the clipping constant. This result generalizes to other methods of gradient descent with normalization. Furthermore, we found a weak dependence of the utility plateau on the network capacity. In the non-DP case, small capacities show reduced expressivity in the generator, thus degrading the utility; a reduction was not visible, however, at increased capacities (computations not shown). When training with differential privacy, we found that increased capacity led to a systematic decrease in utility. In these high-capacity networks, more terms enter the gradient L-2 norm, thus enhancing the effect of the clipping. We hypothesize that this is the main mechanism leading to a degraded utility in larger networks.
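This hypothesized mechanism can be illustrated with a toy calculation (our own sketch, under the simplifying assumption that all gradient entries share a similar magnitude): if all \(d\) entries of a gradient have magnitude \(g\), its L-2 norm grows like \(g\sqrt{d}\), so for a fixed clipping bound \(C\) the clipping shrinks each entry more strongly in larger networks:

```python
import math

def clipped_coordinate(g, d, C):
    """Per-coordinate magnitude after clipping a gradient whose d entries
    all have magnitude g: the norm g * sqrt(d) is scaled down to at most C."""
    norm = g * math.sqrt(d)
    return g / max(1.0, norm / C)

# same per-parameter gradient magnitude, ten times more parameters:
small = clipped_coordinate(g=0.01, d=10_000, C=1.0)
large = clipped_coordinate(g=0.01, d=100_000, C=1.0)
assert large < small  # each coordinate of the larger network moves less
```

Under this assumption, the effective per-parameter learning signal decays like \(1/\sqrt{d}\) once clipping is active, which is consistent with the degraded utility we observed for high-capacity networks.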

Our simple explorations are limited by the approximations with which we explore the privacy-utility relationship. Privacy was indirectly measured as an upper privacy bound, and utility was indirectly measured with the inception score. In future works, one should complement these metrics with direct measures of privacy through membership-inference attacks and classification scores on generated images, for example.

In applications, privacy-utility plots could be a useful tool to tune parameters in a data anonymization workflow. The method needs to be carefully adopted, however, because additional privacy loss incurs during hyperparameter optimization (Abadi et al., 2016), and the auxiliary classifier network we used needs to be trained with differential privacy as well.

In sum, we found that it is indeed possible to generate a differentially private synthetic data set within a moderate privacy budget. However, we presume that the reduced utility of parameter-rich networks will be a major hurdle when training DP-GANs for larger, more nuanced image data sets than MNIST. This could become a problem in particular when close to a sub-optimal training regime in the signal-to-noise ratio of the parameter gradients.

Acknowledgements. We thank Dr. Ilya Mironov for guiding our understanding of privacy-preserving deep learning. This work was funded by the European Union and the German Ministry for Economic Affairs and Energy under the EXIST grant program.


  • M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang (2016) Deep Learning with Differential Privacy. External Links: Document, 1607.00133, Link Cited by: §1, §2.1, §4, 1.
  • B. K. Beaulieu-Jones, Z. S. Wu, C. Williams, and C. S. Greene (2017) Privacy-preserving generative deep neural networks support clinical data sharing. bioRxiv, pp. 159756. External Links: Document Cited by: §1, §3.1, §4.
  • C. Dwork and A. Roth (2014) The Algorithmic Foundations of Differential Privacy. Foundations and Trends® in Theoretical Computer Science 9, pp. 211–407. External Links: Document Cited by: §1.
  • J. Gui, Z. Sun, Y. Wen, D. Tao, and J. Ye (2020) A review on generative adversarial networks: algorithms, theory, and applications. arXiv preprint arXiv:2001.06937. Cited by: §1.
  • D. P. Kingma and J. L. Ba (2015) Adam: A method for stochastic optimization. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, pp. 1–15. External Links: 1412.6980 Cited by: §2.1, §3.2.1.
  • I. Mironov, K. Talwar, and L. Zhang (2019) Rényi Differential Privacy of the Sampled Gaussian Mechanism. pp. 1–14. External Links: 1908.10530, Link Cited by: §2.3, §2.3.
  • I. Mironov (2017) Renyi differential privacy. CoRR abs/1702.07476. External Links: Link, 1702.07476 Cited by: §2.3, §2.3.
  • T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen (2016) Improved techniques for training GANs. In Advances in Neural Information Processing Systems, External Links: 1606.03498, ISSN 10495258, Link Cited by: §2.5.
  • T. Tieleman, G. E. Hinton, N. Srivastava, and K. Swersky (2012) Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning. External Links: Document, arXiv:0911.2312v2, ISBN 978-1-4244-4842-5, ISSN 19386451 Cited by: §2.1.
  • L. Xie, K. Lin, S. Wang, F. Wang, and J. Zhou (2018) Differentially Private Generative Adversarial Network. External Links: 1802.06739, Link Cited by: §1, §3.1, §3.1.
  • X. Zhang, S. Ji, and T. Wang (2018) Differentially Private Releasing via Deep Generative Model. External Links: 1801.01594, Link Cited by: §1, §2.2, §3.1, §3.1, §4, 2, footnote 1.