Differentially Private Generative Adversarial Network

02/19/2018 · by Liyang Xie, et al.

Generative Adversarial Network (GAN) and its variants have recently attracted intense research interest due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution can concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, where the concentration of the distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide a rigorous proof of the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, demonstrating that our method can generate high-quality data points at a reasonable privacy level.

1. Introduction

In recent years, more and more data in different application domains have become readily available due to the rapid development of both computer hardware and software technologies. Many data mining methodologies have been developed for analyzing these big data sets. One representative example is deep learning, which typically needs a huge amount of training samples to achieve promising performance. However, there exist domains where it is impossible to get as much data as we want; medicine and health informatics are such fields. In individual patient-level analysis, each patient is treated as a sample in the model training process. However, considering the complexity of many diseases, the number of patients worldwide is still very small and far from enough. Moreover, we can never get the medical data from all patients due to privacy and sensitivity concerns. Further, the expensive and time-consuming data collection process also limits the amount of data. Thus, the problem of building high-quality medical analytics models remains very challenging at present.

Generative models (Makhzani et al., 2015; Rezende et al., 2014; Mescheder et al., 2017; Burda et al., 2015; Li et al., 2015) have provided a promising direction to alleviate the data scarcity issue. By sketching the data distribution from a small set of training data, we are able to sample from that distribution and generate many more samples for our study. By combining the complexity of deep neural networks with game theory, the Generative Adversarial Network (GAN) (Goodfellow et al., 2014) and its variants have demonstrated impressive performance in modeling the underlying data distribution, generating high-quality "fake" samples that are hard to distinguish from real ones (Salimans et al., 2016; Saito and Matsumoto, 2016; Mogren, 2016). Ideally, with a high-quality generative distribution in hand, we can protect the privacy of the raw data by releasing only the distribution instead of the raw data to the public or to restricted individuals, and can even sample datasets to fit our needs and conduct further analysis.

However, GANs can still implicitly disclose private information about the training samples. The adversarial training procedure and the high model complexity of deep neural networks jointly encourage a distribution that is concentrated around the training samples. By repeatedly sampling from the distribution, there is a considerable chance of recovering the training samples (Arjovsky et al., 2017). For example, Hitaj et al. (Hitaj et al., 2017) introduced an active inference attack model that can reconstruct training samples from the generated ones. Therefore, there is a strong demand for generative models that not only generate high-quality samples but also protect the privacy of the training data.

With the above considerations, in this paper we propose a Differentially Private Generative Adversarial Network (DPGAN). DPGAN provides provable privacy control over the training data in the sense of differential privacy (Dwork and Roth, 2013). Specifically, our proposed framework applies a combination of carefully designed noise and gradient clipping, and uses the Wasserstein distance (Arjovsky et al., 2017) as an approximation of the distance between probability distributions, which is a more reasonable metric than the JS-divergence used in GAN. There are also prior works studying differential privacy in deep learning models (Abadi et al., 2016). However, our DPGAN differs from (Abadi et al., 2016) in that we clip only the weights; we also prove that the gradient is bounded at the same time, which avoids unnecessary distortion of the gradient. This not only keeps the Lipschitz property of the loss function but also provides a sufficient privacy guarantee. Unlike the privacy-preserving deep learning framework of (Papernot et al., 2017), whose privacy loss is proportional to the amount of data that needs to be labeled in a public data set, the privacy loss of our DPGAN is independent of the amount of generated data. This makes our method applicable under a wide variety of real-world scenarios. We evaluate DPGAN on various benchmark datasets and network structures (fully connected networks and CNNs), and demonstrate that DPGAN can generate high-quality data points with sufficient differential privacy protection under a reasonable privacy budget.

The remainder of the paper is structured as follows: we briefly review the related literature in Section 2, and then introduce the proposed DPGAN framework and its theoretical properties in Section 3. Our framework is evaluated in Section 4, and we conclude in Section 5.

2. Related work

In this section, we provide a brief literature review of relevant topics: generative adversarial networks, differential privacy, and differentially private learning in neural networks.

Generative Adversarial Network. GAN and its variants have been developed in recent years with important advances from the theoretical perspective. Instead of clipping the weights, Gulrajani et al. (Gulrajani et al., 2017) improve the training stability and performance of WGAN by penalizing the norm of the critic's gradient with respect to its input. This approach is aligned with our differential privacy framework because the gradient norms are controlled.

Zhao et al. (Zhao et al., 2016) introduce the energy-based GAN (EBGAN), which views the discriminator as an energy function that assigns low energies to the regions near the data manifold and higher energies to other regions. Similar to the original GAN, the generator is trained to produce contrastive samples with minimal energies, while the discriminator is trained to assign high energies to these generated samples. The instantiation of the EBGAN framework uses an auto-encoder architecture, with the energy being the reconstruction error, in place of the usual discriminator. The behavior of EBGAN has been shown to be more stable than that of regular GANs during training. Berthelot et al. (Berthelot et al., 2017) also use an auto-encoder as the discriminator and develop an equilibrium-enforcing method paired with a loss derived from the Wasserstein distance. It improves over WGAN by balancing the power of the discriminator and the generator so as to control the trade-off between image diversity and visual quality. Qi (Qi, 2017) proposes a loss-sensitive GAN with Lipschitz assumptions on the data distribution and the loss function. It improves WGAN by allowing the generator to focus on improving poor data points that are far from real examples rather than wasting effort on samples that have already been well generated, thus improving the overall quality of the generated samples. Jones et al. (Beaulieu-Jones et al., 2017) used a differentially private version of the Auxiliary Classifier GAN (AC-GAN) to simulate participants based on the population of the SPRINT clinical trial. Choi et al. (Choi et al., 2017) proposed medGAN, a generative adversarial framework that can successfully generate electronic health records (EHR). However, the approach may have privacy concerns, as we discussed earlier.

Differential Privacy. Differential privacy (DP) (Dwork, 2006) and related algorithms have been widely studied in the literature. Examples include the sensitivity-based algorithm of Dwork et al. (Dwork et al., 2006), which is among the most popular methods that protect privacy by adding noise to mask the maximum change of data-related functions; this work laid the theoretical foundation of many DP studies. Chaudhuri et al. (Chaudhuri and Monteleoni, 2009; Chaudhuri et al., 2011) proposed DP empirical risk minimization. The general idea of our DP framework shares the same spirit as their objective perturbation, which is different from adding noise directly to the output parameters. Another related framework that adds noise to gradients is that of Song et al. (Song et al., 2013), which studied DP variants of stochastic gradient descent; in their empirical results, moderately increasing the batch size can significantly improve the performance. Song et al. (Song et al., 2015) followed their earlier work (Song et al., 2013) and studied how to use stochastic gradients to learn models trained on data from multiple sources with DP requirements (hence multiple levels of noise). A comprehensive and structured overview of DP data publishing and analysis can be found in (Zhu et al., 2017), where several possible future directions and applications are also mentioned.

Differentially Private Learning in Neural Networks. The application of DP in deep learning has been studied recently in several works. Abadi et al. (Abadi et al., 2016) studied a gradient clipping method that imposes privacy during the training procedure. Shokri and Shmatikov (Shokri and Shmatikov, 2015) proposed a multi-party privacy-preserving neural network with a parallelized and asynchronous training procedure. Papernot et al. (Papernot et al., 2017) combined the Laplacian mechanism with a machine teaching framework. Phan et al. (Phan et al., 2017b) developed an "adaptive Laplace mechanism" that can be applied to a variety of deep neural networks, where the privacy budget consumption is independent of the number of training steps. Phan et al. (Phan et al., 2017a) developed a private convolutional deep belief network by leveraging the functional mechanism to perturb the energy-based objective functions of traditional CDBNs.

We propose DPGAN to address the challenges that appeared in the previous works. In (Papernot et al., 2017) the privacy loss is proportional to the amount of data labeled in the public data set, which may bring about an unbearable privacy loss. We solve this problem by training a differentially private generator that can generate an infinite number of data points without violating the privacy of the training data. The approach of Shokri and Shmatikov (Shokri and Shmatikov, 2015) requires the transmission of updated local parameters between the server and local tasks, which risks leaking private information; our framework addresses this issue by avoiding a distributed setting. Also, our work differs from (Phan et al., 2017a) by adding noise within the training procedure instead of adding noise to both the energy functions and an extra softmax layer.

3. Methodology

In this section, we elaborate on the proposed privacy-preserving framework DPGAN. Without loss of generality, we discuss DPGAN in the context of the WGAN framework (Arjovsky et al., 2017), while noting that the proposed technique can also be easily extended to other GAN frameworks. We first introduce differential privacy and then briefly review GAN and WGAN. We then introduce the moments accountant (Abadi et al., 2016), which is the key technique in our framework for bounding the privacy loss so as to guarantee privacy in the iterative gradient descent procedure.

3.1. Differential Privacy

The privacy model used in our approach is differential privacy (Dwork, 2011). Denote an algorithm with the differential privacy property by M. The algorithm is randomized in order to make it difficult for an observer to re-identify the input data, where an observer is anyone who obtains outputs of algorithms that use the data. Differential privacy (DP) is defined as follows (Dwork and Roth, 2013):

Definition 3.1 (Differential Privacy, DP).

A randomized algorithm M is (ε, δ)-differentially private if for any two databases D and D′ differing in a single point and for any subset of outputs S:

(1)   Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ

where M(D) and M(D′) are the outputs of the algorithm for input databases D and D′, respectively, and the probability is taken over the randomness of the noise in the algorithm.

It can be shown that the definition is equivalent to requiring that

| log ( Pr[M(D) = o] / Pr[M(D′) = o] ) | ≤ ε

holds with probability at least 1 − δ for every point o in the output range, where ε reflects the privacy level. A small ε means that the difference between the algorithm's output probabilities on D and D′ at o is small, which indicates strong perturbation of the ground-truth outputs and hence high privacy, and vice versa. The non-private case corresponds to ε → ∞. δ measures the violation of "pure" ε-differential privacy: there exists a small output region with probability mass at most δ such that, for some fixed point in this region, no matter what the value of ε is, one can always find a pair of datasets D and D′ for which the bound above is violated. Typically we are interested in values of δ that are less than the inverse of any polynomial in the size of the database.

According to Definition 3.1 and the intuition above, the noise protects the membership of a data point in the dataset. For example, when conducting a clinical experiment, sometimes a person does not want the observer to know that he or she is involved in the experiment, because the observer may link the test result to the appearance or disappearance of a certain person and harm that person's interests. A proper membership protection ensures that replacing this person with another one will not affect the result too much. This property holds only if the algorithm itself is randomized, i.e., the output is associated with a distribution, and this distribution does not change much if a certain data point is perturbed or even removed. This is exactly what differential privacy tries to achieve.
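As a toy illustration of this idea (a minimal Python sketch, not the mechanism used by DPGAN), consider releasing a noisy count of how many participants in a study have a given condition; the noise scale sigma below is a hypothetical value:

```python
# Toy illustration of a randomized (noisy) release. Adding or removing one
# person changes the true count by at most 1 (the sensitivity), and the
# Gaussian noise masks that change. This is only a conceptual sketch of
# differential privacy, not the DPGAN mechanism; sigma is hypothetical.
import numpy as np

def noisy_count(records, sigma=4.0, rng=None):
    rng = rng or np.random.default_rng()
    true_count = sum(records)             # e.g. number of positive diagnoses
    return true_count + rng.normal(0.0, sigma)

rng = np.random.default_rng(0)
with_alice = [1, 0, 1, 1, 0, 1]
without_alice = [0, 1, 1, 0, 1]           # same records minus Alice's
print(noisy_count(with_alice, rng=rng), noisy_count(without_alice, rng=rng))
```

Because the two output distributions largely overlap, an observer who sees a single released count cannot reliably tell whether Alice's record was used.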

3.2. GAN and WGAN

Generative adversarial nets (Goodfellow et al., 2014) simultaneously train two models: a generative model G that transforms an input noise distribution into an output distribution approximating the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G. Let p_z be the input noise distribution of G and p_data be the real data distribution. GAN trains D and G to play the following two-player minimax game with value function V(G, D):

(2)   min_G max_D V(G, D) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))]

WGAN (Arjovsky et al., 2017) improves GAN by using the Wasserstein distance instead of the Jensen–Shannon divergence. It solves a different two-player minimax game, given by:

(3)   min_θ max_{w∈W} E_{x∼p_data(x)}[f_w(x)] − E_{z∼p_z(z)}[f_w(g_θ(z))]

where the functions {f_w}_{w∈W} are all K-Lipschitz (with respect to x) for some K, and g_θ denotes the generator. Our approach exploits this K-Lipschitz property in WGAN and solves Formula (3) in a differentially private manner.

0:  Input: α_d, learning rate of the discriminator; α_g, learning rate of the generator; c_p, parameter clip constant; m, batch size; M, total number of training data points in each discriminator iteration; n_d, number of discriminator iterations per generator iteration; n_g, number of generator iterations; σ_n, noise scale; c_g, bound on the gradient of the Wasserstein distance with respect to the weights.
0:  Output: Differentially private generator parameters θ.
1:  Initialize discriminator parameters w_0, generator parameters θ_0.
2:  for t1 = 1, ..., n_g do
3:     for t2 = 1, ..., n_d do
4:        Sample {z^(i)}_{i=1}^{m} ~ p(z), a batch of prior samples.
5:        Sample {x^(i)}_{i=1}^{m} ~ p_data(x), a batch of real data points.
6:        For each i, g_w(x^(i), z^(i)) ← ∇_w [ f_w(x^(i)) − f_w(g_θ(z^(i))) ].
7:        g̅_w ← (1/m) Σ_{i=1}^{m} ( g_w(x^(i), z^(i)) + N(0, σ_n^2 c_g^2 I) ).
8:        w ← w + α_d · RMSProp(w, g̅_w).
9:        w ← clip(w, −c_p, c_p).
10:    end for
11:    Sample {z^(i)}_{i=1}^{m} ~ p(z), another batch of prior samples.
12:    g_θ ← −∇_θ (1/m) Σ_{i=1}^{m} f_w(g_θ(z^(i))).
13:    θ ← θ − α_g · RMSProp(θ, g_θ).
14:  end for
15:  return θ.
Algorithm 1 Differentially Private Generative Adversarial Nets

3.3. DPGAN framework

Our method focuses on preserving privacy during the training procedure instead of adding noise to the final parameters directly, which usually suffers from low utility. We add noise to the gradient of the Wasserstein distance with respect to the training data. The parameters of the discriminator can be shown to guarantee differential privacy with respect to the sampled training points. We note that the privacy of data points that have not been sampled for training is guaranteed naturally, because replacing these data will not cause any change in the output distribution, which is equivalent to the case of ε = 0 in Definition 3.1. The parameters of the generator also guarantee differential privacy with respect to the training data, because of the post-processing property of differential privacy (Dwork and Roth, 2013), which says that any mapping (operation) applied after a differentially private output will not invade privacy. Here the mapping is the computation of the generator's parameters, and the differentially private output is the parameters of the discriminator. Since the parameters of the generator guarantee differential privacy of the data, it is safe to generate data after the training procedure. In short, we have: differentially private discriminator + generator computation ⇒ differentially private generator. This also means that even if the observer obtains the generator itself, there is no way for him or her to invade the privacy of the training data.

The DPGAN procedure is summarized in Algorithm 1. In line 9, the clipping guarantees that the functions f_w are all K-Lipschitz with respect to x for some unknown K, and acts to bound the gradient contribution from each data point. The RMSProp in line 8 and line 13 is an optimization algorithm that adaptively adjusts the learning rate according to the magnitude of the gradients (Hinton et al., [n. d.]).
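The following is a minimal PyTorch-style sketch (not the authors' released implementation) of one noisy discriminator step consistent with Algorithm 1. The critic f_w, generator g_theta, and RMSProp optimizer d_opt are assumed to exist; sigma_n, c_g, and c_p are placeholder values, and folding the noise into the batch-averaged gradient is a simplification of line 7:

```python
import torch

def noisy_discriminator_step(f_w, g_theta, x_real, z, d_opt,
                             sigma_n=1.0, c_g=1.0, c_p=0.01):
    d_opt.zero_grad()
    # Wasserstein surrogate: maximize E[f_w(x)] - E[f_w(g_theta(z))],
    # implemented by minimizing its negation with RMSProp (line 8).
    loss = -(f_w(x_real).mean() - f_w(g_theta(z).detach()).mean())
    loss.backward()
    # Perturb each weight gradient with Gaussian noise scaled by sigma_n * c_g
    # (line 7); divided by the batch size because the loss is already averaged.
    m = x_real.shape[0]
    for p in f_w.parameters():
        if p.grad is not None:
            p.grad.add_(torch.randn_like(p.grad) * sigma_n * c_g / m)
    d_opt.step()
    # Clip weights back to [-c_p, c_p] to enforce the Lipschitz bound (line 9).
    for p in f_w.parameters():
        p.data.clamp_(-c_p, c_p)
```

Here d_opt would be constructed as torch.optim.RMSprop(f_w.parameters(), lr=alpha_d), mirroring the RMSProp updates in lines 8 and 13 of Algorithm 1.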

3.4. Privacy Guarantees of DPGAN

To show that the DPGAN in Algorithm 1 indeed protects differential privacy, we demonstrate that the parameters θ of the generator (through the discriminator parameters w) guarantee differential privacy with respect to the sampled training points. Hence any data generated from g_θ will not disclose the privacy of the training points. Through the moments accountant mechanism, we can compute the final composition result (ε, δ). By treating the discriminator parameters w (line 9 in Algorithm 1) as one point in the output space, it is easy to see that the procedure of updating w for a fixed θ in any inner loop is just the randomized algorithm M in Definition 3.1. Here the input of M is the real data and the sampled noise, and the output is the updated w, so we can write w = M(aux, D), where aux is an auxiliary input, which in our algorithm refers to the previous parameters w. Hence the update of w (lines 3 to 10 in Algorithm 1) is an instance of adaptive composition. Together with Definition 3.1, it is natural to define the following privacy loss at an outcome o:

Definition 3.2 (Privacy Loss).

c(o; M, aux, D, D′) ≜ log ( Pr[M(aux, D) = o] / Pr[M(aux, D′) = o] )

which describes the difference between the two output distributions caused by changing the data. The privacy loss random variable is C(M, aux, D, D′) ≜ c(M(aux, D); M, aux, D, D′), which is defined by evaluating the privacy loss at an outcome sampled from M(aux, D). Note that we assume the supports of the two distributions associated with D and D′ are generally the same, so it is safe to evaluate them at the same point o. This is a critical assumption, since if there were a region in the support of M(aux, D) but not in that of M(aux, D′), then evaluating the privacy loss in that region would be infinite and violate privacy. We define the log of the moment generating function of the privacy loss random variable and the moments accountant as:

Definition 3.3 (Log Moment Generating Function).

α_M(λ; aux, D, D′) ≜ log E_{o∼M(aux, D)} [ exp( λ · c(o; M, aux, D, D′) ) ]

Definition 3.4 (Moments Accountant).

α_M(λ) ≜ max_{aux, D, D′} α_M(λ; aux, D, D′)

where the maximum is taken over all auxiliary inputs and all neighboring databases D and D′.

The moments accountant can be seen as the "worst case" of the log moment generating function. The moments accountant enjoys good properties, as mentioned in (Abadi et al., 2016) (Theorem 2): the composability property shows that the overall moments accountant can be bounded by the sum of the moments accountants of each iteration, which implies that the privacy loss grows with the number of iterations. The tail bound can also be applied to obtain the privacy guarantee (Theorem 1 in the same paper), and we will use this theorem to deduce our own result. Compared with the strong composition theorem (Dwork et al., 2010), the moments accountant saves a factor of √(log(1/δ)) in the privacy bound. According to Definition 3.1, for a large number of iterations this is a significant improvement.
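To make this bookkeeping concrete, the following simplified Python sketch composes the per-step log moments of a plain Gaussian mechanism with unit sensitivity and converts the total into an (ε, δ) pair via the tail bound. It ignores the subsampling amplification that the actual moments accountant of (Abadi et al., 2016) exploits, and all numeric values are hypothetical:

```python
# Simplified moments-accountant bookkeeping (illustrative only).
# For the Gaussian mechanism with unit sensitivity and noise scale sigma,
# the log moment generating function is alpha(lambda) = lambda*(lambda+1)/(2*sigma^2).
# Composability sums the per-step moments; the tail bound gives
# epsilon = (sum_alpha + log(1/delta)) / lambda, minimized over lambda.
import math

def gaussian_log_moment(lam, sigma):
    return lam * (lam + 1) / (2.0 * sigma ** 2)

def compose_and_convert(sigma, steps, target_delta, max_lambda=64):
    best_eps = float("inf")
    for lam in range(1, max_lambda + 1):
        total_alpha = steps * gaussian_log_moment(lam, sigma)   # composability
        eps = (total_alpha + math.log(1.0 / target_delta)) / lam  # tail bound
        best_eps = min(best_eps, eps)
    return best_eps

# More iterations (or less noise) yields a larger, i.e. weaker, epsilon.
print(compose_and_convert(sigma=4.0, steps=1000, target_delta=1e-5))
print(compose_and_convert(sigma=4.0, steps=5000, target_delta=1e-5))
```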

In order to use the moments accountant, we need the gradient to be bounded (which in (Abadi et al., 2016) is achieved by clipping the gradient norm in their Algorithm 1) and to add noise according to this bound. We do not clip the norm of the gradient; instead, we show that by clipping only the weights w we can automatically guarantee a bound on the norm of the gradient.

Lemma 3.5.

Under the conditions of Algorithm 1, assume that the activation function σ(·) of the discriminator has a bounded range and bounded derivatives everywhere: |σ(·)| ≤ B_σ and |σ′(·)| ≤ B_{σ′}, and that every data point x satisfies ||x|| ≤ B_x. Then ||g_w(x, z)|| ≤ c_g for some constant c_g.

Proof.

Without loss of generality, we assume f_w is implemented as a fully connected network. Let L be the number of layers excluding the input layer. Let W^(l) be the l-th weight matrix (1 ≤ l ≤ L), whose element W^(l)_{jk} is the weight connecting the k-th node in layer l−1 to the j-th node in layer l. Let D^(l) be the diagonal Jacobian of the nonlinearities of the l-th layer. We thus have:

(4)   ∂E/∂w^(l)_j = δ^(l)_j · (y^(l−1))^T

where w^(l)_j is the j-th row of W^(l) and y^(l−1) is the output of the (l−1)-th layer. The following facts are well known from the back-propagation algorithm on a fully connected network:

(5)   a^(l) = W^(l) y^(l−1)
(6)   y^(l) = σ(a^(l))
(7)   δ^(l) = D^(l) (W^(l+1))^T δ^(l+1)

where E is the cost function, and a^(l), y^(l), and δ^(l) are the input, output, and error vector of layer l, respectively. From (7) we have, for 1 ≤ l < L:

(8)   ||δ^(l)|| ≤ ||D^(l)|| · ||W^(l+1)|| · ||δ^(l+1)||

Take δ^(L−1) as an example:

||δ^(L−1)|| ≤ ||D^(L−1)|| · ||W^(L)|| · ||δ^(L)|| ≤ B_{σ′} · √(n_{L−1} n_L) c_p · ||δ^(L)||,

where we assume that every weight satisfies |W^(l)_{jk}| ≤ c_p after clipping. Here n_l is the number of nodes in the l-th layer. And thus we have:

(9)   ||∂E/∂w^(l)_j|| ≤ ||δ^(l)|| · ||y^(l−1)|| ≤ ||δ^(l)|| · √(n_{l−1}) B_σ

Because of the assumption that |σ′(·)| ≤ B_{σ′}, we have ||D^(l)|| ≤ B_{σ′}. Combining this with (8) and the boundedness of the inputs, every ||δ^(l)|| is bounded, and therefore ||g_w(x, z)|| ≤ c_g for some constant c_g, where the boundedness of the generated samples g_θ(z) (which enter f_w in place of real data points) comes from the choice of a sigmoid activation in the last layer of the generator. Note that when computing c_g, we need to take into consideration the dropout rate, weight sparsity, connection percentage of convolutional nets, and other factors. ∎

Remark 1.

Note that activation functions like ReLU (and its variants) and Softplus have an unbounded range B_σ. This does not affect our result, because both the data points and the weights are bounded, which guarantees that the output of each node in each layer is bounded. The boundedness of the data comes from the common fact that each data element has a bounded range (e.g., pixel intensities).

We have the following lemma, which guarantees differential privacy for the discriminator training procedure.

Lemma 1.

Given the sampling probability q = m/M, the number of discriminator iterations in each inner loop n_d, and the privacy violation δ, for any positive ε the parameters of the discriminator guarantee (ε, δ)-differential privacy with respect to all the data points used in that outer loop (for a fixed t1) if we choose:

(10)   σ_n = 2q √(n_d log(1/δ)) / ε
Proof.

The differential privacy guarantee for the discriminator training procedure follows from the intermediate result in (Abadi et al., 2016) (Theorem 1). We need to find an explicit relation between the noise level σ_n and the privacy level ε, i.e., how much noise standard deviation σ_n we need to impose on the gradient so that we can guarantee a privacy level ε with a small violation δ. Combining the two inequalities in Theorem 1 (the condition on ε and the condition on σ) and letting the equalities hold, we obtain the result. ∎

Lemma 1 quantifies the relation between the noise level σ_n and the privacy level ε. It shows that, for a fixed perturbation of the gradient, a larger sampling probability q leads to a weaker privacy guarantee (a larger ε). This is indeed true, since when more data are involved in computing the discriminator gradient, less privacy budget is assigned to each of them. Likewise, more iterations (a larger n_d) lead to less privacy, because the observer receives more information (specifically, more accurate gradients) about the data. This requires us to choose the parameters carefully in order to obtain a reasonable privacy level. Finally, we have the following theorem as the privacy guarantee of the parameters of the generator:

Theorem 1.

The generator g_θ learned in Algorithm 1 guarantees (ε, δ)-differential privacy with respect to the training data.

The privacy guarantee is a direct consequence of Lemma 1 combined with the post-processing property of differential privacy (Dwork and Roth, 2013).
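As a rough numeric illustration (not part of the paper's analysis), the relation in Lemma 1 as reconstructed above can be inverted to see how the privacy level ε responds to the sampling probability q, the number of inner iterations n_d, and the noise scale σ_n; all values below are hypothetical:

```python
import math

# Illustrative only: invert the relation from Lemma 1,
# sigma_n = 2 * q * sqrt(n_d * log(1/delta)) / epsilon, to read off epsilon.
# q, n_d, delta, and sigma_n are hypothetical values, not the paper's settings.
def epsilon_from_noise(q, n_d, delta, sigma_n):
    return 2.0 * q * math.sqrt(n_d * math.log(1.0 / delta)) / sigma_n

# Larger sampling probability q or more inner iterations n_d -> larger epsilon
# (weaker privacy) for a fixed noise scale.
print(epsilon_from_noise(q=0.01, n_d=5, delta=1e-5, sigma_n=1.0))
print(epsilon_from_noise(q=0.05, n_d=5, delta=1e-5, sigma_n=1.0))
print(epsilon_from_noise(q=0.01, n_d=50, delta=1e-5, sigma_n=1.0))
```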

4. Experiment

Figure 1. Generated images with four different noise levels on the MNIST dataset; for each noise level, the generated images are plotted in the leftmost column of the group. The three nearest training-set neighbors of each generated image are plotted alongside to illustrate that the generator is not memorizing the real data and that privacy is preserved. We can see that the images get more blurred as more noise is added.

In this section, we present extensive experiments to investigate how the noise affects the effectiveness of the generative network on two benchmark datasets, MNIST and MIMIC-III (code and experiment scripts are available at https://github.com/illidanlab/dpgan). There are several notable findings worth highlighting. The Wasserstein distance converges as training proceeds and exhibits fluctuations in the late stage in the private setting; this fluctuation correlates well with the quality of the generated data and reflects the privacy level. In addition, our framework generalizes across network structures and can be applied to many benchmark datasets.

4.1. Relationship between Privacy Level and Generation Performance

We conduct experiments on the MNIST dataset to illustrate the relationship between the privacy level and the quality of the output images from the generator.

In this experiment, we set the learning rates of the discriminator and the generator to the same value. The parameter clip constant c_p is chosen such that the weights of the discriminator are clipped back to [−c_p, c_p]. We use MNIST's training data, and the batch size is set to 64, so the sampling probability is q = 64/M. The noise scale σ_n and the numbers of discriminator (n_d) and generator (n_g) iterations are fixed for each run. Since we use leaky ReLU as the activation function in the discriminator network and ReLU in the generator network, we have B_{σ′} = 1, where B_{σ′} is the bound on the derivative of the activation function. The dimension of z is 100 and every coordinate lies in a bounded interval. We adopt a network structure similar to DCGAN (Radford et al., 2015), with noise generation and inference parts to protect data privacy; the effectiveness of this structure has been verified in (Arjovsky et al., 2017). To impose a certain level of noise on the network, we choose Gaussian noise with zero mean (hence no bias) and multiple values of the standard deviation. The Gaussian distribution is widely used in privacy-preserving algorithms (see the Gaussian mechanism and its variants in (Dwork and Roth, 2013)) and usually results in (ε, δ)-differential privacy. We add L2 regularization on the weights of the generator and discriminator, which has little impact on our bound in Lemma 3.5.

In the first experiment we investigate how the change in noise level affects the image quality. Four groups of generated images are plotted in Figure 1, corresponding to four different noise values. In each group, the leftmost column shows the generated images for a certain noise value. The other three columns are the corresponding nearest-neighbor images from the training set, which demonstrates that the distortions of the images are caused by the noise rather than by bad training images. The distance between training images and generated images is the Euclidean norm. Comparing the generated images with their nearest neighbors, it is clear that our model does not simply memorize the training data but is capable of generating photographic samples with unique details. As mentioned in (Goodfellow et al., 2014), these images indeed come from actual samples of the model distribution, rather than the conditional means given samples of hidden units. Most importantly, the generated images in each group of Figure 1 show that the larger the variance of the noise is, the blurrier the generated images are, when all other conditions are the same. In the sense of differential privacy, any observer who obtains the generated images can hardly tell whether a data point was involved in the training procedure, as elaborated in Theorem 1 and illustrated by the generated images in Figure 1. The observer has no way to reconstruct the training images in such a case, and hence the privacy of the data is protected. This demonstrates that our model successfully addresses the privacy issue mentioned previously. The noise level (σ_n) is recommended to be tuned over a large range to guarantee good quality of the generated images. In addition, it can be seen from the results that our method does not suffer from mode collapse or gradient vanishing, which is an advantage inherited from the WGAN structure.

Figure 2. Wasserstein distance for different privacy levels when applying DPGAN on MNIST. We can see that the curves converge and exhibit more fluctuations as more noise is added.
(a) Digits 0 and 1; (b) Digits 2 and 3; (c) Digits 4 and 5.
Figure 3. Binary classification tasks on the MNIST database with different training strategies. In each panel, from left to right, we use the real training data, data generated without noise, and data generated with different noise levels. We can see that as less noise is added, the accuracy of the classifier built on the generated data gets higher, which indicates that the generated data has better quality.

4.2. Relationship between Privacy Level and the Convergence of the Network

In the second experiment, we plot the Wasserstein distance every 100 generator iterations; the corresponding results are shown in Figure 2. As expected, the Wasserstein distance decreases as the training procedure goes on and converges in the end, which is the joint effect of the discriminator and the generator, and this convergence correlates well with the visual quality of the generated samples (Arjovsky et al., 2017).

Despite the fluctuation caused by the min-max training itself, we can also observe that a smaller ε (hence larger noise) leads to more frequent fluctuations and larger variance, which is especially clear in the latter half of the curves. This conforms to the common intuition that more noise results in blurrier images, and is consistent with the results of the previous experiment. One interesting phenomenon is that peaks often appear after the Wasserstein distance has converged. Further evidence suggests that this might be caused by clipping the weights: clipping is equivalent to adjusting the gradient in directions where the corresponding gradient magnitude is too large. Different from a gradient descent step (even a noisy one), which always moves the weights towards the optimal solution, the effect of such an adjustment is hard to predict and hence can cause instability, which is especially visible once the network has converged. However, these peaks are quickly eliminated during the training procedure and the network maintains numerical stability, because the generator is in its convergence stage, which is one of the advantages of adversarial networks. Hence our system does not suffer from a divergence problem. Again, this experiment demonstrates the most important property of a learning system with differential privacy considerations: there exists a trade-off between learning performance and privacy level.

4.3. Classification on MNIST Data

In this section we conduct a binary classification task to further evaluate the quality of the generated MNIST data, using the same settings as in Subsection 4.1. Taking the digit pair 0 and 1 as an example, we generate 0s and 1s separately from their own training samples (using all of them), with different noise values. For each digit, we generate the same number of samples as there are training samples. Then, for each fixed noise level (and for the real training set), we randomly select 4000 samples from the generated data (2000 for each of 0 and 1), build classifiers on them, and test on MNIST's test set. We repeat this 100 times and report the test accuracy (Figure 3) of classifiers built from the training data and from the generated data with different noise standard deviations. Finally, we run the same procedure for the digit pairs 2/3 and 4/5.
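The following Python sketch outlines this evaluation protocol under stated assumptions: the generated digits are held in arrays gen_zeros and gen_ones, the real MNIST test split restricted to the two digits is (x_test, y_test), and the classifier choice (logistic regression) is an assumption made only for illustration, since the paper does not specify the classifier for this experiment:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def evaluate_once(gen_zeros, gen_ones, x_test, y_test, rng):
    # Randomly select 2000 generated samples per digit (4000 total).
    idx0 = rng.choice(len(gen_zeros), size=2000, replace=False)
    idx1 = rng.choice(len(gen_ones), size=2000, replace=False)
    x_tr = np.vstack([gen_zeros[idx0], gen_ones[idx1]])
    y_tr = np.concatenate([np.zeros(2000), np.ones(2000)])
    clf = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)
    return clf.score(x_test, y_test)  # accuracy on the real test digits

def evaluate(gen_zeros, gen_ones, x_test, y_test, repeats=100, seed=0):
    # Repeat the random selection 100 times and aggregate the accuracy.
    rng = np.random.default_rng(seed)
    accs = [evaluate_once(gen_zeros, gen_ones, x_test, y_test, rng)
            for _ in range(repeats)]
    return np.mean(accs), np.std(accs)
```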

The results are shown in Figure 3. As less noise is added, the accuracy increases (indicating better generation quality) and the variance of the curves generally decreases. The generation quality is little affected once ε is above some threshold (for example, somewhere between 3.0 and 11.0 for the digit pair 0/1). Thus it is recommended to choose an ε larger than that threshold (i.e., add less noise) so that the generated data are not affected much; note that a threshold between 3.0 and 11.0 is still quite a reasonable privacy level. Comparing the three panels, the digit pair 0/1 performs better than the other two, because the shapes of digits 0 and 1 make them easy to separate. This experiment uses a classification task to demonstrate the trade-off between learning performance and privacy level.

Figure 4. Dimension-wise probability (DWP) evaluation on the MIMIC-III database with different noise values (1070 points). We can see that as more noise is added, the distribution of the generated data in each dimension deviates more from that of the real training data.

Figure 5. Dimension-wise prediction evaluation on the MIMIC-III database with different noise values. We can see that as more noise is added, the AUC of classifiers built from the generated data gets lower and the data get sparser.

4.4. Electronic Health Records

In this section we apply DPGAN to generate Electronic Health Records (EHR), where the privacy of patients needs to be protected. EHR is one of the most important information sources from which we can learn the genetic and biological characteristics of a certain population. However, access to EHR requires administrative permission in consideration of privacy protection, which is very inconvenient for the research community. Choi et al. proposed medGAN (Choi et al., 2017), which can successfully generate EHR based on the MIMIC-III critical care dataset (Johnson et al., 2016; Goldberger et al., 2000), but the sensitive information is not guaranteed to be protected. MIMIC-III is a well-known public EHR database consisting of the medical records of 46,520 intensive care unit (ICU) patients over 11 years. In our experiments we use only the extracted ICD-9 codes (International Statistical Classification of Diseases and Related Health Problems, 9th revision) and group them using their first 3 digits. For each patient (1 out of 46,520) and each admission to a hospital, we record which diseases this patient has and encode them as a binary indicator vector. For example, if patient A has been diagnosed with 3 diseases (with grouped ICD-9 codes 9, 42, and 146, respectively) in one admission, we use a vector with a 1 in positions 9, 42, and 146 and 0 elsewhere to represent patient A's visit. We then add up all vectors (over different admissions and different hospitals) of a certain patient, so that each patient has one and only one aggregated vector, and binarize the data by converting all non-zero elements to 1. These vectors serve as a summary of the historical record of each patient's health condition and can be considered as patient features; collectively, they also allow us to extract useful population-level information. Notice that we remove patient data with missing values before feeding them into the network.

Similar to the previous experiments, we set the learning rates of the discriminator and the generator to the same value. The parameter clip constant is c_p and n_d is equal to 2. The batch size, sampling probability q, and noise scale σ_n are set analogously for the MIMIC-III dataset. We adopt the same network structure as in (Choi et al., 2017). After generating the data, we set a threshold at 0.5 to convert the generated data matrix from the continuous domain to the binary domain. Since the quality of EHR cannot be inspected visually the way images can, we adopt the dimension-wise probability (DWP) (Choi et al., 2017) as a quantitative measure of the quality of the generated data, which checks whether the model has learned each dimension's distribution correctly. Through DWP we study how the performance of DPGAN varies with the noise level.
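The DWP computation itself is simple; the sketch below (with real and generated assumed to be numpy arrays of shape [num_patients, num_disease_groups]) compares the per-dimension empirical Bernoulli success probabilities after thresholding the generated matrix at 0.5:

```python
import numpy as np

def dimension_wise_probability(real, generated, threshold=0.5):
    gen_binary = (generated >= threshold).astype(np.float32)
    p_real = real.mean(axis=0)        # per-dimension Bernoulli estimate (real)
    p_gen = gen_binary.mean(axis=0)   # per-dimension Bernoulli estimate (fake)
    return p_real, p_gen

# Points (p_real[j], p_gen[j]) close to the diagonal y = x indicate that the
# generator has captured that dimension's marginal distribution well.
```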

The results are shown in Figure 4 for different noise magnitudes. Each point in the figure is a pair of numbers representing the Bernoulli success probability of the real data (x-axis) and of the generated data (y-axis) for one dimension (corresponding to one disease). The Bernoulli success probability of each dimension is the sample mean of that dimension (the maximum likelihood estimate for independent Bernoulli trials), i.e., the proportion of 1s in that column. This characterizes the rareness of that disease, and together these values reflect the distribution of diseases among the population, which is a very important statistical characteristic that may be frequently queried. Hence it is necessary to protect the people who contribute to this distribution by adding noise. Beyond the theoretical result in Theorem 1, we can understand the privacy protection in an intuitive way. On one hand, if no noise is added (Figure 4 (a)), changing the database by adding one person may change the observed frequency of a certain disease to some extent. This change is especially significant when the number of people in the database is small or when a group of people is changed (see "group privacy" in (Dwork et al., 2014)). By looking at this change, an observer may draw conclusions that harm the interests of anyone involved in the database. For example, adding a group of people may increase the frequency of a certain disease; if this disease is highly related to quality of life or is a rare disease, a health insurance company may raise people's premiums. On the other hand, if noise is added (Figure 4 (b) to (d)), the observer cannot tell what the effect of adding this person (or these people) is, because the output is uncertain (associated with a noise distribution), and the generated data will hardly leak any patient's private information. This uncertainty grows as more noise is added, which can be seen from Figure 4. On the whole, this experiment shows that our model indeed provides protection in the sense of differential privacy on medical data, and addresses the problem we mentioned in the abstract.

Note that the rareness of diseases is also well protected by the noise perturbation. Suppose there were a publicly available generated EHR dataset based on the EHR of a certain population; an insurance company might raise the insurance premiums of those who have rare diseases, based on statistical information inferred from the generated EHR data. Since DPGAN may change the apparent rareness of diseases, the insurance company cannot obtain this type of information accurately from our generated data, and thus the interests of this group of people are protected.

The results also indicate how well the generative model captures the training data's distribution. In Figure 4 (a), most of the points are concentrated around the line y = x, which indicates that our model captures each dimension's distribution correctly. It can also be seen from Figure 4 (left to right) that a larger noise variance makes more points deviate from the line y = x. This means that, for a given disease, the rareness in the generated data becomes more different from that in the real data, indicating that the quality of the generated data is degraded. This phenomenon matches our intuition that applying a higher level of noise leads to a worse approximation of the distribution, and is consistent with the evidence in Figure 2 (a) of (Choi et al., 2017).

4.5. Classification on EHR Data

Continuing from the previous subsection, we use dimension-wise prediction (DWpre) (Choi et al., 2017) to evaluate how well the generative model recovers the relationships among the dimensions of the data. The basic idea of DWpre is to select the same column from the training set and the generated set as the target and use the remaining columns as features. We then build logistic regression classifiers on both sets and test them on the test set; the assumption is that a closer performance of the two classifiers indicates better quality of the generated set. Due to the highly unbalanced test data (0 dominates), we use AUC as the measure.
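A Python sketch of this DWpre evaluation, assuming binary matrices real_train, gen_train, and real_test of shape [num_patients, num_disease_groups], is given below:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def dwpre_auc(train, real_test, target_col):
    # Fit logistic regression on all columns except the target, score by AUC.
    cols = [c for c in range(train.shape[1]) if c != target_col]
    x_tr, y_tr = train[:, cols], train[:, target_col]
    x_te, y_te = real_test[:, cols], real_test[:, target_col]
    if len(np.unique(y_tr)) < 2 or len(np.unique(y_te)) < 2:
        return None  # column learned as all-zero (or all-one); skip it
    clf = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(x_te)[:, 1])

def dwpre(real_train, gen_train, real_test):
    pairs = []
    for j in range(real_train.shape[1]):
        auc_real = dwpre_auc(real_train, real_test, j)
        auc_gen = dwpre_auc(gen_train, real_test, j)
        if auc_real is not None and auc_gen is not None:
            pairs.append((auc_real, auc_gen))
    return pairs  # closer pairs indicate better generated data
```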

The results are shown in Figure 5. Although in most cases classifiers trained on real data perform better than classifiers trained on generated data, the AUC values of the generated data decrease as ε decreases (more noise added). This is because the noise perturbs the training of the discriminator and affects the generator indirectly, which makes the output distribution deviate from the real one and can result in poor test performance. It can also be seen that the decrease in performance is not large, which is one of the advantages of our model. The points get sparser as more noise is added, which reflects another impact of the noise on the data. This is because we use logistic regression for binary classification, which cannot be trained on a column with only a single label: sparse columns are common in the original data, it is harder for the generative model to capture the sparsity of a certain column when there is more perturbation, and more columns are therefore learned as all-zero and discarded when selected as the target in the classification task. In summary, higher privacy reduces the generative model's ability to capture inter-dimensional relationships. Nevertheless, our framework avoids the common issue in differentially private systems that adding noise causes too large a drop in performance.

5. Conclusion

In this paper, we proposed a privacy-preserving generative adversarial network (DPGAN) that preserves the privacy of the training data in a differentially private sense. Our algorithm is rigorously proved to guarantee (ε, δ)-differential privacy. We conducted experiments on two benchmark datasets to show that our algorithm can generate data points of good quality and converges under both noise and limited training data, with meaningful learning curves useful for tuning hyperparameters. For future work, we will consider reducing the privacy budget by trying different ways of clipping, and also tightening the utility bound.

Acknowledgements.
This research is supported in part by the National Science Foundation under Grants IIS-1565596 (JZ), IIS-1615597 (JZ), IIS-1650723 (FW) and IIS-1716432 (FW), and by the Office of Naval Research under grant numbers N00014-14-1-0631 (JZ) and N00014-17-1-2265 (JZ).

References