1 Introduction
The rise of data analytics and machine learning (ML) presents countless opportunities for companies, governments and individuals to benefit from the accumulated data. At the same time, their ability to capture fine levels of detail potentially compromises the privacy of data providers. Recent research
[Fredrikson et al.2015, Shokri et al.2017, Hitaj et al.2017] suggests that even in a black-box setting it is possible to infer the presence of individual examples in the training set or recover certain features of these examples.

Among methods that tackle privacy issues of machine learning is the recent concept of federated learning (FL) [McMahan et al.2016]. In the FL setting, a central entity (server) wants to train a model on user data without actually copying these data from user devices. Instead, users (clients) update models locally, and the server aggregates these models. One popular approach is federated averaging, FedAvg [McMahan et al.2016], where clients do local on-device gradient descent on their data and then send these updates to the server, where they get averaged. Privacy can be further enhanced by using secure multi-party computation (MPC) [Yao1982] to allow the server access only to averaged updates of a large group of users and not to individual ones.
Despite many advantages, federated learning does have a number of challenges. First, the result of FL is a single trained model (we will therefore refer to it as a model release method), which does not provide much flexibility in the future. For instance, it would significantly reduce possibilities for further aggregation from different sources, e.g. different hospitals trying to combine federated models trained on their patients' data. Second, this solution requires data to be labelled at the source, which is not always possible, because users may be unqualified to label their data or unwilling to do so. A good example is again a medical application, where users are unqualified to diagnose themselves but at the same time would want to keep their condition private. Third, it does not provide provable privacy guarantees, and there is no reason to believe that the aforementioned attacks do not work against it. Some papers propose to augment FL with differential privacy (DP) to alleviate this issue [McMahan et al.2017, Geyer et al.2017]. While these approaches perform well in ML tasks and provide theoretical privacy guarantees, they are often restrictive (e.g. many DP methods for ML assume, implicitly or explicitly, access to public data of a similar nature or abundant amounts of data, which is not always realistic).
In our work, we address these problems by proposing to combine the strengths of federated learning and recent advancements in generative models to perform privacy-preserving data release, which has many immediate advantages. First, the released data could be used to train any ML model (we refer to it as the downstream task or the downstream model) without additional assumptions. Second, data from different sources could be easily pooled, providing possibilities for hierarchical aggregation and building stronger models. Third, labelling and verification can be done later down the pipeline, relieving some trust and expertise requirements on users. Fourth, released data could be traded on data markets (https://www.datamakespossible.com/valueofdata2018/dawnofdatamarketplace), where anonymisation and protection of sensitive information is one of the biggest obstacles. Finally, data publishing would facilitate transparency and reproducibility of research studies.
The main idea of our approach, named FedGP for federated generative privacy, is to train generative adversarial networks (GANs) [Goodfellow et al.2014] on clients to produce artificial data that can replace the clients' real data. Since some clients may have insufficient data to train a GAN locally, we instead train a federated GAN model. First of all, user data still remain on their devices. Second, the federated GAN produces samples from the common cross-user distribution and not from a specific single user, which adds to overall privacy. Third, it allows releasing entire datasets, thereby possessing all the benefits of private data release as opposed to model release. Figure 1 depicts the schematics of our approach for two clients.
To estimate potential privacy risks, we use our post hoc privacy analysis framework [Triastcyn and Faltings2019], designed specifically for private data release using GANs.

Our contributions in this paper are the following:

- on the one hand, we extend our approach for private data release to the federated setting, broadening its applicability and enhancing privacy;
- on the other hand, we modify the federated learning protocol to allow the range of benefits mentioned above;
- we demonstrate that downstream models trained on artificial data achieve high learning performance while maintaining good average-case privacy and being resilient to model inversion attacks.
2 Related Work
In recent years, as machine learning applications have become commonplace, the body of work on the security of these methods has grown at a rapid pace. Several important vulnerabilities and corresponding attacks on ML models have been discovered, raising the need for suitable defences. Among the attacks that compromise privacy of training data, model inversion [Fredrikson et al.2015] and membership inference [Shokri et al.2017] received high attention.
Model inversion [Fredrikson et al.2015] is based on observing the output probabilities of the target model for a given class and performing gradient descent on an input reconstruction. Membership inference [Shokri et al.2017] assumes an attacker with access to similar data, which is used to train a "shadow" model, mimicking the target, and an attack model. The latter predicts whether a certain example has already been seen during training, based on its output probabilities. Note that both attacks can be performed in a black-box setting, without access to the model's internal parameters.

To protect privacy while still benefiting from the use of statistics and ML, many techniques have been developed over the years, including k-anonymity [Sweeney2002], l-diversity [Machanavajjhala et al.2007], t-closeness [Li et al.2007], and differential privacy (DP) [Dwork2006].
Most of the ML-specific literature in the area concentrates on the task of privacy-preserving model release. One take on the problem is to distribute training and use disjoint datasets. For example, [Shokri and Shmatikov2015] propose to train a model in a distributed manner by communicating sanitised updates from participants to a central authority. Such a method, however, yields high privacy losses [Abadi et al.2016, Papernot et al.2016]. An alternative technique, suggested by [Papernot et al.2016], also uses disjoint training sets and builds an ensemble of independently trained teacher models to transfer knowledge to a student model by labelling public data. This result has been extended in [Papernot et al.2018] to achieve state-of-the-art image classification results in a private setting (with single-digit DP bounds). A different approach is taken by [Abadi et al.2016]. They suggest using differentially private stochastic gradient descent (DP-SGD) to train deep learning models in a private manner. This approach achieves high accuracy while maintaining low DP bounds, but may also require pre-training on public data.
A more recent line of research focuses on private data release and providing privacy via generating synthetic data [Bindschaedler et al.2017, Huang et al.2017, BeaulieuJones et al.2017]. In this scenario, DP is hard to guarantee, and thus such models either relax the DP requirements or remain limited to simple data. In [Bindschaedler et al.2017], the authors use a graphical probabilistic model to learn an underlying data distribution and transform real data points (seeds) into synthetic data points, which are then filtered by a privacy test based on a plausible deniability criterion. This procedure would be rather expensive for complex data, such as images. [Fioretto and Van Hentenryck2019] employ decision trees for a hybrid model/data release solution and guarantee stronger differential privacy, but like the previous approach, it would be difficult to adapt to more complex data. Alternatively, [Huang et al.2017] introduce the notion of generative adversarial privacy and use GANs to obfuscate real data points w.r.t. predefined private attributes, enabling privacy for more realistic datasets. Finally, a natural approach to try is training GANs using DP-SGD [BeaulieuJones et al.2017, Xie et al.2018, Zhang et al.2018]. However, it proved extremely difficult to stabilise training with the necessary amount of noise, which grows with the number of model parameters. This makes these methods inapplicable to more complex datasets without resorting to unrealistic (at least for some areas) assumptions, like access to public data from the same distribution.
On the other end of the spectrum, [McMahan et al.2016] proposed federated learning as one possible solution to privacy issues (among other problems, such as scalability and communication costs). In this setting, privacy is enforced by keeping data on user devices and only submitting model updates to the server. It can be augmented by MPC [Bonawitz et al.2017] to prevent the server from accessing individual updates, and by DP [McMahan et al.2017, Geyer et al.2017] to provide rigorous theoretical guarantees.
3 Preliminaries
This section provides necessary definitions and background. Let us commence with approximate differential privacy.
Definition 1.
A randomised function (mechanism) $\mathcal{M}: \mathcal{D} \to \mathcal{R}$ with domain $\mathcal{D}$ and range $\mathcal{R}$ satisfies $(\varepsilon, \delta)$-differential privacy if for any two adjacent inputs $d, d' \in \mathcal{D}$ and for any set of outcomes $S \subseteq \mathcal{R}$ the following holds:

$\Pr[\mathcal{M}(d) \in S] \le e^{\varepsilon} \Pr[\mathcal{M}(d') \in S] + \delta.$  (1)
Definition 2.
The privacy loss of a randomised mechanism $\mathcal{M}$ for inputs $d, d'$ and outcome $o$ takes the following form:

$L(\mathcal{M}, d, d', o) = \log \frac{\Pr[\mathcal{M}(d) = o]}{\Pr[\mathcal{M}(d') = o]}.$  (2)
Definition 3.
The Gaussian noise mechanism achieving $(\varepsilon, \delta)$-DP, for a function $f: \mathcal{D} \to \mathbb{R}^m$, is defined as

$\mathcal{M}(d) = f(d) + \mathcal{N}(0, \sigma^2),$  (3)

where $\sigma > \sqrt{2 \log(1.25/\delta)}\, S_f / \varepsilon$ and $S_f$ is the L2-sensitivity of $f$.
For more details on differential privacy and the Gaussian mechanism, we refer the reader to [Dwork and Roth2014].
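As an illustration of Definition 3, the Gaussian mechanism for a scalar query can be sketched in a few lines. This is a minimal toy example, not the paper's implementation; the function and parameter names are ours, and the calibration $\sigma = \sqrt{2\ln(1.25/\delta)}\, S_f/\varepsilon$ is the classical one from [Dwork and Roth2014], valid for $\varepsilon < 1$:

```python
import math
import random

def gaussian_mechanism(value: float, l2_sensitivity: float,
                       eps: float, delta: float) -> float:
    """Release `value` with (eps, delta)-DP by adding Gaussian noise
    calibrated as sigma = sqrt(2 ln(1.25/delta)) * S_f / eps."""
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * l2_sensitivity / eps
    return value + random.gauss(0.0, sigma)

# Example: privately release a mean whose L2-sensitivity is 1/n for n = 1000.
noisy = gaussian_mechanism(0.42, l2_sensitivity=1e-3, eps=0.5, delta=1e-5)
```

Note how the noise scale shrinks with the query's sensitivity: aggregate statistics over many users need far less noise than per-user values.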
In our privacy estimation framework, we also use some classical notions from probability and information theory.
Definition 4.
The Kullback–Leibler (KL) divergence between two continuous probability distributions $P$ and $Q$ with corresponding densities $p$, $q$ is given by:

$D_{KL}(P \,\|\, Q) = \int_{-\infty}^{+\infty} p(x) \log \frac{p(x)}{q(x)}\, dx.$  (4)

Note that the KL divergence between the distributions of $\mathcal{M}(d)$ and $\mathcal{M}(d')$ is nothing but the expectation of the privacy loss random variable $L(\mathcal{M}, d, d', o)$.

Finally, we use the Bayesian perspective on estimating the mean from data to get sharper bounds on the expected privacy loss compared to the original work [Triastcyn and Faltings2019]. More specifically, we use the following proposition.
Proposition 1.
Let $X = (x_1, \ldots, x_n)$ be a random vector drawn from a distribution $p(x)$ with mean $\mu$ and variance $\sigma^2$, and let $\bar{x}$ and $s$ be the sample mean and the sample standard deviation of $X$. Then,

$\Pr\left[\mu \le \bar{x} + \frac{s}{\sqrt{n}}\, T^{-1}_{n-1}(\gamma)\right] \ge \gamma,$  (5)

where $T^{-1}_{n-1}(\gamma)$ is the inverse CDF of the Student's $t$-distribution with $n-1$ degrees of freedom at $\gamma$.

The proof of this proposition can be obtained by using the maximum entropy principle with a flat (uninformative) prior to get the marginal distribution of the sample mean, and observing that the random variable $(\mu - \bar{x})\sqrt{n}/s$ follows the Student's $t$-distribution with $n-1$ degrees of freedom [Oliphant2006].
4 Federated Generative Privacy
In this section, we describe our algorithm, what privacy it can provide and how to evaluate it, and discuss current limitations.
4.1 Method Description
In order to keep participants' data private while still maintaining flexibility in downstream tasks, our algorithm produces a federated generative model. This model can output artificial data that do not belong to any real user in particular but come from the common cross-user data distribution.
Let $C$ be a set of clients holding private datasets. Before the training protocol starts, the server provides each client with a generator and a critic model, and clients initialise their models randomly. As in a normal FL setting, the training process afterwards consists of communication rounds. In each round, clients update their respective models by performing one or more passes through their data and submit generator updates to the server through MPC, while keeping the critics private. At the beginning of the next round, the server provides the updated common generator to all clients.
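The round structure described above can be sketched as follows. This is an illustrative skeleton, not the paper's code: generator weights are represented as flat lists of floats, local GAN training is stubbed out, and the secure aggregation that in FedGP runs under MPC is shown as a plain average:

```python
from typing import List

Weights = List[float]

def local_gan_step(generator: Weights, private_data) -> Weights:
    """Stand-in for one or more local GAN passes: the client trains its
    private critic on `private_data` and computes a generator update.
    Here we simply return a zero update as a placeholder."""
    return [0.0 for _ in generator]

def aggregate(updates: List[Weights]) -> Weights:
    """Server-side averaging of generator updates (FedAvg-style).
    In FedGP this average is computed via MPC, so the server never
    sees an individual client's update."""
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]

def communication_round(generator: Weights, clients: List[object]) -> Weights:
    """One round: clients compute updates locally, the server averages
    them and applies the average to the common generator."""
    updates = [local_gan_step(generator, c) for c in clients]
    avg = aggregate(updates)
    return [w + d for w, d in zip(generator, avg)]
```

The key design point is visible in the code: only the generator ever crosses the client boundary; the critic, which sees real data directly, never appears in the messages to the server.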
This approach has a number of important advantages:

- Data do not physically leave user devices.
- Only generators (which do not come directly into contact with data) are shared, and critics remain private.
- Using artificial data in downstream tasks adds another layer of protection and limits the information leakage to artificial samples. This is especially useful given that ML models can be attacked to extract training data [Fredrikson et al.2015], sometimes even when protected by DP [Hitaj et al.2017].
What remains is to assess how much information an attacker would gain about the original data. We do so by employing a notion introduced in earlier work [Triastcyn and Faltings2019] that we name differential average-case privacy.
It is important to clarify why we do not use the standard DP to provide stronger theoretical guarantees: we found it extremely difficult to train GANs with the amount of noise required for meaningful DP guarantees. Despite a number of attempts [BeaulieuJones et al.2017, Xie et al.2018, Zhang et al.2018], we are not aware of any technically sound solution that would generalise beyond very simple datasets.
4.2 Differential AverageCase Privacy
Our framework builds upon ideas of empirical DP (EDP) [Abowd et al.2013, Schneider and Abowd2015] and onaverage KL privacy [Wang et al.2016]. The first can be viewed as a measure of sensitivity on posterior distributions of outcomes [Charest and Hou2017] (in our case, generated data distributions), while the second relaxes DP notion to the case of an average user.
More specifically, we say that a mechanism $\mathcal{M}$ is $(\mu, \gamma)$-DAP if for two neighbouring datasets $d, d'$, where data come from an observed distribution, it holds that

$\Pr\big[\,\mathbb{E}_{o}\,[L(\mathcal{M}, d, d', o)] \le \mu\,\big] \ge \gamma.$  (6)

For the sake of example, let each data point in $d$ represent a single user. Then, $(\mu, \gamma)$-DAP could be interpreted as follows: with probability $\gamma$, a typical user submitting their data will change the outcome probabilities of the private algorithm on average by at most a factor of $e^{\mu}$ (roughly by $\mu$ in relative terms, because $e^{\mu} \approx 1 + \mu$ for small $\mu$).
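To make this interpretation concrete, the worst relative change in an outcome probability implied by a bound $\mu$ is $e^{\mu} - 1$. A one-line check (the function name and example value are ours, not taken from the paper's results):

```python
import math

def max_relative_change(mu: float) -> float:
    """Largest relative change in an outcome probability implied by an
    expected privacy loss bound mu: the probability ratio is at most
    e^mu, i.e. a relative change of at most e^mu - 1."""
    return math.exp(mu) - 1.0

# For small mu, e^mu - 1 is approximately mu itself:
print(max_relative_change(0.1))   # ~0.105, i.e. about a 10.5% change
```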
4.3 Generative Differential AverageCase Privacy
In the case of generative models, and GANs in particular, we do not have access to exact posterior distributions. A straightforward EDP procedure in our scenario would be the following: (1) train a GAN on the original dataset; (2) remove a random sample from the dataset; (3) retrain the GAN on the updated set; (4) estimate the probabilities of all outcomes and the maximum privacy loss value; (5) repeat (1)–(4) sufficiently many times to approximate $\varepsilon$ and $\delta$.
If the generative model is simple, this procedure can be used without modification. Otherwise, for models like GANs, it becomes prohibitively expensive due to repetitive retraining (steps (1)–(3)). Another obstacle is estimating the maximum privacy loss value (step (4)). To overcome these two issues, we propose the following.
First, to avoid retraining, we imitate the removal of examples directly on the generated set $\tilde{D}$. We define a similarity metric $\mathrm{sim}(x, y)$ between two data points $x$ and $y$ that reflects important characteristics of the data (see Section 5 for details). For every randomly selected real example $x_i$, we remove its $k$ nearest artificial neighbours to simulate the absence of this example in the training set and obtain $\tilde{D}_{-i}$. Our intuition behind this operation is the following. Removing a real example would result in a lower probability density in the corresponding region of space. If this change is picked up by a GAN, which we assume is properly trained (e.g. there is no mode collapse), the density of this region in the generated examples' space should also decrease. The number of neighbours $k$ is defined by the ratio of artificial to real examples, to keep the density normalised.
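The removal heuristic can be sketched as follows. This is a simplified illustration (names are ours): plain Euclidean distance stands in for the feature-space sim metric of Section 4.3, which in the paper is computed between InceptionV3 feature vectors:

```python
import math
from typing import List, Tuple

Point = Tuple[float, ...]

def distance(x: Point, y: Point) -> float:
    # Stand-in for the sim metric (in the paper: distance between
    # InceptionV3 feature vectors of the two examples).
    return math.dist(x, y)

def simulate_removal(real_example: Point, artificial: List[Point],
                     k: int) -> List[Point]:
    """Imitate removing `real_example` from the training set by dropping
    its k nearest artificial neighbours from the generated set."""
    ranked = sorted(artificial, key=lambda a: distance(real_example, a))
    to_drop = {id(a) for a in ranked[:k]}
    return [a for a in artificial if id(a) not in to_drop]
```

In practice, k would be set to the ratio of artificial to real examples, so that removing one real example removes a proportional share of generated density.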
Second, we relax the worst-case privacy loss bound in step (4) to the expected-case bound, in the same manner as on-average KL privacy. This relaxation allows us to use a high-dimensional KL divergence estimator [PérezCruz2008] to obtain the expected privacy loss for every pair of adjacent datasets ($\tilde{D}$ and $\tilde{D}_{-i}$). There are two major advantages of this estimator: it converges almost surely to the true value of the KL divergence, and it does not require intermediate density estimates to converge to the true probability measures. Moreover, since this estimator uses nearest neighbours to approximate KL divergence, our heuristic described above is naturally linked to the estimation method.
Finally, having obtained sufficiently many such pairwise estimates of the expected privacy loss, we use Proposition 1 to determine the DAP parameters $\mu$ and $\gamma$. This is an improvement over the original DAP, because this way we can get much sharper bounds on the expected privacy loss.
4.4 Limitations
Our approach has a number of limitations that should be taken into consideration.
First of all, existing limitations of GANs (or generative models in general), such as training instability or mode collapse, will apply to this method. Hence, at the current state of the field, our approach may be difficult to adapt to inputs other than image data. Yet, there is still a number of privacy-sensitive applications, e.g. medical imaging or facial analysis, that could benefit from our technique. And as generative methods progress, new uses will be possible.
Second, since critics remain private and do not leave user devices, their performance can be hampered by a small number of training examples. Nevertheless, we observe that even in settings where some users have smaller datasets, the overall discriminative ability of all critics is sufficient to train good generators.
Lastly, our empirical privacy guarantee is not as strong as traditional DP and has certain limitations [Charest and Hou2017]. However, due to the lack of DP-achieving training methods for GANs, it is still beneficial to have an idea about the expected privacy loss rather than no guarantee at all.
5 Evaluation
Table 1: Test accuracy of student models trained on real data (Baseline) and on artificial data from the centralised (CentGP) and federated (FedGP) GANs.

Setting    | Dataset      | Baseline | CentGP | FedGP
i.i.d.     | MNIST (500)  |          |        |
           | MNIST (1000) |          |        |
           | MNIST (2000) |          |        |
non-i.i.d. | MNIST (500)  |          |        |
           | MNIST (1000) |          |   —    |
           | MNIST (2000) |          |        |
In this section, we describe the experimental setup and implementation, and evaluate our method on MNIST [LeCun et al.1998] and CelebA [Liu et al.2015] datasets.
5.1 Experimental Setting
We evaluate two major aspects of our method. First, we show that training ML models on data created by the common generator achieves high accuracy on MNIST (Section 5.2). Second, we estimate expected privacy loss of the federated GAN and evaluate the effectiveness of artificial data against model inversion attacks on CelebA face attributes (Section 5.3).
Learning performance experiments are set up as follows:

1. Train the federated generative model (teacher) on the original data distributed across a number of users.
2. Generate an artificial dataset with the obtained model and use it to train ML models (students).
3. Evaluate the students on a held-out test set.
We choose two commonly used image datasets, MNIST and CelebA. MNIST is a handwritten digit recognition dataset consisting of 60000 training examples and 10000 test examples, each example being a 28×28 greyscale image. CelebA is a facial attributes dataset with 202599 images, each of which we crop to 128×128 and then downscale to 48×48.
In our experiments, we use Python and the PyTorch framework (http://pytorch.org). For implementation details of the GANs and the privacy evaluation, please refer to [Triastcyn and Faltings2019]. To train the federated generator we use the FedAvg algorithm [McMahan et al.2016]. As the sim function introduced in Section 4.3, we use the distance between InceptionV3 [Szegedy et al.2016] feature vectors.

Table 2: Expected privacy loss bounds $\mu$ (at fixed $\gamma$).

Setting    | Dataset      | $\mu$
i.i.d.     | MNIST (500)  |
           | MNIST (1000) |
           | MNIST (2000) |
           | CelebA       |
non-i.i.d. | MNIST (500)  |
           | MNIST (1000) |
           | MNIST (2000) |
5.2 Learning Performance
First, we evaluate the generalisation ability of the student model trained on artificial data. More specifically, we train a student model on generated data and report test classification accuracy on a held-out real set. We compare learning performance with the baseline centralised model trained on original data, as well as with the same model trained on artificial samples obtained from the centrally trained GAN (CentGP).
Since critics stay private and can only access the data of a single user, the size of each individual dataset has a significant effect. Therefore, in our experiment we vary the sizes of user datasets and observe their influence on training. In each experiment, we specify an average number of points per user, while the actual number is drawn from a uniform distribution with this mean, with some clients getting as few as 100 data points.
We also study two settings: i.i.d. and non-i.i.d. data. In the first setting, the distribution of classes for each client is identical to the overall distribution. In the second, every client gets samples of 2 random classes, imitating a situation where a single user observes only a part of the overall data distribution.
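The non-i.i.d. sharding described above can be reproduced with a simple partitioning routine. This is an illustrative sketch in plain Python (the paper does not specify its exact sampling code; the function name and signature are ours):

```python
import random
from collections import defaultdict
from typing import Dict, List

def shard_non_iid(labels: List[int], num_clients: int,
                  classes_per_client: int = 2,
                  seed: int = 0) -> Dict[int, List[int]]:
    """Assign each client the example indices of `classes_per_client`
    randomly chosen classes, imitating a user who observes only part of
    the overall data distribution."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = sorted(by_class)
    shards = {}
    for c in range(num_clients):
        chosen = rng.sample(classes, classes_per_client)
        shards[c] = [i for cls in chosen for i in by_class[cls]]
    return shards
```

In the i.i.d. setting, by contrast, each client would simply receive a uniform random subset of all indices.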
Details of the experiment can be found in Table 1. We observe that training on artificial data from the federated GAN achieves accuracy approaching that of the baseline trained on real data. We can also see how accuracy grows with the average user dataset size. A less expected observation is that the non-i.i.d. setting is actually beneficial for FedGP. A possible reason is that training critics with little data becomes easier when these data are less diverse (i.e. the number of different classes is smaller). Comparing with the centralised generative privacy model CentGP, we can also see that FedGP is more affected by the sharding of data across user devices than by the overall data size, suggesting that further research on training federated generative models is necessary.
5.3 Privacy Analysis
Using the privacy estimation framework (see Sections 4.2 and 4.3), we fix the probability $\gamma$ of exceeding the expected privacy loss bound in all experiments and compute the corresponding $\mu$ for each dataset and both settings. Table 2 summarises the bounds we obtain. As anticipated, the privacy guarantee improves with the growing number of data points, because the influence of each individual example diminishes. Moreover, the average privacy loss $\mu$ is, expectedly, significantly smaller than the typical worst-case DP loss in similar settings. To put this in perspective, the average change in outcome probabilities estimated by DAP remains small even in the more difficult settings, while a state-of-the-art DP method would place the worst-case change at hundreds or even thousands of percent without giving much information about the typical case.
Table 3: Face detection and recognition rates (OpenFace) on images reconstructed by the model inversion attack.

            | Baseline | FedGP
Detection   |          |
Recognition |          |
On top of estimating expected privacy loss bounds, we test FedGP's resistance to the model inversion attack [Fredrikson et al.2015]. More specifically, we run the attack on two student models: one trained on original data samples and one trained on artificial samples. Note that we also experimented with another well-known attack on machine learning models, membership inference [Shokri et al.2017]. However, we did not include it in the final evaluation because of the attacker's poor performance in our setting (nearly random-guess accuracy for the given datasets and models, even on the non-private baseline). Moreover, we only consider passive adversaries and leave evaluation with active adversaries, e.g. [Hitaj et al.2017], for future work.
In order to run the attack, we train a student model (a simple multilayer perceptron with two hidden layers of 1000 and 300 neurons) in two settings: on the real data and on the artificial data generated by the federated GAN. As facial recognition is a more privacy-sensitive application and provides a better visualisation of the attack, we pick the CelebA attribute prediction task for this experiment.
We analyse real and reconstructed image pairs using OpenFace [Amos et al.2016] (see Table 3). This confirms our theory that artificial samples shield real data in the case of an attack on the downstream model. In the images reconstructed from the non-private model, faces were detected and recognised in a large proportion of cases. For our method, detection succeeded only for a small fraction of faces, and the recognition rate was well within the state-of-the-art error margin for face recognition.
Figure 2 shows results of the model inversion attack. The top row presents the real target images. The following rows depict reconstructed images from the non-private model and from the model trained on the federated GAN samples. One can observe a clear information loss in the reconstructed images going from the non-private to the FedGP-trained model. Despite failing to conceal general shapes in training images (i.e. faces), our method seems to achieve a trade-off, hiding most of the specific features, while the non-private model reveals important facial features, such as skin and hair colour, expression, etc. The obtained reconstructions are either very noisy or converge to some average featureless faces.
6 Conclusions
We study the intersection of federated learning and private data release using GANs. Combined, these methods enable important advantages and applications for both fields, such as higher flexibility, reduced trust and expertise requirements on users, hierarchical data pooling, and data trading.
The choice of GANs as a generative model ensures scalability and makes the technique suitable for realworld data with complex structure. In our experiments, we show that student models trained on artificial data can achieve high accuracy on classification tasks. Moreover, models can also be validated on artificial data. Importantly, unlike many prior approaches, our method does not assume access to similar publicly available data.
We estimate and bound the expected privacy loss of an average client by using differential average-case privacy, thus enhancing the privacy of traditional federated learning. We find that, in most scenarios, the presence or absence of a single data point would not significantly change the outcome probabilities on average. Additionally, we evaluate the provided protection by running the model inversion attack and showing that training with the federated GAN reduces information leakage (e.g. the rate of successful face detection in recovered images drops sharply; see Table 3).
Considering the importance of privacy research, the lack of good solutions for private data publishing, and the rising popularity of federated learning, there is a lot of potential for future work. In particular, a major direction for advancing current research would be achieving differential privacy guarantees for generative models while still preserving high utility of generated data. A step in another direction would be to improve our empirical privacy concept, e.g. by bounding the maximum privacy loss rather than the average, or by finding a more principled way of sampling from outcome distributions.
References
 [Abadi et al.2016] Martín Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 308–318. ACM, 2016.

[Abowd et al.2013] John M Abowd, Matthew J Schneider, and Lars Vilhuber. Differential privacy applications to bayesian and linear mixed model estimation. Journal of Privacy and Confidentiality, 5(1):4, 2013.
 [Amos et al.2016] Brandon Amos, Bartosz Ludwiczuk, Mahadev Satyanarayanan, et al. OpenFace: A general-purpose face recognition library with mobile applications. 2016.

[BeaulieuJones et al.2017] Brett K Beaulieu-Jones, Zhiwei Steven Wu, Chris Williams, and Casey S Greene. Privacy-preserving generative deep neural networks support clinical data sharing. bioRxiv, page 159756, 2017.
 [Bindschaedler et al.2017] Vincent Bindschaedler, Reza Shokri, and Carl A Gunter. Plausible deniability for privacy-preserving data synthesis. Proceedings of the VLDB Endowment, 10(5), 2017.
 [Bonawitz et al.2017] Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 1175–1191. ACM, 2017.
 [Charest and Hou2017] Anne-Sophie Charest and Yiwei Hou. On the meaning and limits of empirical differential privacy. Journal of Privacy and Confidentiality, 7(3):3, 2017.
 [Dwork and Roth2014] Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4):211–407, 2014.
 [Dwork2006] Cynthia Dwork. Differential privacy. In 33rd International Colloquium on Automata, Languages and Programming, part II (ICALP 2006), volume 4052, pages 1–12, Venice, Italy, July 2006. Springer Verlag.
 [Fioretto and Van Hentenryck2019] Ferdinando Fioretto and Pascal Van Hentenryck. Privacy-preserving federated data sharing. In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019), pages 638–646. International Foundation for Autonomous Agents and Multiagent Systems, 2019.
 [Fredrikson et al.2015] Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pages 1322–1333. ACM, 2015.
 [Geyer et al.2017] Robin C Geyer, Tassilo Klein, and Moin Nabi. Differentially private federated learning: A client level perspective. arXiv preprint arXiv:1712.07557, 2017.
 [Goodfellow et al.2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
 [Hitaj et al.2017] Briland Hitaj, Giuseppe Ateniese, and Fernando PérezCruz. Deep models under the gan: information leakage from collaborative deep learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 603–618. ACM, 2017.
 [Huang et al.2017] Chong Huang, Peter Kairouz, Xiao Chen, Lalitha Sankar, and Ram Rajagopal. Contextaware generative adversarial privacy. Entropy, 19(12):656, 2017.
 [LeCun et al.1998] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradientbased learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

[Liu et al.2015]
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang.
Deep learning face attributes in the wild.
In
Proceedings of International Conference on Computer Vision (ICCV)
, 2015.  [Machanavajjhala et al.2007] Ashwin Machanavajjhala, Daniel Kifer, Johannes Gehrke, and Muthuramakrishnan Venkitasubramaniam. ldiversity: Privacy beyond kanonymity. ACM Transactions on Knowledge Discovery from Data (TKDD), 1(1):3, 2007.
 [McMahan et al.2016] H Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, et al. Communication-efficient learning of deep networks from decentralized data. arXiv preprint arXiv:1602.05629, 2016.
 [McMahan et al.2017] H Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. Learning differentially private recurrent language models. arXiv preprint arXiv:1710.06963, 2017.
 [Oliphant2006] Travis E Oliphant. A bayesian perspective on estimating mean, variance, and standard deviation from data. 2006.
 [Papernot et al.2016] Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Semi-supervised knowledge transfer for deep learning from private training data. arXiv preprint arXiv:1610.05755, 2016.
 [Papernot et al.2018] Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Úlfar Erlingsson. Scalable private learning with pate. arXiv preprint arXiv:1802.08908, 2018.
 [PérezCruz2008] Fernando Pérez-Cruz. Kullback-Leibler divergence estimation of continuous distributions. In Information Theory, 2008. ISIT 2008. IEEE International Symposium on, pages 1666–1670. IEEE, 2008.
 [Schneider and Abowd2015] Matthew J Schneider and John M Abowd. A new method for protecting interrelated time series with bayesian prior distributions and synthetic data. Journal of the Royal Statistical Society: Series A (Statistics in Society), 178(4):963–975, 2015.
 [Shokri and Shmatikov2015] Reza Shokri and Vitaly Shmatikov. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC conference on computer and communications security, pages 1310–1321. ACM, 2015.
 [Shokri et al.2017] Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In Security and Privacy (SP), 2017 IEEE Symposium on, pages 3–18. IEEE, 2017.
[Sweeney2002] Latanya Sweeney. k-anonymity: A model for protecting privacy. Int. J. Uncertain. Fuzziness Knowl.-Based Syst., 10(5):557–570, October 2002.
 [Szegedy et al.2016] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
 [Triastcyn and Faltings2019] Aleksei Triastcyn and Boi Faltings. Generating artificial data for private deep learning. In Proceedings of the PAL: Privacy-Enhancing Artificial Intelligence and Language Technologies, AAAI Spring Symposium Series, number 2335 in CEUR Workshop Proceedings, pages 33–40, 2019.
 [Wang et al.2016] YuXiang Wang, Jing Lei, and Stephen E Fienberg. Onaverage klprivacy and its equivalence to generalization for maxentropy mechanisms. In International Conference on Privacy in Statistical Databases, pages 121–134. Springer, 2016.
 [Xie et al.2018] Liyang Xie, Kaixiang Lin, Shu Wang, Fei Wang, and Jiayu Zhou. Differentially private generative adversarial network. arXiv preprint arXiv:1802.06739, 2018.
 [Yao1982] Andrew Chi-Chih Yao. Protocols for secure computations. In FOCS, volume 82, pages 160–164, 1982.
 [Zhang et al.2018] Xinyang Zhang, Shouling Ji, and Ting Wang. Differentially private releasing via deep generative model. arXiv preprint arXiv:1801.01594, 2018.