Federated and Differentially Private Learning for Electronic Health Records

11/13/2019 · Stephen R. Pfohl et al. · Google, Stanford University

The use of collaborative and decentralized machine learning techniques such as federated learning has the potential to enable the development and deployment of clinical risk prediction models in low-resource settings without requiring that sensitive data be shared or stored in a central repository. This process necessitates communication of model weights or updates between collaborating entities, but it is unclear to what extent patient privacy is compromised as a result. To gain insight into this question, we study the efficacy of centralized versus federated learning in both private and non-private settings. The clinical prediction tasks we consider are the prediction of prolonged length of stay and in-hospital mortality across thirty-one hospitals in the eICU Collaborative Research Database. We find that while it is straightforward to apply differentially private stochastic gradient descent to achieve strong privacy bounds when training in a centralized setting, it is considerably more difficult to do so in the federated setting.


1 Introduction

The availability of high quality public clinical data sets (Johnson et al., 2016; Pollard et al., 2018) has greatly accelerated research into the use of machine learning for the development of clinical decision support tools. However, the majority of clinical data remain in private silos and are broadly unavailable for research due to concerns over patient privacy, inhibiting the collaborative development of high fidelity predictive models across institutions. Additionally, standard de-identification protocols provide limited safety guarantees against sophisticated re-identification attacks (El Emam et al., 2011; Gkoulalas-Divanis et al., 2014; Kleppner and Sharp, 2009). Furthermore, patient privacy may be violated even in the case where no raw data is shared with downstream parties, as trained machine learning models are susceptible to membership inference attacks (Shokri et al., 2017), model inversion Fredrikson et al. (2015), and training data extraction Carlini et al. (2018).

In line with recent work Beaulieu-Jones et al. (2018); Vepakomma et al. (2018b), we investigate the extent to which several hospitals can collaboratively train clinical risk prediction models with formal privacy guarantees without sharing data. In particular, we employ federated averaging McMahan et al. (2017) and differentially private stochastic gradient descent McMahan et al. (2017, 2018); Abadi et al. (2016) to train models for in-hospital mortality and prolonged length of stay prediction across thirty-one hospitals in the eICU Collaborative Research Database (eICU-CRD) Pollard et al. (2018).

1.1 Federated Learning

Federated learning McMahan et al. (2017) is a general technique for decentralized optimization across a collection of entities without sharing data, typically employed for training machine learning models on mobile devices. In the variant known as federated averaging, each entity trains a local model for a fixed number of epochs over the local training data and transfers the resulting weights to a central server. The server returns the average of the weights to each entity and the process repeats. This satisfies an intuitive notion of privacy, since no entity shares data with the central server or with any other entity. However, federated learning alone provides no formal accounting for the privacy cost incurred via the communication of local model weights with the central server.
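
To make the procedure concrete, the sketch below outlines federated averaging with model weights represented as lists of NumPy arrays. The helpers `make_model` and `local_train` are hypothetical placeholders standing in for each entity's model initialization and local training loop; this is a minimal illustration, not the authors' implementation.

```python
import numpy as np

def federated_averaging(entity_datasets, n_rounds, local_epochs=1):
    global_weights = make_model()  # hypothetical: returns a list of weight arrays
    for _ in range(n_rounds):
        local_weights = []
        for data in entity_datasets:
            # Each entity starts from the current global weights and trains on
            # its own data; only the resulting weights leave the entity.
            w = local_train(data, [np.copy(w) for w in global_weights],
                            epochs=local_epochs)
            local_weights.append(w)
        # The central server averages the returned weights layer by layer and
        # broadcasts the result back to every entity for the next round.
        global_weights = [np.mean([w[i] for w in local_weights], axis=0)
                          for i in range(len(global_weights))]
    return global_weights
```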

1.2 Differential Privacy

Formally, a randomized algorithm $\mathcal{M}: \mathcal{D} \rightarrow \mathcal{R}$ with domain $\mathcal{D}$ and range $\mathcal{R}$ satisfies $(\epsilon, \delta)$-differential privacy Dwork et al. (2014) if for any two adjacent data sets $d, d' \in \mathcal{D}$ and for any subset of outputs $S \subseteq \mathcal{R}$,

$$\Pr[\mathcal{M}(d) \in S] \leq e^{\epsilon} \Pr[\mathcal{M}(d') \in S] + \delta \qquad (1)$$

In our case, the randomized algorithm we consider is differentially private stochastic gradient descent (DP-SGD) (Abadi et al., 2016; McMahan et al., 2018). Here, adjacent data sets $d$ and $d'$ are defined by adding, removing, or modifying the data for one record. This formulation can be informally interpreted as one where the inclusion of a record does not affect the probability distribution over learned model weights by more than a factor $e^{\epsilon}$, where $\delta$ bounds the probability of the restriction not holding. Notably, this notion allows us to bound and quantify the capability of an adversary to determine whether a record belonged to the training data set, regardless of their access to auxiliary information Dwork et al. (2014).

In practice, stochastic gradient descent can be made differentially private if, over each batch of training data, the record-level gradients are clipped to a maximum L2 norm $C$ and Gaussian noise with standard deviation proportional to $zC$ is added to the mean of the clipped gradients McMahan et al. (2018), where $z$ is the noise multiplier. The privacy loss over the procedure may then be accounted for with the moments accountant Abadi et al. (2016); McMahan et al. (2018) and Rényi differential privacy (Mironov, 2017). In this setting, the privacy cost of a training procedure is fully specified by the noise multiplier $z$, the ratio of the batch size to the training set size, and the number of training steps McMahan et al. (2018). McMahan et al. (2017) demonstrate that it is straightforward to formulate federated learning in a way that is conducive to differentially private training if DP-SGD is used as the local optimization algorithm.
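
As a rough illustration of a single DP-SGD update, the sketch below clips record-level gradients and adds Gaussian noise before the weight update. The function name and arguments are illustrative assumptions rather than the authors' implementation, and the privacy accounting (the moments accountant) is not shown.

```python
import numpy as np

def dp_sgd_step(weights, per_record_grads, lr, clip_norm, noise_multiplier, rng):
    batch_size = len(per_record_grads)
    clipped = []
    for g in per_record_grads:
        # Clip each record-level gradient to L2 norm at most clip_norm (C).
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g * scale)
    # Add Gaussian noise with standard deviation z * C to the sum of clipped
    # gradients, then divide by the batch size to obtain a noisy mean.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=weights.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / batch_size
    return weights - lr * noisy_mean

# Example usage with random data:
# rng = np.random.default_rng(0)
# w = np.zeros(10)
# grads = [rng.normal(size=10) for _ in range(32)]
# w = dp_sgd_step(w, grads, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=rng)
```

In practice, the cumulative $(\epsilon, \delta)$ for a full training run is then obtained by supplying the noise multiplier, the sampling ratio, and the number of steps to an accountant, such as the one provided by the TensorFlow Privacy library used in this work.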

1.3 Related Work

Our work is most similar to Beaulieu-Jones et al. (2018) in that they also investigate decentralized and differentially private machine learning for mortality prediction on the eICU-CRD, but use cyclical weight transfer Chang et al. (2018) rather than federated averaging for distributed optimization. Another related technique is split learning Gupta and Raskar (2018); Vepakomma et al. (2018a, ), where the layers of a neural network are partitioned across several entities, enabling learning across entities that may contribute different data modalities without exposing the raw data or the local network architecture. As an alternative, recent work Beaulieu-Jones et al. (2019); Xie et al. (2018) has proposed the use of differentially private generative models to publicly release synthetic data with privacy guarantees.

| Hosp. ID (N) | Prolonged LOS: Local Abs. AUC-ROC | Prolonged LOS: Central Rel. AUC-ROC | Prolonged LOS: Federated Rel. AUC-ROC | Mortality: Local Abs. AUC-ROC | Mortality: Central Rel. AUC-ROC | Mortality: Federated Rel. AUC-ROC |
|---|---|---|---|---|---|---|
| 73 (4,381) | 0.803 (0.761, 0.845) | 0.018 (-0.005, 0.042) | 0.030 (0.012, 0.047) | 0.791 (0.693, 0.890) | 0.049 (0.007, 0.090) | 0.025 (-0.007, 0.057) |
| 264 (3,875) | 0.631 (0.571, 0.690) | 0.058 (0.010, 0.106) | 0.067 (0.021, 0.114) | 0.846 (0.785, 0.908) | 0.015 (-0.015, 0.046) | -0.007 (-0.048, 0.034) |
| 420 (3,167) | 0.707 (0.648, 0.766) | 0.038 (0.003, 0.073) | 0.016 (-0.014, 0.046) | 0.811 (0.729, 0.894) | 0.036 (-0.006, 0.079) | 0.019 (-0.017, 0.055) |
| 338 (3,139) | 0.648 (0.584, 0.712) | 0.098 (0.047, 0.149) | 0.105 (0.056, 0.154) | 0.820 (0.722, 0.919) | 0.042 (-0.011, 0.094) | 0.047 (0.008, 0.085) |
| 243 (3,026) | 0.664 (0.595, 0.732) | 0.035 (0.003, 0.068) | 0.022 (-0.008, 0.052) | 0.910 (0.864, 0.956) | -0.006 (-0.035, 0.023) | -0.025 (-0.066, 0.016) |
| 458 (2,723) | 0.739 (0.679, 0.798) | 0.042 (0.006, 0.078) | 0.039 (0.004, 0.075) | 0.840 (0.754, 0.927) | 0.049 (-0.034, 0.131) | 0.030 (-0.034, 0.094) |
| 167 (2,680) | 0.773 (0.715, 0.831) | 0.010 (-0.022, 0.042) | -0.006 (-0.039, 0.028) | 0.845 (0.777, 0.914) | 0.031 (-0.016, 0.078) | 0.012 (-0.023, 0.048) |
| 300 (2,678) | 0.718 (0.652, 0.783) | 0.040 (-0.002, 0.082) | 0.023 (-0.009, 0.055) | 0.699 (0.557, 0.841) | 0.051 (-0.014, 0.116) | 0.042 (-0.027, 0.112) |
| 443 (2,666) | 0.700 (0.637, 0.763) | 0.037 (-0.002, 0.075) | 0.052 (0.013, 0.090) | 0.881 (0.816, 0.945) | 0.001 (-0.040, 0.043) | -0.019 (-0.052, 0.013) |
| 188 (2,591) | 0.773 (0.716, 0.830) | 0.014 (-0.021, 0.049) | 0.011 (-0.022, 0.044) | 0.850 (0.783, 0.917) | 0.013 (-0.041, 0.067) | -0.003 (-0.062, 0.056) |
| 208 (2,484) | 0.663 (0.595, 0.730) | 0.090 (0.040, 0.140) | 0.044 (0.010, 0.077) | 0.717 (0.596, 0.838) | 0.114 (0.007, 0.220) | 0.061 (-0.037, 0.159) |
| 252 (2,449) | 0.802 (0.748, 0.855) | 0.026 (-0.011, 0.063) | 0.011 (-0.022, 0.044) | 0.829 (0.738, 0.920) | 0.049 (-0.024, 0.122) | 0.037 (-0.040, 0.113) |
| 199 (2,215) | 0.760 (0.695, 0.825) | 0.016 (-0.024, 0.056) | 0.015 (-0.026, 0.055) | 0.838 (0.758, 0.918) | 0.033 (-0.023, 0.089) | 0.023 (-0.020, 0.066) |
| 122 (2,103) | 0.681 (0.608, 0.755) | -0.011 (-0.064, 0.042) | 0.001 (-0.044, 0.046) | 0.730 (0.605, 0.855) | 0.049 (-0.014, 0.112) | 0.045 (-0.010, 0.099) |
| 176 (1,942) | 0.696 (0.618, 0.775) | 0.066 (0.007, 0.124) | 0.056 (0.019, 0.092) | 0.886 (0.819, 0.954) | 0.039 (-0.009, 0.086) | 0.039 (-0.003, 0.081) |
| 281 (1,783) | 0.620 (0.528, 0.712) | 0.169 (0.093, 0.244) | 0.084 (0.030, 0.137) | 0.779 (0.605, 0.953) | 0.101 (0.006, 0.195) | 0.129 (0.005, 0.252) |
| 411 (1,747) | 0.726 (0.647, 0.806) | -0.011 (-0.057, 0.035) | -0.027 (-0.067, 0.013) | 0.925 (0.875, 0.975) | -0.039 (-0.107, 0.029) | -0.010 (-0.048, 0.027) |
| 413 (1,730) | 0.709 (0.623, 0.795) | 0.025 (-0.028, 0.078) | -0.008 (-0.058, 0.043) | 0.809 (0.668, 0.951) | 0.062 (-0.045, 0.169) | -0.146 (-0.303, 0.011) |
| 449 (1,613) | 0.801 (0.730, 0.872) | 0.051 (0.014, 0.088) | 0.052 (0.016, 0.088) | 0.854 (0.777, 0.931) | 0.031 (-0.016, 0.077) | 0.014 (-0.019, 0.048) |
| 394 (1,509) | 0.747 (0.666, 0.828) | -0.024 (-0.084, 0.037) | -0.012 (-0.051, 0.027) | 0.896 (0.811, 0.981) | 0.027 (-0.044, 0.097) | -0.010 (-0.054, 0.033) |
| 283 (1,478) | 0.603 (0.508, 0.698) | 0.084 (0.011, 0.156) | 0.074 (0.012, 0.136) | 0.807 (0.684, 0.929) | 0.082 (-0.029, 0.192) | 0.038 (-0.056, 0.132) |
| 307 (1,433) | 0.639 (0.543, 0.735) | 0.001 (-0.070, 0.072) | 0.025 (-0.034, 0.084) | 0.857 (0.750, 0.964) | 0.026 (-0.056, 0.108) | 0.029 (-0.041, 0.098) |
| 331 (1,397) | 0.668 (0.499, 0.836) | 0.091 (-0.020, 0.203) | 0.065 (-0.007, 0.137) | 0.431 (0.204, 0.658) | 0.409 (0.076, 0.742) | 0.308 (-0.028, 0.644) |
| 148 (1,386) | 0.791 (0.717, 0.866) | -0.025 (-0.080, 0.029) | -0.025 (-0.083, 0.032) | 0.815 (0.566, 1.000) | 0.062 (-0.102, 0.225) | 0.031 (-0.067, 0.130) |
| 345 (1,372) | 0.688 (0.599, 0.778) | 0.054 (-0.020, 0.128) | 0.040 (-0.028, 0.109) | 0.769 (0.605, 0.934) | 0.086 (-0.066, 0.237) | 0.080 (-0.067, 0.227) |
| 417 (1,369) | 0.716 (0.629, 0.804) | 0.082 (0.018, 0.147) | 0.054 (-0.007, 0.116) | 0.687 (0.518, 0.856) | 0.189 (0.018, 0.360) | 0.048 (-0.071, 0.167) |
| 165 (1,336) | 0.635 (0.536, 0.735) | 0.024 (-0.043, 0.092) | 0.045 (-0.012, 0.101) | 0.595 (0.425, 0.765) | 0.337 (0.186, 0.487) | 0.313 (0.175, 0.452) |
| 248 (1,334) | 0.730 (0.641, 0.819) | 0.009 (-0.048, 0.065) | 0.036 (-0.011, 0.083) | 0.777 (0.559, 0.995) | -0.012 (-0.099, 0.076) | -0.033 (-0.081, 0.016) |
| 416 (1,330) | 0.739 (0.648, 0.830) | 0.073 (0.012, 0.134) | 0.050 (0.004, 0.096) | 0.675 (0.391, 0.959) | 0.203 (0.003, 0.403) | 0.220 (0.048, 0.393) |
| 110 (1,305) | 0.673 (0.565, 0.781) | 0.097 (0.001, 0.192) | 0.075 (0.004, 0.147) | 0.947 (0.871, 1.000) | 0.008 (-0.017, 0.033) | -0.028 (-0.079, 0.024) |
| 183 (1,268) | 0.746 (0.656, 0.835) | 0.037 (-0.033, 0.108) | 0.024 (-0.045, 0.093) | 0.802 (0.627, 0.977) | -0.018 (-0.126, 0.090) | -0.002 (-0.059, 0.056) |

Table 1: Comparison of model performance for local, centralized, and federated training without differential privacy for each hospital and prediction task. Results shown are the estimates and 95% confidence intervals for the AUC-ROC under local training and for the difference in AUC-ROC of centralized and federated training relative to local training. Bold indicates a statistically significant improvement over local training, on the basis of zero not being contained within the confidence interval for the difference in AUC-ROC relative to the local model.

2 Methods

All experiments are based on data derived from the eICU Collaborative Research Database Pollard et al. (2018), a freely and publicly available intensive care database containing data from 139,367 unique patients admitted between 2014 and 2015 to 208 unique hospitals. Each patient may have one or more recorded hospital admissions, each composed of one or more ICU stays.

We make predictions at 24 hours into hospital admissions that last at least 24 hours. We assign binary outcome labels for in-hospital mortality and prolonged length of stay if the patient dies during the remainder of the hospital admission or if the admission lasts longer than 7 days, respectively.

To construct a training set for supervised learning, we first partition the set of admissions by hospital and then split the data within each hospital by patient such that 80%, 10%, and 10% of the patients are used for training, validation, and testing, respectively. We allow for multiple hospital admissions per patient, but no patient exists in more than one partition within the same hospital. We retain all hospitals with greater than 1,000 hospital admissions in their corresponding training data sets. This procedure produces a cohort of 65,509 labeled hospital admissions across 31 unique hospitals. The incidence of in-hospital mortality and prolonged length of stay in the aggregate population is 7.3% and 34.4%, respectively.
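
As an illustration of this partitioning scheme, the sketch below performs the per-hospital, patient-level 80/10/10 split. The admission records are assumed to be dictionaries with a `patient_id` key; this schema is an assumption for illustration, not the actual eICU-CRD data format.

```python
import random

def split_hospital(admissions, seed=0, fractions=(0.8, 0.1, 0.1)):
    # Split one hospital's admissions by patient so that no patient appears
    # in more than one partition within that hospital.
    patients = sorted({a["patient_id"] for a in admissions})
    random.Random(seed).shuffle(patients)
    n_train = int(fractions[0] * len(patients))
    n_val = int(fractions[1] * len(patients))
    groups = {p: "train" for p in patients[:n_train]}
    groups.update({p: "val" for p in patients[n_train:n_train + n_val]})
    groups.update({p: "test" for p in patients[n_train + n_val:]})
    # All admissions for a patient land in the same partition.
    return {split: [a for a in admissions if groups[a["patient_id"]] == split]
            for split in ("train", "val", "test")}
```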

We construct a feature representation as a function of data recorded within each hospital stay up to 24 hours into the stay. We extract all lab orders, lab results, medication orders, diagnoses, and active treatments, as well as the patient age at admission, gender, ethnicity, unit type, and admission source. Lab results and age are binned into three and four bins, respectively. We aggregate over time, assigning a one for each feature if it is observed anywhere in the admission prior to 24 hours and a zero otherwise.
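
A minimal sketch of this binarized feature construction follows, assuming the extracted clinical events are available as `(admission_id, feature_name, hours_since_admission)` tuples; these names and the event format are illustrative assumptions rather than the actual extraction pipeline.

```python
def build_features(events, feature_names, cutoff_hours=24):
    # One row of 0/1 indicators per admission: 1 if the feature was observed
    # at any point before the 24-hour cutoff, 0 otherwise.
    index = {name: i for i, name in enumerate(feature_names)}
    features = {}
    for admission_id, feature_name, hours in events:
        if hours < cutoff_hours and feature_name in index:
            row = features.setdefault(admission_id, [0] * len(feature_names))
            row[index[feature_name]] = 1
    return features
```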

For all supervised learning tasks, we consider only logistic regression and feedforward networks with one hidden layer. We perform model selection on the basis of the area under the receiver operating characteristic curve (AUC-ROC) evaluated on the corresponding validation set following a grid search over relevant hyperparameters. Model performance is reported as the 95% confidence interval of the AUC-ROC on the corresponding test set, derived via DeLong's method DeLong et al. (1988). We similarly derive confidence intervals for the difference in AUC-ROC between models to facilitate model comparisons; this procedure accounts for the correlated nature of the predictions made by the two models. The Adam optimizer Kingma and Ba (2014) is used in each case.
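
We use DeLong's method for these intervals; as a simple, swapped-in alternative for readers without an implementation at hand, the sketch below estimates the confidence interval for the difference in AUC-ROC with a paired bootstrap, which likewise accounts for the correlation between the two models' predictions. The function name and defaults are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def paired_bootstrap_auc_diff(y_true, scores_a, scores_b, n_boot=2000, seed=0):
    # 95% CI for AUC(model A) - AUC(model B): both models are evaluated on the
    # same resampled test indices, preserving the correlation between them.
    y_true, scores_a, scores_b = map(np.asarray, (y_true, scores_a, scores_b))
    rng = np.random.default_rng(seed)
    diffs = []
    while len(diffs) < n_boot:
        idx = rng.integers(0, len(y_true), size=len(y_true))
        if len(np.unique(y_true[idx])) < 2:
            continue  # skip resamples that contain only one class
        diffs.append(roc_auc_score(y_true[idx], scores_a[idx])
                     - roc_auc_score(y_true[idx], scores_b[idx]))
    return np.percentile(diffs, [2.5, 97.5])
```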

2.1 Experimental Design

We conduct a series of experiments designed to evaluate the relative benefits of centralized and federated learning, and the associated privacy costs, over learning using only local data at each hospital. We evaluate the following experimental conditions:

  • Local training with no collaboration. We identify a high performing model for each hospital using only data from that hospital following a grid search over learning rates, batch size, and hidden layer size if the model is a feedforward network.

  • Centralized training. We simulate the setting where all of the records are available in a central repository, selecting the best global model on the basis of the performance on the aggregated records and evaluating the model on the local data from each hospital.

  • Centralized training with differential privacy. We modify the centralized training procedure to use DP-SGD for optimization McMahan et al. (2018). Here we additionally search over the discrete grid of [0.1, 1, 10] for both the noise multiplier $z$ and the gradient clipping threshold $C$. We assess privacy in terms of the $\epsilon$ that results from training with a fixed $\delta$.

  • Federated learning. We employ the federated averaging algorithm described in McMahan et al. (2017). For each round of federated learning, we conduct one epoch of training using the local data at each hospital and then synchronize the weights across all hospitals with an average. We maintain a record of the local performance at each hospital over the federated learning procedure and perform local model selection on the basis of the best validation AUC-ROC observed over the procedure. Model selection for the best federated hyperparameters is determined on the basis of the best mean local validation AUC-ROC across hospitals.

  • Federated learning with differential privacy. We repeat the federated averaging experiment as previously described, but use DP-SGD as the local optimizer at each hospital, similar to the algorithm described in McMahan et al. (2017); see the sketch following this list. We experiment with fixed global DP-SGD hyperparameters and with local hyperparameters selected independently at each hospital. For the local hyperparameter search at each hospital, the DP-SGD hyperparameters are selected log-uniformly, performing model selection on the basis of the values that maximize local AUC-ROC in ten epochs of training without any collaboration. We then perform federated learning for ten rounds with the selected local DP-SGD hyperparameters.
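
For concreteness, the sketch below combines the two earlier sketches: one round of federated averaging in which each hospital runs local DP-SGD before synchronization. It is a simplified illustration under the same assumptions as before (the hypothetical helpers `dp_sgd_step` and `per_record_gradients`), not the exact procedure or hyperparameters used in our experiments.

```python
import numpy as np

def dp_federated_round(hospital_batches, global_weights, lr, clip_norm,
                       noise_multiplier, rng):
    # One round of federated averaging with DP-SGD as the local optimizer.
    local_weights = []
    for batches in hospital_batches:  # each element: an iterable of record batches
        w = np.copy(global_weights)
        for batch in batches:
            # Every local update is clipped and noised, so the weights shared
            # with the server carry a record-level differential privacy guarantee.
            grads = per_record_gradients(w, batch)  # hypothetical helper
            w = dp_sgd_step(w, grads, lr, clip_norm, noise_multiplier, rng)
        local_weights.append(w)
    # The server averages the locally trained (already privatized) weights.
    return np.mean(local_weights, axis=0)
```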

3 Results and Discussion

Prior to experimentation with differentially private training, we aimed to establish the efficacy of federated learning relative to centralized and local learning. We find that while federated learning often improves on local learning, frequently attaining an AUC-ROC comparable with that of centralized learning, the improvements are often not large enough to be statistically significant on the basis of the 95% confidence interval for the difference in AUC-ROC between either the central or federated model and the corresponding local model (Table 1). In particular, centralized and federated learning for prediction of prolonged length of stay improve on local learning for thirteen and twelve hospitals, respectively, whereas centralized and federated learning benefit mortality prediction in only seven and five cases, respectively.

When the records from all hospitals are aggregated for differentially private centralized training, it is feasible to attain relatively strong privacy guarantees in terms of $\epsilon$ for appropriate settings of the noise multiplier and clipping threshold (Figure 1), with a relatively minor reduction in the validation AUC-ROC at the end of training (prolonged length of stay 0.763 vs. 0.73; mortality 0.876 vs. 0.832). When attempting to perform federated learning in a differentially private manner, we find that even with DP-SGD hyperparameters selected on the basis of local training, the models derived from differentially private federated learning often perform poorly in terms of both AUC-ROC and $\epsilon$, and that this effect is exacerbated for mortality prediction (Table S1). It is likely that a practical tuning strategy for differentially private federated averaging could be identified with further experimentation, but it is unclear whether such a strategy would generalize to similar data sets and prediction tasks. This is problematic, for both this and related work, as neglecting to account for the privacy cost of model selection produces optimistic underestimates of the privacy costs Liu and Talwar (2018); Chaudhuri and Vinterbo (2013). In future work, it is of interest to conduct controlled experiments directly comparing our approach to cyclical weight transfer Beaulieu-Jones et al. (2018) and split learning Gupta and Raskar (2018); Vepakomma et al. (2018a, ) to gain insight into the relative efficacy of differentially private federated averaging over the alternatives.

Figure 1: Trade-off between the differential privacy guarantee $\epsilon$ and the validation AUC-ROC over a training procedure of 25 epochs.

Acknowledgments

We thank Michaela Hardt and Abhradeep Thakurta for valuable mentorship and feedback. We further thank Steve Chien and all contributors to the Tensorflow Privacy project for enabling this work.

References

  • M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang (2016) Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, CCS ’16, New York, NY, USA, pp. 308–318. External Links: ISBN 978-1-4503-4139-4, Link, Document Cited by: §1.2, §1.2, §1.
  • B. K. Beaulieu-Jones, Z. S. Wu, C. Williams, R. Lee, S. P. Bhavnani, J. B. Byrd, and C. S. Greene (2019) Privacy-preserving generative deep neural networks support clinical data sharing. Circulation: Cardiovascular Quality and Outcomes 12 (7), pp. e005122. Cited by: §1.3.
  • B. K. Beaulieu-Jones, W. Yuan, S. G. Finlayson, and Z. S. Wu (2018) Privacy-Preserving Distributed Deep Learning for Clinical Data. External Links: 1812.01484, Link Cited by: §1.3, §1, §3.
  • N. Carlini, C. Liu, J. Kos, Ú. Erlingsson, and D. Song (2018) The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets. External Links: 1802.08232, Link Cited by: §1.
  • K. Chang, N. Balachandar, C. Lam, D. Yi, J. Brown, A. Beers, B. Rosen, D. L. Rubin, and J. Kalpathy-Cramer (2018) Distributed deep learning networks among institutions for medical imaging. Journal of the American Medical Informatics Association : JAMIA 25 (8), pp. 945–954 (eng). External Links: Document, ISSN 1527-974X, Link Cited by: §1.3.
  • K. Chaudhuri and S. A. Vinterbo (2013) A stability-based validation procedure for differentially private machine learning. In Advances in Neural Information Processing Systems, pp. 2652–2660. Cited by: §3.
  • E. R. DeLong, D. M. DeLong, and D. L. Clarke-Pearson (1988) Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics 44 (3), pp. 837–45. External Links: ISSN 0006-341X, Link Cited by: §2.
  • C. Dwork, A. Roth, et al. (2014) The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science 9 (3–4), pp. 211–407. Cited by: §1.2, §1.2.
  • K. El Emam, E. Jonker, L. Arbuckle, and B. Malin (2011) A systematic review of re-identification attacks on health data. PloS one 6 (12), pp. e28071. Cited by: §1.
  • M. Fredrikson, S. Jha, and T. Ristenpart (2015) Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security - CCS ’15, New York, New York, USA, pp. 1322–1333. External Links: Document, ISBN 9781450338325, Link Cited by: §1.
  • A. Gkoulalas-Divanis, G. Loukides, and J. Sun (2014) Publishing data from electronic health records while preserving privacy: a survey of algorithms. Journal of biomedical informatics 50, pp. 4–19. Cited by: §1.
  • O. Gupta and R. Raskar (2018) Distributed learning of deep neural network over multiple agents. Journal of Network and Computer Applications 116, pp. 1–8. Cited by: §1.3, §3.
  • A. E. Johnson, T. J. Pollard, L. Shen, H. L. Li-wei, M. Feng, M. Ghassemi, B. Moody, P. Szolovits, L. A. Celi, and R. G. Mark (2016) MIMIC-III, a freely accessible critical care database. Scientific data 3, pp. 160035. Cited by: §1.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §2.
  • D. Kleppner and P. Sharp (2009) Committee on ensuring the utility and integrity of research data in a digital age. National Academy of Sciences, pp. 4. Cited by: §1.
  • J. Liu and K. Talwar (2018) Private Selection from Private Candidates. External Links: 1811.07971, Link Cited by: §3.
  • B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas (2017) Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, A. Singh and J. Zhu (Eds.), Proceedings of Machine Learning Research, Vol. 54, Fort Lauderdale, FL, USA, pp. 1273–1282. External Links: Link Cited by: §1.1, §1, 4th item.
  • H. B. McMahan, G. Andrew, U. Erlingsson, S. Chien, I. Mironov, N. Papernot, and P. Kairouz (2018) A General Approach to Adding Differential Privacy to Iterative Training Procedures. External Links: 1812.06210, Link Cited by: §1.2, §1.2, §1, 3rd item.
  • H. B. McMahan, D. Ramage, K. Talwar, and L. Zhang (2017) Learning Differentially Private Recurrent Language Models. External Links: 1710.06963, Link Cited by: §1.2, §1, 5th item.
  • I. Mironov (2017) Rényi differential privacy. In 2017 IEEE 30th Computer Security Foundations Symposium (CSF), Vol. , pp. 263–275. External Links: Document, ISSN Cited by: §1.2.
  • T. J. Pollard, A. E. Johnson, J. D. Raffa, L. A. Celi, R. G. Mark, and O. Badawi (2018) The eICU Collaborative Research Database, a freely available multi-center database for critical care research. Scientific data 5. Cited by: §1, §1, §2.
  • R. Shokri, M. Stronati, C. Song, and V. Shmatikov (2017) Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 3–18. Cited by: §1.
  • P. Vepakomma, O. Gupta, A. Dubey, and R. Raskar. Reducing leakage in distributed deep learning for sensitive health data. Cited by: §1.3, §3.
  • P. Vepakomma, O. Gupta, T. Swedish, and R. Raskar (2018a) Split learning for health: Distributed deep learning without sharing raw patient data. External Links: 1812.00564, Link Cited by: §1.3, §3.
  • P. Vepakomma, T. Swedish, R. Raskar, O. Gupta, and A. Dubey (2018b) No Peek: A Survey of private distributed deep learning. External Links: 1812.03288, Link Cited by: §1.
  • L. Xie, K. Lin, S. Wang, F. Wang, and J. Zhou (2018) Differentially Private Generative Adversarial Network. External Links: 1802.06739, Link Cited by: §1.3.