Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-Ray Data

05/06/2022
by Joceline Ziegler, et al.

Privacy regulations and the physical distribution of heterogeneous data are often primary concerns for the development of deep learning models in a medical context. This paper evaluates the feasibility of differentially private federated learning for chest X-ray classification as a defense against privacy attacks on DenseNet121 and ResNet50 network architectures. We simulated a federated environment by distributing images from the public CheXpert and Mendeley chest X-ray datasets unevenly among 36 clients. Both non-private baseline models achieved an area under the ROC curve (AUC) of 0.94 on the binary classification task of detecting the presence of a medical finding. We demonstrate that both model architectures are vulnerable to privacy violation by applying image reconstruction attacks to local model updates from individual clients. The attack was particularly successful during later training stages. To mitigate the risk of privacy breach, we integrated Rényi differential privacy with a Gaussian noise mechanism into local model training. We evaluate model performance and attack vulnerability for privacy budgets ϵ ∈ {1, 3, 6, 10}. The DenseNet121 achieved the best utility-privacy trade-off with an AUC of 0.94 for ϵ = 6. Model performance deteriorated slightly for individual clients compared to the non-private baseline. The ResNet50 only reached an AUC of 0.76 in the same privacy setting. Its performance was inferior to that of the DenseNet121 for all considered privacy constraints, suggesting that the DenseNet121 architecture is more robust to differentially private training.
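
The abstract describes adding a Gaussian noise mechanism with Rényi-DP accounting to each client's local training. The sketch below illustrates how such a local update could look in PyTorch with the Opacus library; this is an assumption for illustration only, since the paper does not specify its implementation, and the function name local_dp_update, the hyperparameter values, and the data-loading details are hypothetical.

```python
# Hypothetical sketch of one client's local update with DP-SGD under an
# RDP (Rényi differential privacy) accountant. Library choice (Opacus),
# hyperparameters, and helper names are assumptions, not taken from the paper.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import models
from opacus import PrivacyEngine
from opacus.validators import ModuleValidator


def local_dp_update(client_loader: DataLoader, global_state: dict,
                    epochs: int = 1, target_epsilon: float = 6.0,
                    target_delta: float = 1e-5, max_grad_norm: float = 1.0):
    """Train one client's copy of the global model with DP-SGD and return
    the updated weights together with the spent privacy budget."""
    model = models.densenet121(num_classes=1)          # binary: finding / no finding
    model = ModuleValidator.fix(model)                 # swap BatchNorm for DP-compatible GroupNorm
    model.load_state_dict(global_state, strict=False)  # start from the current global model

    optimizer = optim.SGD(model.parameters(), lr=1e-3)
    criterion = nn.BCEWithLogitsLoss()

    # Gaussian noise mechanism with an RDP accountant; the noise multiplier is
    # calibrated so the whole local run stays within target_epsilon.
    privacy_engine = PrivacyEngine(accountant="rdp")
    dp_model, optimizer, client_loader = privacy_engine.make_private_with_epsilon(
        module=model,
        optimizer=optimizer,
        data_loader=client_loader,
        epochs=epochs,
        target_epsilon=target_epsilon,
        target_delta=target_delta,
        max_grad_norm=max_grad_norm,   # per-sample gradient clipping bound
    )

    dp_model.train()
    for _ in range(epochs):
        for images, labels in client_loader:
            optimizer.zero_grad()
            loss = criterion(dp_model(images).squeeze(1), labels.float())
            loss.backward()            # per-sample gradients are clipped and noised here
            optimizer.step()

    spent_epsilon = privacy_engine.get_epsilon(target_delta)
    # dp_model wraps the same parameters, so the plain model holds the updated weights.
    return model.state_dict(), spent_epsilon
```

In a full simulation of the setting described above, the returned state dict would be aggregated across the 36 clients (e.g., by federated averaging), and the reconstruction attack would target exactly these per-client local updates.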

