P3SGD: Patient Privacy Preserving SGD for Regularizing Deep CNNs in Pathological Image Classification

05/30/2019
by   Bingzhe Wu, et al.

Recently, deep convolutional neural networks (CNNs) have achieved great success in pathological image classification. However, due to the limited number of labeled pathological images, two challenges remain: (1) overfitting: a CNN's performance is undermined by overfitting because of its huge number of parameters and the insufficiency of labeled training data; (2) privacy leakage: a model trained in the conventional way may involuntarily reveal private information about the patients in the training dataset, and the smaller the dataset, the worse the leakage. To tackle these two challenges, we introduce a novel stochastic gradient descent (SGD) scheme, named patient privacy preserving SGD (P3SGD), which performs the SGD model update at the patient level via a large-step update built upon each patient's data. Specifically, to protect privacy and regularize the CNN model, we propose to inject well-designed noise into the updates. Moreover, we equip P3SGD with an elaborated strategy to adaptively control the scale of the injected noise. To validate the effectiveness of P3SGD, we perform extensive experiments on a real-world clinical dataset and quantitatively demonstrate its superior ability to reduce the risk of overfitting. We also provide a rigorous analysis of the privacy cost under differential privacy. Additionally, we find that models trained with P3SGD are more resistant to the model-inversion attack than those trained using non-private SGD.
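The abstract describes the mechanism only at a high level: a large-step update computed from each patient's data, noise injected into the aggregated update, and an adaptively controlled noise scale. Below is a minimal NumPy sketch of that structure, assuming a generic gradient function `loss_grad(w, data)`; the function names, the fixed `noise_scale`, and all hyperparameters are illustrative assumptions, and the paper's adaptive noise-control strategy and formal differential-privacy accounting are not reproduced here.

```python
import numpy as np

def patient_update(w, patient_data, loss_grad, local_lr=0.1, local_steps=5):
    """Large-step update from a single patient's data: a few local SGD
    steps, returned as a parameter delta (w_new - w)."""
    w_local = w.copy()
    for _ in range(local_steps):
        w_local -= local_lr * loss_grad(w_local, patient_data)
    return w_local - w

def p3sgd_step(w, patients, loss_grad, clip_norm=1.0, noise_scale=1.0,
               sample_size=8, rng=None):
    """One global update: sample a group of patients, clip each patient-level
    update to `clip_norm`, average, and add Gaussian noise whose standard
    deviation is calibrated to the clipping norm."""
    rng = rng or np.random.default_rng(0)
    m = min(sample_size, len(patients))
    sampled = rng.choice(len(patients), size=m, replace=False)
    agg = np.zeros_like(w)
    for i in sampled:
        delta = patient_update(w, patients[i], loss_grad)
        # Clip the per-patient update so no single patient dominates.
        delta *= min(1.0, clip_norm / (np.linalg.norm(delta) + 1e-12))
        agg += delta
    noise = rng.normal(0.0, noise_scale * clip_norm, size=w.shape)
    return w + (agg + noise) / m
```

Clipping each patient's update before aggregation bounds any single patient's influence on the model, which is what makes the Gaussian noise calibration, and hence a patient-level differential-privacy analysis, possible.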

Related research

10/30/2021 · Dynamic Differential-Privacy Preserving SGD: Differentially-Private Stochastic Gradient Descent (DP-SGD) prevents tra...
11/27/2021 · Towards Understanding the Impact of Model Size on Differential Private Classification: Differential privacy (DP) is an essential technique for privacy-preservi...
09/07/2022 · On the utility and protection of optimization with differential privacy and classic regularization techniques: Nowadays, owners and developers of deep learning models must consider st...
02/16/2022 · Measuring Unintended Memorisation of Unique Private Features in Neural Networks: Neural networks pose a privacy risk to training data due to their propen...
09/30/2018 · Privacy-preserving Stochastic Gradual Learning: It is challenging for stochastic optimizations to handle large-scale sen...
03/20/2021 · DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation: Recent success of deep neural networks (DNNs) hinges on the availability...
08/09/2022 · Combining Variational Modeling with Partial Gradient Perturbation to Prevent Deep Gradient Leakage: Exploiting gradient leakage to reconstruct supposedly private training d...
