When Differential Privacy Meets Interpretability: A Case Study

06/24/2021
by Rakshit Naidu, et al.

Given the increasing use of personal data for training Deep Neural Networks (DNNs) in tasks such as medical imaging and diagnosis, differentially private (DP) training of DNNs is surging in importance, and a large body of work focuses on improving the privacy-utility trade-off. However, little attention has been paid to the interpretability of these models, or to how applying DP affects the quality of their interpretations. We propose an extensive study of the effects of DP training on DNNs, particularly for medical imaging applications, using the APTOS dataset.
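The abstract pairs two concrete techniques: DP training of DNNs (in practice, typically DP-SGD with per-sample gradient clipping and added noise) and post-hoc interpretation of the trained model. As a minimal sketch of that setting, assuming PyTorch with the Opacus library, the example below trains a toy classifier under DP-SGD and then computes a plain gradient saliency map. The network, the synthetic stand-in for APTOS fundus images, and all hyperparameters are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch: DP-SGD training followed by a gradient saliency map.
# Assumptions: PyTorch + Opacus; toy model and random data stand in for
# the paper's actual architecture and the APTOS dataset.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Placeholder stand-in for APTOS fundus images (5 severity classes).
images = torch.randn(256, 3, 32, 32)
labels = torch.randint(0, 5, (256,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32)

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 5),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# Wrap model/optimizer/loader so each step clips per-sample gradients
# and adds Gaussian noise -- the DP-SGD mechanism.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.1,   # noise scale; trades utility for privacy
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

for epoch in range(2):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Simple gradient saliency map, |d score / d input|: the kind of
# interpretation whose quality the study compares with and without DP.
x = images[:1].clone().requires_grad_(True)
score = model(x).max()
score.backward()
saliency = x.grad.abs().max(dim=1).values  # (1, 32, 32) heatmap
print(saliency.shape)
```

Repeating the saliency computation on a non-private baseline (skipping the make_private call), or sweeping noise_multiplier, gives the kind of with/without-DP comparison the study's question implies; the values chosen here are arbitrary.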

Related research

09/09/2022
Bridging the Gap: Differentially Private Equivariant Deep Learning for Medical Image Analysis
Machine learning with formal privacy-preserving techniques like Differen...

01/27/2021
Dopamine: Differentially Private Federated Learning on Medical Data
While rich medical datasets are hosted in hospitals distributed across t...

10/07/2022
TAN without a burn: Scaling Laws of DP-SGD
Differentially Private methods for training Deep Neural Networks (DNNs) ...

05/16/2018
Regularization Learning Networks
Despite their impressive performance, Deep Neural Networks (DNNs) typica...

10/16/2021
DPNAS: Neural Architecture Search for Deep Learning with Differential Privacy
Training deep neural networks (DNNs) for meaningful differential privacy...

01/31/2019
AnomiGAN: Generative adversarial networks for anonymizing private medical data
Typical personal medical data contains sensitive information about indiv...

03/01/2023
A Deep Neural Architecture for Harmonizing 3-D Input Data Analysis and Decision Making in Medical Imaging
Harmonizing the analysis of data, especially of 3-D image volumes, consi...
