Evaluation of Inference Attack Models for Deep Learning on Medical Data

10/31/2020
by Maoqiang Wu, et al.

Deep learning has attracted broad interest in the healthcare and medical communities. However, there has been little research into the privacy issues created by deep networks trained for medical applications. Recently developed inference attack algorithms indicate that images and text records can be reconstructed by malicious parties that have the ability to query deep networks. This raises the concern that medical images and electronic health records containing sensitive patient information are vulnerable to these attacks. This paper aims to draw the attention of the medical deep learning community to this important problem. We evaluate two prominent inference attack models, namely, the attribute inference attack and the model inversion attack. We show that they can reconstruct real-world medical images and clinical reports with high fidelity. We then investigate how to protect patients' privacy using defense mechanisms, such as label perturbation and model perturbation. We compare attack results between the original medical deep learning models and the models equipped with these defenses. The experimental evaluations show that our proposed defense approaches can effectively reduce the potential privacy leakage of medical deep learning from the inference attacks.
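For readers unfamiliar with model inversion, the core idea can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's models or data: the attacker, given only query access to a classifier's confidence scores, performs gradient ascent on the confidence of a target class to reconstruct a representative input for that class.

```python
import numpy as np

# Hypothetical "trained" model: a softmax classifier over 4 features
# with 2 classes. The weights stand in for any deployed medical model.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))

def predict_proba(x):
    """Softmax confidences the attacker can query."""
    z = x @ W
    e = np.exp(z - z.max())
    return e / e.sum()

# Model inversion: ascend the log-confidence of the target class
# to synthesize an input the model considers typical of that class.
x = np.zeros(4)
target = 1
lr = 0.5
for _ in range(200):
    p = predict_proba(x)
    # Gradient of log p[target] w.r.t. x is W[:, target] - W @ p.
    grad = W[:, target] - W @ p
    x += lr * grad
    x = np.clip(x, -3.0, 3.0)  # keep the reconstruction in a plausible range

print(predict_proba(x)[target])  # confidence in the target class grows close to 1
```

In a real attack the same loop runs over a deep network (with gradients estimated from queries if the model is a black box), and the reconstructed `x` can resemble training inputs, which is exactly the leakage the defenses above aim to suppress.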

