Identify Susceptible Locations in Medical Records via Adversarial Attacks on Deep Predictive Models

02/13/2018
by Mengying Sun, et al.

The surging availability of electronic health records (EHR) has led to increased research interest in medical predictive modeling. Recently, many deep learning based predictive models have been developed for EHR data and have demonstrated impressive performance. However, a series of recent studies showed that these deep models are not safe: they suffer from certain vulnerabilities. In short, a well-trained deep network can be extremely sensitive to inputs with negligible changes. These inputs are referred to as adversarial examples. In the context of medical informatics, such attacks could alter the result of a high-performance deep predictive model by slightly perturbing a patient's medical records. Such instability not only reflects the weakness of deep architectures; more importantly, it offers guidance for detecting susceptible parts of the inputs. In this paper, we propose an efficient and effective framework that learns a time-preferential minimum attack targeting an LSTM model with EHR inputs, and we leverage this attack strategy to screen patients' medical records and identify susceptible events and measurements. The efficient screening procedure can help decision makers pay extra attention to the locations that could cause severe consequences if not measured correctly. We conduct extensive empirical studies on a real-world urgent care cohort and demonstrate the effectiveness of the proposed screening approach.
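
To make the screening idea concrete, here is a minimal sketch, not the authors' exact method: the paper finds a time-preferential minimum perturbation by solving an optimization problem, whereas the sketch below substitutes a single FGSM-style gradient step on a toy PyTorch LSTM and ranks the (time step, measurement) cells of a synthetic record by gradient magnitude. The model architecture, the 48 x 19 record shape, and the budget eps are all illustrative assumptions.

```python
# A minimal sketch of gradient-based susceptibility screening on EHR data.
# NOTE: an illustrative stand-in, not the paper's attack. The model,
# record shape (48 time steps x 19 measurements), and eps are assumptions.
import torch
import torch.nn as nn

class EHRClassifier(nn.Module):
    """Toy LSTM classifier over sequences of continuous EHR measurements."""
    def __init__(self, n_features=19, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])             # logit from the last step

model = EHRClassifier().eval()                   # assume weights are trained

# One synthetic patient record; requires_grad lets us attack the input.
record = torch.randn(1, 48, 19, requires_grad=True)
label = torch.tensor([[1.0]])                    # observed outcome

# Gradient of the prediction loss w.r.t. the input.
loss = nn.functional.binary_cross_entropy_with_logits(model(record), label)
loss.backward()

# FGSM-style perturbation: a crude surrogate for the paper's
# time-preferential minimum attack.
eps = 0.05
adversarial = (record + eps * record.grad.sign()).detach()

# Screening: rank (time step, measurement) cells by gradient magnitude;
# the largest entries mark locations where tiny changes sway the prediction.
sensitivity = record.grad.abs().squeeze(0)       # shape: (time, features)
top = torch.topk(sensitivity.flatten(), k=5).indices
susceptible = [(int(i) // 19, int(i) % 19) for i in top]
print("Most susceptible (time_step, measurement) cells:", susceptible)
```

In the paper's formulation the perturbation would instead be the smallest (time-weighted) change that alters the model's prediction, but the gradient magnitudes above serve the same screening purpose: cells with large gradients are the ones where a small recording error can sway the prediction most.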

Related research

06/15/2021 · Adversarial Attacks on Deep Models for Financial Transaction Records
Machine learning models using transaction records as inputs are popular ...

10/31/2020 · Evaluation of Inference Attack Models for Deep Learning on Medical Data
Deep learning has attracted broad interest in healthcare and medical com...

06/19/2020 · Adversarial Attacks for Multi-view Deep Models
Recent work has highlighted the vulnerability of many deep machine learn...

04/14/2020 · Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions
Despite the remarkable performance and generalization levels of deep lea...

06/17/2019 · Scrubbing Sensitive PHI Data from Medical Records made Easy by SpaCy -- A Scalable Model Implementation Comparisons
De-identification of clinical records is an extremely important process ...

06/25/2018 · Exploring Adversarial Examples: Patterns of One-Pixel Attacks
Failure cases of black-box deep learning, e.g. adversarial examples, mig...

09/29/2020 · Adversarial Attacks Against Deep Learning Systems for ICD-9 Code Assignment
Manual annotation of ICD-9 codes is a time consuming and error-prone pro...
