MedAttacker: Exploring Black-Box Adversarial Attacks on Risk Prediction Models in Healthcare

12/11/2021
by   Muchao Ye, et al.

Deep neural networks (DNNs) have been broadly adopted in health risk prediction to support healthcare diagnosis and treatment. To evaluate their robustness, existing research conducts adversarial attacks in the white/gray-box setting where model parameters are accessible. However, the more realistic black-box setting has been ignored, even though most real-world models are trained on private data and released as black-box services on the cloud. To fill this gap, we propose MedAttacker, the first black-box adversarial attack method against health risk prediction models, to investigate their vulnerability. MedAttacker addresses the challenges posed by EHR data via two steps: hierarchical position selection, which selects the attacked positions within a reinforcement learning (RL) framework, and substitute selection, which identifies substitutes with a score-based principle. In particular, by considering the temporal context inside EHRs, it initializes its RL position selection policy using the contribution score of each visit and the saliency score of each code, which integrate well with the deterministic substitute selection process driven by score changes. In experiments attacking three advanced health risk prediction models in the black-box setting across multiple real-world datasets, MedAttacker consistently achieves the highest average success rate and, in certain cases, even outperforms a recent white-box EHR adversarial attack technique. In addition, based on the experimental results, we include a discussion on defending against EHR adversarial attacks.
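The score-based substitute selection described above can be illustrated with a minimal sketch. This is not the authors' implementation: the model interface, the toy risk function, and all names (`select_substitute`, `toy_model`) are hypothetical. The idea is that, at a chosen (visit, code) position, the attacker queries the black-box model with each candidate substitute and keeps the one that produces the largest drop in the score of the original class.

```python
# Hypothetical sketch of score-based substitute selection in a black-box
# setting (illustrative only, not MedAttacker's actual code). The model is
# queried only through its output probabilities.

def select_substitute(model, ehr, visit_idx, code_idx, candidates, target=0):
    """Pick the candidate code whose substitution most reduces the
    model's probability for the original (target) class."""
    base_score = model(ehr)[target]               # one black-box query
    best_code, best_drop = None, 0.0
    for cand in candidates:
        perturbed = [visit[:] for visit in ehr]   # copy the visit lists
        perturbed[visit_idx][code_idx] = cand     # substitute one code
        drop = base_score - model(perturbed)[target]
        if drop > best_drop:                      # keep the largest score drop
            best_code, best_drop = cand, drop
    return best_code, best_drop


# Toy black-box model: predicted "risk" grows with the sum of code IDs.
def toy_model(ehr):
    risk = min(max(sum(sum(visit) for visit in ehr) / 100.0, 0.0), 1.0)
    return [1.0 - risk, risk]                     # [P(low risk), P(high risk)]

record = [[10, 20], [30, 5]]                      # two visits, two codes each
code, drop = select_substitute(toy_model, record, 0, 1, [1, 40, 60], target=0)
```

In the full attack, the position `(visit_idx, code_idx)` would itself be chosen by the RL policy, initialized from visit contribution scores and code saliency scores as the abstract describes; here it is fixed for clarity.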


