A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks

12/17/2020
by Qingsong Yao, et al.

Deep neural networks (DNNs) for medical images are extremely vulnerable to adversarial examples (AEs), which raises security concerns for clinical decision making. Fortunately, medical AEs are also easy to detect in hierarchical feature space, as our study shows. To better understand this phenomenon, we thoroughly investigate the intrinsic characteristics of medical AEs in feature space, providing both empirical evidence and theoretical explanations for the question: why are medical adversarial attacks easy to detect? We first perform a stress test to reveal the vulnerability of deep representations of medical images, in contrast to natural images. We then theoretically prove that typical adversarial attacks on a binary disease-diagnosis network manipulate the prediction by continuously optimizing the vulnerable representations in a fixed direction, resulting in outlier features that make medical AEs easy to detect. However, this vulnerability can also be exploited to hide AEs in feature space. We propose a novel hierarchical feature constraint (HFC) as an add-on to existing adversarial attacks, which encourages hiding the adversarial representation within the normal feature distribution. We evaluate the proposed method on two public medical image datasets, namely Fundoscopy and Chest X-Ray. Experimental results demonstrate the superiority of our adversarial attack method: it bypasses an array of state-of-the-art adversarial detectors more easily than competing attack methods, supporting that the great vulnerability of medical features gives an attacker more room to manipulate adversarial representations.
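The core idea described above, constraining intermediate adversarial features to stay close to the distribution of clean features while still optimizing the attack objective, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, assuming each monitored layer's clean features are summarized by per-channel mean and variance and the constraint is a simple Mahalanobis-style penalty; names such as extract_features, hfc_penalty, clean_stats, and hfc_weight are illustrative and are not the authors' implementation, whose exact HFC formulation may differ.

import torch
import torch.nn.functional as F


def extract_features(model, x, feature_layers):
    # Collect intermediate activations of the named layers via forward hooks.
    feats, handles = {}, []
    for name, module in model.named_modules():
        if name in feature_layers:
            handles.append(module.register_forward_hook(
                lambda m, inp, out, key=name: feats.__setitem__(key, out)))
    logits = model(x)
    for h in handles:
        h.remove()
    return logits, feats


def hfc_penalty(feats, clean_stats):
    # Penalize deviation of current features from per-layer clean statistics
    # (per-channel mean/variance, estimated on benign images beforehand).
    loss = 0.0
    for name, f in feats.items():
        mu, var = clean_stats[name]
        if f.dim() == 4:                      # (N, C, H, W) -> (N, C)
            f = f.mean(dim=(2, 3))
        loss = loss + (((f - mu) ** 2) / (var + 1e-6)).mean()
    return loss


def pgd_with_hfc(model, x, y_target, clean_stats, feature_layers,
                 eps=8 / 255, alpha=1 / 255, steps=40, hfc_weight=1.0):
    # Targeted PGD that additionally pulls adversarial features toward the
    # clean feature distribution, so the AE is harder to flag as an outlier.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits, feats = extract_features(model, x_adv, feature_layers)
        loss = F.cross_entropy(logits, y_target) \
               + hfc_weight * hfc_penalty(feats, clean_stats)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()        # descend both terms
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project to eps-ball
            x_adv = x_adv.clamp(0, 1)                  # keep a valid image
        x_adv = x_adv.detach()
    return x_adv

In such a sketch, clean_stats would be estimated from a held-out set of benign images, and hfc_weight trades off attack strength against how tightly the adversarial features are pulled back into the normal feature distribution.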


Related research

07/24/2019
Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems
Deep neural networks (DNNs) have become popular for medical image analys...

08/04/2022
Self-Ensembling Vision Transformer (SEViT) for Robust Medical Image Classification
Vision Transformers (ViT) are competing to replace Convolutional Neural ...

04/19/2021
Removing Adversarial Noise in Class Activation Feature Space
Deep neural networks (DNNs) are vulnerable to adversarial noise. Preproc...

03/09/2021
Stabilized Medical Image Attacks
Convolutional Neural Networks (CNNs) have advanced existing medical syst...

11/22/2021
Medical Aegis: Robust adversarial protectors for medical images
Deep neural network based medical image systems are vulnerable to advers...

11/21/2020
Spatially Correlated Patterns in Adversarial Images
Adversarial attacks have proved to be the major impediment in the progre...

06/11/2021
Adversarial Robustness through the Lens of Causality
The adversarial vulnerability of deep neural networks has attracted sign...
