Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems

07/24/2019
by Xingjun Ma, et al.

Deep neural networks (DNNs) have become popular for medical image analysis tasks like cancer diagnosis and lesion detection. However, a recent study demonstrates that medical deep learning systems can be compromised by carefully-engineered adversarial examples/attacks, i.e., small imperceptible perturbations can fool DNNs into making incorrect predictions. This raises safety concerns about the deployment of deep learning systems in clinical settings. In this paper, we provide a deeper understanding of adversarial examples in the context of medical images. We find that medical DNN models can be more vulnerable to adversarial attacks than models for natural images, from three different viewpoints: 1) medical image DNNs that have only a few classes are generally easier to attack; 2) the complex biological textures of medical images may lead to more vulnerable regions; and most importantly, 3) state-of-the-art deep networks designed for large-scale natural image processing can be overparameterized for medical imaging tasks, resulting in high vulnerability to adversarial attacks. Surprisingly, we also find that medical adversarial attacks can be easily detected, i.e., simple detectors can achieve over 98% detection AUC against state-of-the-art attacks, due to their fundamental feature difference from normal examples. We show this is because adversarial attacks tend to attack a widespread area outside the pathological regions, which results in deep features that are fundamentally different and easily separable from normal features. We believe these findings may be a useful basis for approaching the design of secure medical deep learning systems.
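To make the "small imperceptible perturbations" concrete, here is a minimal sketch of a projected gradient descent (PGD) attack, one of the standard attacks in this literature, written in PyTorch. The classifier `model`, the inputs, and the budget values (eps, alpha, steps) are illustrative assumptions, not the paper's exact experimental setup.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Craft L-infinity bounded adversarial examples with PGD.

    eps:   maximum per-pixel perturbation (imperceptibility budget)
    alpha: step size per iteration
    Assumes `model` is a classifier in eval mode and `images` are
    scaled to [0, 1].
    """
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss along the gradient sign, then project the
        # result back into the eps-ball around the clean images.
        adv = adv.detach() + alpha * grad.sign()
        adv = images + torch.clamp(adv - images, -eps, eps)
        adv = torch.clamp(adv, 0.0, 1.0)  # keep a valid pixel range
    return adv.detach()
```

The detection finding can be illustrated the same way. Because (per the abstract) adversarial deep features are easily separable from normal ones, even a linear classifier over penultimate-layer activations can serve as a detector. The sketch below is a hypothetical protocol, not the paper's exact detector: it fits a logistic regression on clean vs. adversarial features and reports ROC AUC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def detection_auc(clean_feats, adv_feats, seed=0):
    """Fit a simple detector on deep features and report ROC AUC.

    clean_feats, adv_feats: (N, D) arrays of penultimate-layer
    activations for normal and adversarial examples (assumed to be
    extracted beforehand from the attacked model).
    """
    X = np.vstack([clean_feats, adv_feats])
    y = np.concatenate([np.zeros(len(clean_feats)),
                        np.ones(len(adv_feats))])
    # Hold out half the data so the detector is scored on unseen inputs.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    split = len(X) // 2
    train, test = idx[:split], idx[split:]
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    scores = clf.predict_proba(X[test])[:, 1]
    return roc_auc_score(y[test], scores)
```

A linear model is deliberately chosen here: if such a simple detector reaches high AUC, as the paper reports, the clean and adversarial features must already be close to linearly separable in the network's feature space.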

Related research

04/15/2018: Adversarial Attacks Against Medical Deep Learning Systems
The discovery of adversarial examples has raised concerns about the prac...

12/17/2020: A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks
Deep neural networks (DNNs) for medical images are extremely vulnerable ...

06/24/2020: Defending against adversarial attacks on medical imaging AI system, classification or detection?
Medical imaging AI systems such as disease classification and segmentati...

03/09/2021: Stabilized Medical Image Attacks
Convolutional Neural Networks (CNNs) have advanced existing medical syst...

06/25/2018: Exploring Adversarial Examples: Patterns of One-Pixel Attacks
Failure cases of black-box deep learning, e.g. adversarial examples, mig...

01/21/2022: The Security of Deep Learning Defences for Medical Imaging
Deep learning has shown great promise in the domain of medical image ana...

04/05/2021: Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models
Advances in deep neural networks (DNNs) have shown tremendous promise in...
