Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation

07/30/2019
by   Utku Ozbulak, et al.

Deep learning models, which are increasingly being used in the field of medical image analysis, come with a major security risk: they are vulnerable to adversarial examples. Adversarial examples are carefully crafted samples that force machine learning models to make mistakes at test time. These malicious samples have been shown to be highly effective at misleading classification models. However, research on the influence of adversarial examples on segmentation is significantly lacking. Given that a large portion of medical imaging problems are effectively segmentation problems, we analyze the impact of adversarial examples on deep learning-based image segmentation models. Specifically, we expose the vulnerability of these models to adversarial examples by proposing the Adaptive Segmentation Mask Attack (ASMA). This novel algorithm makes it possible to craft targeted adversarial examples that come with (1) a high intersection-over-union rate between the target adversarial mask and the resulting prediction and (2) a perturbation that is, for the most part, invisible to the naked eye. We lay out experimental and visual evidence with results obtained on the ISIC skin lesion segmentation challenge and the problem of glaucoma optic disc segmentation. An implementation of this algorithm and additional examples can be found at https://github.com/utkuozbulak/adaptive-segmentation-mask-attack.
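The abstract does not spell out how ASMA works. For intuition only, below is a minimal sketch of a generic targeted segmentation attack (gradient descent on the cross-entropy loss toward a chosen target mask), together with the intersection-over-union metric used to judge how closely the prediction matches the target. This is not the paper's ASMA algorithm, which additionally adapts the optimization target and minimizes the perturbation (see the linked repository); the model interface, tensor shapes, and hyperparameters (steps, step_size) are assumptions for illustration.

```python
# Hypothetical sketch of a targeted attack on a segmentation model.
# NOT the paper's ASMA algorithm; it only illustrates optimizing an image
# perturbation so the predicted mask moves toward a chosen target mask.
import torch
import torch.nn.functional as F

def targeted_segmentation_attack(model, image, target_mask, steps=100, step_size=1e-3):
    """Perturb `image` so that `model`'s predicted mask approaches `target_mask`.

    image:       (1, C, H, W) float tensor with values in [0, 1]  (assumed)
    target_mask: (1, H, W) long tensor of target class labels      (assumed)
    """
    adv = image.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = model(adv)                        # assumed shape (1, num_classes, H, W)
        loss = F.cross_entropy(logits, target_mask)
        loss.backward()
        with torch.no_grad():
            adv -= step_size * adv.grad.sign()     # descend toward the target mask
            adv.clamp_(0.0, 1.0)                   # keep a valid image
        adv.grad.zero_()
    return adv.detach()

def iou(pred_mask, target_mask):
    """Intersection-over-union between two boolean masks of the same shape."""
    inter = (pred_mask & target_mask).sum().item()
    union = (pred_mask | target_mask).sum().item()
    return inter / union if union > 0 else 1.0
```

The sign-of-gradient update above is a plain FGSM/PGD-style step; ASMA's adaptive handling of the target mask and its emphasis on keeping the perturbation imperceptible are deliberately omitted here and can be found in the repository linked above.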

Related research

03/23/2018 · Generalizability vs. Robustness: Adversarial Examples for Medical Imaging
In this paper, for the first time, we propose an evaluation method for d...

08/03/2022 · Multiclass ASMA vs Targeted PGD Attack in Image Segmentation
Deep learning networks have demonstrated high performance in a large var...

08/01/2017 · Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning
Recent studies have shown that attackers can force deep learning models ...

02/01/2020 · AdvJND: Generating Adversarial Examples with Just Noticeable Difference
Compared with traditional machine learning models, deep neural networks ...

08/14/2020 · Efficiently Constructing Adversarial Examples by Feature Watermarking
With the increasing attentions of deep learning models, attacks are also...

04/23/2020 · Adversarial Machine Learning in Network Intrusion Detection Systems
Adversarial examples are inputs to a machine learning system intentional...

12/16/2021 · Towards Robust Neural Image Compression: Adversarial Attack and Model Finetuning
Deep neural network based image compression has been extensively studied...