Influence of Control Parameters and the Size of Biomedical Image Datasets on the Success of Adversarial Attacks

04/15/2019
by Vassili Kovalev, et al.

In this paper, we study how the success rate of adversarial attacks on deep neural networks depends on the type of biomedical image, the attack control parameters, and the size of the image dataset. With this work, we aim to contribute to the accumulation of experimental results on adversarial attacks for the community working with biomedical images. White-box Projected Gradient Descent (PGD) attacks were examined on 8 classification tasks and 13 image datasets containing a total of 605,080 chest X-ray and 317,000 histology images of malignant tumors. We conclude that: (1) increasing the amplitude of the perturbation used to generate malicious adversarial images increases the fraction of successful attacks for the majority of image types examined in this study; (2) histology images tend to be less sensitive to increases in the amplitude of adversarial perturbations; (3) the percentage of successful attacks grows with the number of iterations of the adversarial perturbation generation algorithm and then stabilizes asymptotically; (4) the success rate of attacks drops dramatically when the original confidence of the predicted image class exceeds 0.95; (5) the expected dependence of the percentage of successful attacks on the size of the image training set was not confirmed.
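The control parameters studied here are the perturbation amplitude and the number of attack iterations. The abstract does not include code, so the following is only a minimal sketch of an untargeted L-infinity PGD attack in PyTorch, assuming a classification model with inputs normalized to [0, 1]; the names pgd_attack, epsilon, alpha, and n_iter are illustrative and are not taken from the paper.

import torch
import torch.nn.functional as F

def pgd_attack(model, image, label, epsilon=0.01, alpha=0.002, n_iter=40):
    """Untargeted L-infinity PGD attack (illustrative sketch).

    epsilon -- perturbation amplitude (L-inf bound)
    alpha   -- step size per iteration
    n_iter  -- number of attack iterations
    """
    adv = image.clone().detach()
    for _ in range(n_iter):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            # Take a signed gradient ascent step on the loss,
            # then project back into the epsilon-ball around the original image
            adv = adv + alpha * grad.sign()
            adv = image + torch.clamp(adv - image, -epsilon, epsilon)
            adv = torch.clamp(adv, 0.0, 1.0)  # keep pixels in a valid range
    return adv.detach()

In such a setup, epsilon corresponds to the perturbation amplitude and n_iter to the iteration count examined in the paper; sweeping these parameters and recording the fraction of flipped predictions yields the kind of dependence the abstract describes.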

