Simple black-box universal adversarial attacks on medical image classification based on deep neural networks

08/11/2021
by Kazuki Koga, et al.

Universal adversarial attacks, which hinder most deep neural network (DNN) tasks using only a single small perturbation called a universal adversarial perturbation (UAP), are a realistic security threat to the practical application of DNNs. In particular, such attacks cause serious problems in medical imaging. Given that computer-based systems are generally operated under a black-box condition, in which only queries on inputs are allowed and only outputs are accessible, the impact of UAPs appears limited, because the widely used algorithms for generating UAPs require a white-box condition, in which adversaries can access the model weights and loss gradients. Nevertheless, we demonstrate that UAPs are easily generatable using a relatively small dataset under black-box conditions. In particular, we propose a method for generating UAPs using a simple hill-climbing search based only on DNN outputs, and we demonstrate the validity of the proposed method on representative DNN-based medical image classification tasks. Black-box UAPs can be used to conduct both non-targeted and targeted attacks. Overall, the black-box UAPs showed high attack success rates (40% to 90%), although some attacks had relatively low success rates because the method utilizes only limited information to generate UAPs. The vulnerability to black-box UAPs was observed across several model architectures. These results indicate that adversaries can generate UAPs through a simple procedure under the black-box condition to foil or control DNN-based medical image diagnoses, and that UAPs are a more realistic security threat.
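The abstract describes the method only at a high level: a hill-climbing search over a single universal perturbation, scored purely by the target model's outputs. The following is a minimal sketch of that idea, not the authors' implementation. Here, query_model is a hypothetical stand-in for the target classifier's prediction API, and the proposal move (perturbing a random subset of pixels under an L-infinity bound eps) is one plausible choice of hill-climbing update.

    import numpy as np

    def query_model(images: np.ndarray) -> np.ndarray:
        # Hypothetical stand-in for the black-box target model: returns
        # predicted labels for a batch of images. In a real attack this
        # would wrap queries to the deployed classifier's API.
        raise NotImplementedError("wrap the target classifier's prediction API here")

    def fooling_rate(images, clean_labels, uap):
        """Fraction of images whose predicted label changes under the UAP."""
        adv_pred = query_model(np.clip(images + uap, 0.0, 1.0))
        return np.mean(adv_pred != clean_labels)

    def hill_climb_uap(images, eps=0.05, steps=1000, patch_frac=0.1, seed=None):
        """Gradient-free search for a universal perturbation.

        images     : (N, H, W, C) float array scaled to [0, 1]
        eps        : L-infinity bound on the perturbation
        steps      : number of candidate proposals (each costs one query batch)
        patch_frac : fraction of pixels modified per proposal
        """
        rng = np.random.default_rng(seed)
        clean_labels = query_model(images)                 # labels on clean inputs
        uap = rng.uniform(-eps, eps, size=images.shape[1:])
        best = fooling_rate(images, clean_labels, uap)
        n_pix = int(patch_frac * uap.size)
        for _ in range(steps):
            # Propose a random local change and keep it only if it improves
            # the objective (simple hill climbing, no gradients needed).
            candidate = uap.copy().ravel()
            idx = rng.choice(candidate.size, size=n_pix, replace=False)
            candidate[idx] += rng.uniform(-eps, eps, size=n_pix)
            candidate = np.clip(candidate, -eps, eps).reshape(uap.shape)
            score = fooling_rate(images, clean_labels, candidate)
            if score > best:
                uap, best = candidate, score
        return uap, best

For a targeted attack, the objective would instead be the fraction of images classified as the target class, e.g. np.mean(adv_pred == target_label). Note that each proposal costs one batch of model queries, so the query budget grows linearly with steps; the abstract's central claim is that even this kind of naive, gradient-free search, run on a relatively small dataset, can find effective UAPs.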


