On the Matrix-Free Generation of Adversarial Perturbations for Black-Box Attacks

02/18/2020
by Hisaichi Shibata, et al.

Adversarial perturbations superimposed on inputs pose a realistic threat to deep neural networks (DNNs). In this paper, we propose a practical method for generating such adversarial perturbations in the black-box setting, where the attacker needs access only to the input-output relationship of the network: the perturbation is generated without invoking inner functions or accessing the inner states of the DNN. Unlike earlier studies, the algorithm presented here requires far fewer query trials. To demonstrate the effectiveness of the extracted adversarial perturbation, we experiment with a DNN for semantic segmentation. The results show that the network is deceived more easily by the generated perturbation than by uniformly distributed random noise of the same magnitude.
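To make the black-box setting concrete, here is a minimal sketch of a query-only attack loop: the attacker observes outputs of a model it cannot open, and greedily refines a bounded perturbation by random sign flips. This is a generic random-search baseline under assumed toy definitions (`query`, the linear surrogate `W`), not the matrix-free algorithm the paper proposes.

```python
import numpy as np

# Hypothetical black-box model: the attacker may only call query(x),
# never inspect weights or gradients. A linear map stands in for the DNN.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))

def query(x):
    return W @ x  # attacker observes outputs only

def random_search_attack(x, eps=0.1, n_queries=200):
    """Greedy random search: keep a coordinate sign flip whenever it
    increases the output deviation from the clean prediction.
    Each candidate costs exactly one query."""
    y0 = query(x)
    delta = eps * np.sign(rng.normal(size=x.shape))  # random start, |delta_i| = eps
    best = np.linalg.norm(query(x + delta) - y0)
    for _ in range(n_queries):
        i = rng.integers(x.size)
        cand = delta.copy()
        cand[i] = -cand[i]                    # flip one coordinate's sign
        score = np.linalg.norm(query(x + cand) - y0)
        if score > best:                      # keep only improving flips
            delta, best = cand, score
    return delta, best

x = rng.normal(size=64)
delta, best = random_search_attack(x)
# Baseline with the same per-coordinate magnitude, as in the paper's comparison
noise = 0.1 * np.sign(rng.normal(size=64))
```

The query budget (`n_queries`) is the quantity the paper's method aims to reduce; a full gradient-estimation attack would need on the order of one query per input dimension per step.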


Related research

- Data-Free Adversarial Perturbations for Practical Black-Box Attack (03/03/2020)
  Neural networks are vulnerable to adversarial examples, which are malici...

- Simple black-box universal adversarial attacks on medical image classification based on deep neural networks (08/11/2021)
  Universal adversarial attacks, which hinder most deep neural network (DN...

- Exploring Transferable and Robust Adversarial Perturbation Generation from the Perspective of Network Hierarchy (08/16/2021)
  The transferability and robustness of adversarial examples are two pract...

- Dispersed Pixel Perturbation-based Imperceptible Backdoor Trigger for Image Classifier Models (08/19/2022)
  Typical deep neural network (DNN) backdoor attacks are based on triggers...

- Mitigating Black-Box Adversarial Attacks via Output Noise Perturbation (09/30/2021)
  In black-box adversarial attacks, adversaries query the deep neural netw...

- A Model-Based Derivative-Free Approach to Black-Box Adversarial Examples: BOBYQA (02/24/2020)
  We demonstrate that model-based derivative free optimisation algorithms ...

- An Empirical Study of Derivative-Free-Optimization Algorithms for Targeted Black-Box Attacks in Deep Neural Networks (12/03/2020)
  We perform a comprehensive study on the performance of derivative free o...
