GreedyFool: An Imperceptible Black-box Adversarial Example Attack against Neural Networks

10/14/2020
by Hui Liu, et al.

Deep neural networks (DNNs) are inherently vulnerable to well-designed input samples called adversarial examples: an adversary can easily fool DNNs by adding slight perturbations to the input. In this paper, we propose a novel black-box adversarial example attack named GreedyFool, which synthesizes adversarial examples based on differential evolution and greedy approximation. Differential evolution is used to evaluate the effect of perturbed pixels on the confidence of the DNN-based classifier, while greedy approximation is an approximate optimization algorithm that automatically generates the adversarial perturbations. Existing works synthesize adversarial examples by penalizing perturbations with simple metrics that give insufficient consideration to the human visual system (HVS), resulting in noticeable artifacts. To achieve sufficient imperceptibility, we investigate the HVS in depth and design an integrated metric that accounts for just noticeable distortion (JND), the Weber-Fechner law, texture masking, and channel modulation, which proves to be a better measure of the perceptual distance between benign examples and adversarial ones. Experimental results demonstrate that GreedyFool has several remarkable properties: it is black-box, achieves a 100% success rate, and synthesizes more imperceptible adversarial examples than state-of-the-art pixel-wise methods.
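To make the abstract's two-stage idea concrete, the sketch below combines a differential-evolution search with a greedy acceptance loop: DE looks for the single-pixel change that most reduces the classifier's confidence in the true label, and the best candidate is greedily committed before the next round. This is a minimal illustration built on SciPy's general-purpose `differential_evolution`, not the paper's own algorithm; the `predict_fn` interface (a batch of images in, softmax probabilities out) is a hypothetical stand-in for whatever query access the attacker has.

```python
import numpy as np
from scipy.optimize import differential_evolution

def greedy_de_attack(image, predict_fn, true_label, max_pixels=20, de_iters=20):
    """Query-only sketch: repeatedly use differential evolution to find the
    single-pixel change that most reduces the classifier's confidence in the
    true label, greedily keep it, and stop once the prediction flips.
    `predict_fn` maps a (1, H, W, 3) batch to softmax probabilities
    (a hypothetical interface, not the paper's API).
    """
    h, w, _ = image.shape
    adv = image.astype(np.float64).copy()
    # Each DE candidate encodes one pixel change: (row, col, r, g, b).
    bounds = [(0, h - 1), (0, w - 1), (0, 255), (0, 255), (0, 255)]

    def perturb(base, px):
        out = base.copy()
        out[int(px[0]), int(px[1])] = px[2:5]
        return out

    for _ in range(max_pixels):
        # Fitness = confidence in the true label after the candidate change;
        # DE minimizes it, so lower values mean a stronger attack.
        fitness = lambda px: predict_fn(perturb(adv, px)[None])[0][true_label]
        result = differential_evolution(fitness, bounds, maxiter=de_iters,
                                        popsize=10, seed=0, polish=False)
        adv = perturb(adv, result.x)          # greedily commit the best pixel
        if np.argmax(predict_fn(adv[None])[0]) != true_label:
            return adv                        # success: label flipped
    return adv                                # pixel budget exhausted
```

Setting `polish=False` keeps the search purely black-box: the default L-BFGS-B polishing step would spend extra model queries on finite-difference gradients that a query-limited attacker may not want to pay for.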
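The abstract's HVS-aware metric can likewise be sketched as a weighted pixel distance in which perturbations are discounted wherever the eye is less sensitive. The terms below are illustrative stand-ins, assuming a fixed JND threshold of 2 intensity levels, a 3x3 local-variance proxy for texture masking, and Rec. 601 luma weights as a placeholder for channel modulation; the paper's exact formulations are not reproduced here.

```python
import numpy as np

def perceptual_distance(benign, adv, channel_weights=(0.299, 0.587, 0.114)):
    """Illustrative HVS-aware distance: raw pixel differences are discounted
    where the eye is less sensitive. Each term is a crude stand-in for the
    paper's JND / Weber-Fechner / texture-masking / channel-modulation
    models, not a reproduction of them.
    """
    benign = benign.astype(np.float64)
    diff = np.abs(adv.astype(np.float64) - benign)

    # JND-style threshold: changes below ~2 intensity levels are treated
    # as invisible and contribute nothing (assumed constant threshold).
    visible = np.maximum(diff - 2.0, 0.0)

    # Weber-Fechner law: perceived change scales with the ratio of the
    # change to the background intensity, so normalize by the background.
    weber = visible / (benign + 1.0)

    # Texture masking: a 3x3 local-variance estimate; perturbations in
    # highly textured regions are down-weighted.
    pad = np.pad(benign, ((1, 1), (1, 1), (0, 0)), mode='edge')
    shifts = [pad[1 + dx:1 + dx + benign.shape[0],
                  1 + dy:1 + dy + benign.shape[1]]
              for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    local_var = np.stack(shifts).var(axis=0)
    masking = 1.0 / (1.0 + local_var)

    # Channel modulation: the eye is most sensitive to green and least to
    # blue; Rec. 601 luma weights are used purely as a placeholder.
    cw = np.asarray(channel_weights).reshape(1, 1, 3)

    return float((weber * masking * cw).sum())
```

A metric of this shape can replace a plain L0 or L2 penalty when ranking candidate perturbations, steering the greedy search toward pixels whose changes the HVS is least likely to notice.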


Related research

- 01/08/2018: Generating adversarial examples with adversarial networks
- 04/14/2018: On the Limitation of MagNet Defense against L_1-based Adversarial Examples
- 12/05/2019: Region-Wise Attack: On Efficient Generation of Robust Physical Adversarial Examples
- 05/07/2019: Representation of White- and Black-Box Adversarial Examples in Deep Neural Networks and Humans: A Functional Magnetic Resonance Imaging Study
- 11/15/2019: Learning To Characterize Adversarial Subspaces
- 02/09/2019: When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks
- 04/19/2018: Attacking Convolutional Neural Network using Differential Evolution
