AdvFoolGen: Creating Persistent Troubles for Deep Classifiers

07/20/2020
by Yuzhen Ding, et al.

Research has shown that deep neural networks are vulnerable to malicious attacks, in which adversarial images trick a network into misclassification even though human observers would assign those images entirely different labels. To make deep networks more robust to such attacks, many defense mechanisms have been proposed in the literature, some of which are quite effective at guarding against typical attacks. In this paper, we present a new black-box attack termed AdvFoolGen, which generates attacking images from the same feature space as that of natural images, so that it keeps baffling the network even when state-of-the-art defense mechanisms are applied. We systematically evaluate our model against well-established attack algorithms. Through experiments, we demonstrate the effectiveness and robustness of our attack in the face of state-of-the-art defense techniques, and we unveil the potential reasons for its effectiveness through principled analysis. As such, AdvFoolGen contributes to understanding the vulnerability of deep networks from a new perspective and may, in turn, help in developing and evaluating new defense mechanisms.
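The abstract does not specify AdvFoolGen's architecture, but its core idea, generating attack images that lie in the same feature space as natural images, can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's actual method: the generator design, the surrogate classifier standing in for the black-box target, the entropy-based fooling objective, and the mean-feature matching loss are all hypothetical choices made only to make the idea concrete.

```python
# Hypothetical sketch of a generative fooling attack in the spirit of
# AdvFoolGen. The real architecture and losses are not given in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FoolGenerator(nn.Module):
    """Maps latent noise to image-shaped outputs (illustrative, not the paper's design)."""

    def __init__(self, latent_dim=128, channels=3, size=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, channels * size * size), nn.Tanh(),
        )
        self.shape = (channels, size, size)

    def forward(self, z):
        return self.net(z).view(-1, *self.shape)


def attack_step(gen, surrogate, feat_extractor, natural_batch, opt, latent_dim=128):
    """One optimization step: generated images should (a) match the natural
    feature space and (b) confuse the (surrogate) classifier."""
    z = torch.randn(natural_batch.size(0), latent_dim)
    fake = gen(z)
    # (a) Feature-space matching: pull generated images toward the mean
    # feature statistics of natural images (a simple illustrative proxy).
    feat_loss = F.mse_loss(
        feat_extractor(fake).mean(0), feat_extractor(natural_batch).mean(0)
    )
    # (b) Fooling objective: maximize the surrogate's predictive entropy,
    # i.e., push it toward maximally uncertain outputs (illustrative choice).
    probs = surrogate(fake).softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(1).mean()
    loss = feat_loss - entropy  # minimize feature mismatch, maximize confusion
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; these are not the paper's models.
    surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    feat_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
    gen = FoolGenerator()
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    natural = torch.rand(8, 3, 32, 32) * 2 - 1  # pretend natural images in [-1, 1]
    for step in range(5):
        print(attack_step(gen, surrogate, feat_extractor, natural, opt))
```

A surrogate model is a common stand-in when the target classifier is black-box; the paper's actual training objective and the way it accesses the target may well differ from this sketch.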


