CAG: A Real-time Low-cost Enhanced-robustness High-transferability Content-aware Adversarial Attack Generator

12/16/2019
by Huy Phan, et al.

Deep neural networks (DNNs) are vulnerable to adversarial attacks despite their tremendous success in many AI fields. An adversarial attack causes an intended misclassification by adding imperceptible perturbations to legitimate inputs. Researchers have developed numerous adversarial attack methods, but from the perspective of practical deployment these methods suffer from several drawbacks, such as long attack-generation time, high memory cost, insufficient robustness, and low transferability. We propose a Content-aware Adversarial Attack Generator (CAG) to achieve real-time, low-cost, enhanced-robustness, and high-transferability adversarial attacks. First, as a generative model-based attack, CAG shows a significant speedup (at least 500 times) in generating adversarial examples compared to state-of-the-art attacks such as PGD and C&W. Second, CAG needs only a single generative model to perform a targeted attack on any target class: because CAG encodes the label information in a trainable embedding layer, it differs from prior generative model-based adversarial attacks that use n different copies of a generative model for n different target classes, and it therefore significantly reduces the memory cost of generating adversarial examples. Third, CAG generates adversarial perturbations that focus on the critical areas of the input by integrating class activation map (CAM) information into the training process, which improves the robustness of CAG's attacks against state-of-the-art adversarial defenses. Finally, CAG exhibits high transferability across different DNN classifiers in the black-box attack scenario by introducing random dropout into the perturbation-generation process. Extensive experiments on different datasets and DNN models verify the real-time, low-cost, enhanced-robustness, and high-transferability benefits of CAG.
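The abstract packs three mechanisms into one generator: a trainable label embedding so a single model covers every target class, CAM masking so the perturbation concentrates on content-critical regions, and random dropout during generation for black-box transferability. Below is a minimal PyTorch sketch of how these pieces could fit together; the architecture, layer sizes, and names (CAGGenerator, content_aware_perturbation, the 8/255 budget) are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the CAG idea described in the abstract;
# class and function names are assumptions, not the paper's code.
import torch
import torch.nn as nn

class CAGGenerator(nn.Module):
    """A single conditional generator for all target classes.

    Instead of training n generators for n target classes, the target
    label is mapped through a trainable embedding and fused with the
    image features, as the abstract describes.
    """
    def __init__(self, num_classes, embed_dim=64, channels=3):
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, embed_dim)  # trainable label embedding
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Random dropout on intermediate features; the abstract credits
        # dropout during perturbation generation for black-box transferability.
        self.dropout = nn.Dropout2d(p=0.3)
        self.decoder = nn.Sequential(
            nn.Conv2d(64 + embed_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, target):
        feat = self.dropout(self.encoder(x))
        # Broadcast the label embedding over the spatial dimensions
        # and concatenate it with the image features.
        emb = self.label_embed(target)[:, :, None, None]
        emb = emb.expand(-1, -1, feat.size(2), feat.size(3))
        return self.decoder(torch.cat([feat, emb], dim=1))

def content_aware_perturbation(delta, cam, epsilon=8 / 255):
    """Mask the raw perturbation with a class activation map (CAM) so it
    concentrates on the critical regions of the input, then clip to an
    L-infinity budget. `cam` is assumed to be normalized to [0, 1]."""
    return torch.clamp(delta * cam, -epsilon, epsilon)
```

Under these assumptions, an adversarial example for a batch x with target labels t and precomputed CAMs would be x_adv = torch.clamp(x + content_aware_perturbation(gen(x, t), cam), 0, 1), and training would push a frozen classifier's prediction on x_adv toward t. Since the generator runs in a single forward pass, this is also where the claimed speedup over iterative attacks like PGD and C&W comes from.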

Related research

04/23/2023
StyLess: Boosting the Transferability of Adversarial Examples
Adversarial attacks can mislead deep neural networks (DNNs) by adding im...

07/05/2021
Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks
Transfer-based adversarial attacks can effectively evaluate model robust...

06/14/2023
Reliable Evaluation of Adversarial Transferability
Adversarial examples (AEs) with small adversarial perturbations can misl...

08/17/2023
Dynamic Neural Network is All You Need: Understanding the Robustness of Dynamic Mechanisms in Neural Networks
Deep Neural Networks (DNNs) have been used to solve different day-to-day...

11/19/2021
Towards Efficiently Evaluating the Robustness of Deep Neural Networks in IoT Systems: A GAN-based Method
Intelligent Internet of Things (IoT) systems based on deep neural networ...

04/27/2023
ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger
Textual backdoor attacks pose a practical threat to existing systems, as...

12/21/2020
On Success and Simplicity: A Second Look at Transferable Targeted Attacks
There is broad consensus among researchers studying adversarial examples...
