Adversarial Attack via Dual-Stage Network Erosion

01/01/2022
by   Yexin Duan, et al.

Deep neural networks are vulnerable to adversarial examples, which fool deep models by adding subtle perturbations. Although existing attacks have achieved promising results, there is still a long way to go in generating transferable adversarial examples under the black-box setting. To this end, this paper proposes to improve the transferability of adversarial examples by applying dual-stage feature-level perturbations to an existing model, implicitly creating a set of diverse models; these models are then fused through a longitudinal ensemble during the attack iterations. The proposed method is termed Dual-Stage Network Erosion (DSNE). We conduct comprehensive experiments on both non-residual and residual networks, and obtain more transferable adversarial examples at a computational cost similar to that of the state-of-the-art method. In particular, for residual networks, the transferability of adversarial examples can be significantly improved by biasing the residual-block information toward the skip connections. Our work provides new insights into the architectural vulnerability of neural networks and presents new challenges for neural network robustness.

Related research

Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets (02/14/2020)
Learning Transferable Adversarial Examples via Ghost Networks (12/09/2018)
Improving Adversarial Transferability with Scheduled Step Size and Dual Example (01/30/2023)
SA: Sliding Attack for Synthetic Speech Detection with Resistance to Clipping and Self-splicing (08/27/2022)
Enhanced Security against Adversarial Examples Using a Random Ensemble of Encrypted Vision Transformer Models (07/26/2023)
A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning (10/15/2020)
Detecting Adversarial Examples through Nonlinear Dimensionality Reduction (04/30/2019)
