Improving Adversarial Transferability via Intermediate-level Perturbation Decay

04/26/2023
by Qizhang Li, et al.

Intermediate-level attacks, which attempt to perturb feature representations drastically along an adversarial direction, have shown favorable performance in crafting transferable adversarial examples. Existing methods in this category are normally formulated in two separate stages: a directional guide is determined first, and the scalar projection of the intermediate-level perturbation onto that guide is then enlarged. The obtained perturbation inevitably deviates from the guide in the feature space, and this paper reveals that such deviation may lead to sub-optimal attacks. To address this issue, we develop a novel intermediate-level method that crafts adversarial examples within a single stage of optimization. In particular, the proposed method, named intermediate-level perturbation decay (ILPD), encourages the intermediate-level perturbation to lie in an effective adversarial direction and to possess a large magnitude simultaneously. In-depth discussion verifies the effectiveness of our method. Experimental results show that it outperforms state-of-the-art methods by large margins in attacking various victim models on ImageNet (+10.07% on average) and CIFAR-10 (+3.88% on average). Code is available at https://github.com/qizhangli/ILPD-attack.
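As background for the two-stage formulation the abstract critiques: existing intermediate-level attacks first fix a directional guide d in feature space (e.g., the feature difference produced by a preliminary attack) and then maximize the scalar projection of the intermediate-level perturbation onto d. The perturbation's deviation from the guide can be measured by the cosine of the angle between them. A toy NumPy sketch of these two quantities (function names are illustrative, not taken from the paper's code):

```python
import numpy as np

def scalar_projection(delta, guide):
    # Scalar projection of the intermediate-level perturbation
    # delta = h(x_adv) - h(x) onto the directional guide d.
    # This is the quantity a two-stage attack enlarges.
    return float(np.dot(delta, guide) / np.linalg.norm(guide))

def deviation_cosine(delta, guide):
    # Cosine similarity between perturbation and guide;
    # 1.0 means the perturbation follows the guide exactly,
    # smaller values indicate the deviation the paper analyzes.
    return float(np.dot(delta, guide) /
                 (np.linalg.norm(delta) * np.linalg.norm(guide)))

# Toy 2-D feature-space example:
delta = np.array([3.0, 4.0])   # obtained intermediate-level perturbation
guide = np.array([1.0, 0.0])   # directional guide d
print(scalar_projection(delta, guide))  # 3.0 (large projection)
print(deviation_cosine(delta, guide))   # 0.6 (noticeable deviation)
```

The example illustrates the gap ILPD targets: a perturbation can have a large scalar projection onto the guide while still deviating from it substantially in direction, which is why enlarging the projection alone can be sub-optimal.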


Related research

08/20/2020
Yet Another Intermediate-Level Attack
The transferability of adversarial examples across deep neural network (...

07/23/2019
Enhancing Adversarial Example Transferability with an Intermediate Level Attack
Neural networks are vulnerable to adversarial examples, malicious inputs...

03/27/2023
Improving the Transferability of Adversarial Examples via Direction Tuning
In the transfer-based adversarial attacks, adversarial examples are only...

03/21/2022
An Intermediate-level Attack Framework on The Basis of Linear Regression
This paper substantially extends our work published at ECCV, in which an...

08/16/2021
Exploring Transferable and Robust Adversarial Perturbation Generation from the Perspective of Network Hierarchy
The transferability and robustness of adversarial examples are two pract...

10/14/2021
Adversarial examples by perturbing high-level features in intermediate decoder layers
We propose a novel method for creating adversarial examples. Instead of ...

02/24/2022
Improving Robustness of Convolutional Neural Networks Using Element-Wise Activation Scaling
Recent works reveal that re-calibrating the intermediate activation of a...
