Boosting Adversarial Transferability via Fusing Logits of Top-1 Decomposed Feature

05/02/2023
by Juanjuan Weng, et al.

Recent research has shown that Deep Neural Networks (DNNs) are highly vulnerable to adversarial samples, which are also highly transferable and can therefore be used to attack unknown black-box models. To improve transferability, several feature-based attack methods have been proposed that disrupt neuron activations in the middle layers. However, current state-of-the-art feature-based attacks typically incur additional computation to estimate the importance of individual neurons. To address this challenge, we propose a Singular Value Decomposition (SVD)-based feature-level attack. Our approach is inspired by the observation that eigenvectors associated with the larger singular values of the decomposed middle-layer features exhibit superior generalization and attention properties. Specifically, we conduct the attack by retaining the feature associated with the Top-1 singular value to compute a set of output logits, which are then fused with the original logits to optimize adversarial examples. Extensive experimental results verify the effectiveness of the proposed method, which can be easily integrated into various baselines to significantly enhance the transferability of adversarial samples against both normally trained CNNs and advanced defense strategies. The source code of this study is available at https://anonymous.4open.science/r/SVD-SSA-13BF/README.md.
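The core idea described above can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' implementation: the toy model, the choice of "middle layer" (here the whole `features` stack), and the fusion weight `lam` are all assumptions introduced for the example.

```python
# Sketch of the SVD-based feature-level attack idea: reshape a mid-layer
# feature map to a (C, H*W) matrix per sample, keep only its Top-1 singular
# component, recompute logits from that rank-1 feature, and fuse them with
# the original logits before taking a gradient step on the input.
import torch
import torch.nn as nn


class TinyCNN(nn.Module):
    """Toy stand-in for a CNN split into a mid-layer and a classifier head."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(8 * 16, num_classes))

    def forward(self, x):
        return self.head(self.features(x))


def top1_svd_feature(feat: torch.Tensor) -> torch.Tensor:
    """Keep only the Top-1 singular component of each (C, H*W) feature matrix."""
    b, c, h, w = feat.shape
    mat = feat.reshape(b, c, h * w)
    U, S, Vh = torch.linalg.svd(mat, full_matrices=False)
    # Rank-1 reconstruction: sigma_1 * u_1 @ v_1^T, batched over b.
    rank1 = S[:, :1].unsqueeze(-1) * U[:, :, :1] @ Vh[:, :1, :]
    return rank1.reshape(b, c, h, w)


def fused_logits(model: TinyCNN, x: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """Combine original logits with logits from the Top-1 SVD feature."""
    feat = model.features(x)
    logits = model.head(feat)                        # original logits
    logits_svd = model.head(top1_svd_feature(feat))  # Top-1 SVD logits
    return logits + lam * logits_svd


# One FGSM-style ascent step on the fused loss (illustrative; the paper
# integrates the fused logits into existing attack baselines instead).
torch.manual_seed(0)
model = TinyCNN().eval()
x = torch.rand(2, 3, 8, 8, requires_grad=True)
y = torch.tensor([1, 3])
loss = nn.functional.cross_entropy(fused_logits(model, x), y)
loss.backward()
x_adv = (x + 8 / 255 * x.grad.sign()).clamp(0, 1).detach()
print(x_adv.shape)  # torch.Size([2, 3, 8, 8])
```

Because only the first singular triplet is kept, `top1_svd_feature` costs one batched SVD per forward pass and requires no per-neuron importance estimation, which is the efficiency argument the abstract makes.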

Related research

- Improving Adversarial Transferability via Neuron Attribution-Based Attacks (03/31/2022): Deep neural networks (DNNs) are known to be vulnerable to adversarial ex...
- A Singular Value Perspective on Model Robustness (12/07/2020): Convolutional Neural Networks (CNNs) have made significant progress on s...
- Boosting Adversarial Transferability of MLP-Mixer (04/26/2022): The security of models based on new architectures such as MLP-Mixer and ...
- Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples (02/10/2023): The transferability of adversarial examples across deep neural networks ...
- Patch-wise Attack for Fooling Deep Neural Network (07/14/2020): By adding human-imperceptible noise to clean images, the resultant adver...
- Continuous Encryption Functions for Security Over Networks (11/04/2021): This paper presents a study of continuous encryption functions (CEFs) of...
- Transferable Physical Attack against Object Detection with Separable Attention (05/19/2022): Transferable adversarial attack is always in the spotlight since deep le...
