Improving Adversarial Transferability via Neuron Attribution-Based Attacks

03/31/2022
by Jianping Zhang, et al.

Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. It is thus imperative to devise effective attack algorithms that can identify the deficiencies of DNNs beforehand in security-sensitive applications. To efficiently tackle the black-box setting, where the target model's particulars are unknown, feature-level transfer-based attacks perturb the intermediate feature outputs of local surrogate models and then directly use the crafted adversarial samples to attack the target model. Because intermediate features transfer across models, feature-level attacks have shown promise in synthesizing more transferable adversarial samples. However, existing feature-level attacks generally rely on inaccurate estimates of neuron importance, which degrades their transferability. To overcome this pitfall, we propose the Neuron Attribution-based Attack (NAA), which conducts feature-level attacks with more accurate neuron importance estimates. Specifically, we first fully attribute a model's output to each neuron in a middle layer. We then derive an approximation scheme for neuron attribution that greatly reduces the computational overhead. Finally, we weight neurons by their attribution results and launch feature-level attacks. Extensive experiments confirm the superiority of our approach over state-of-the-art benchmarks.
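To make the mechanics concrete, below is a minimal PyTorch sketch of this style of attack. It is not the authors' released implementation; it assumes a zero-image baseline, the true-class logit as the attributed output, equal weighting of positive and negative attributions, and path gradients computed once on the clean input. All names (`capture`, `naa_attack`) and hyperparameters are illustrative.

```python
import torch

def capture(model, layer, x):
    """Forward pass that also returns the activations of `layer`."""
    store = {}
    handle = layer.register_forward_hook(lambda m, i, o: store.update(y=o))
    logits = model(x)
    handle.remove()
    return logits, store["y"]

def naa_attack(model, layer, x, label, eps=16 / 255, steps=10, path_steps=30):
    """Sketch of a neuron-attribution-based feature-level attack.
    `x` is assumed to be a detached input batch in [0, 1]."""
    baseline = torch.zeros_like(x)  # assumed black-image baseline
    x_adv = x.clone().detach()
    alpha = eps / steps

    # Approximate neuron attribution: average the gradient of the true-class
    # logit w.r.t. the layer activations along the straight-line path from
    # the baseline to the clean input (one backward pass per path point).
    avg_grad = 0.0
    for k in range(1, path_steps + 1):
        x_k = (baseline + k / path_steps * (x - baseline)).detach().requires_grad_(True)
        logits, y = capture(model, layer, x_k)
        out = logits.gather(1, label.view(-1, 1)).sum()
        avg_grad = avg_grad + torch.autograd.grad(out, y)[0] / path_steps
    avg_grad = avg_grad.detach()

    with torch.no_grad():
        _, y_base = capture(model, layer, baseline)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        _, y_adv = capture(model, layer, x_adv)
        # Total attribution of the current activations; descending on it
        # suppresses neurons that push the model toward the true class.
        # (The paper additionally transforms positive and negative
        # attributions separately; that weighting is omitted here.)
        attribution = ((y_adv - y_base) * avg_grad).sum()
        grad_x = torch.autograd.grad(attribution, x_adv)[0]
        x_adv = x_adv.detach() - alpha * grad_x.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```

In practice, `layer` would be a concrete middle module of the surrogate, e.g. `model.layer2` for a torchvision ResNet; the adversarial samples returned by `naa_attack` would then be evaluated against the unseen target model.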

Related research

07/23/2019 · Enhancing Adversarial Example Transferability with an Intermediate Level Attack
Neural networks are vulnerable to adversarial examples, malicious inputs...

05/02/2023 · Boosting Adversarial Transferability via Fusing Logits of Top-1 Decomposed Feature
Recent research has shown that Deep Neural Networks (DNNs) are highly vu...

11/25/2019 · Explaining Neural Networks via Perturbing Important Learned Features
Attributing the output of a neural network to the contribution of given ...

03/28/2023 · Improving the Transferability of Adversarial Samples by Path-Augmented Method
Deep neural networks have achieved unprecedented success on diverse visi...

05/03/2023 · Defending against Insertion-based Textual Backdoor Attacks via Attribution
Textual backdoor attack, as a novel attack model, has been shown to be e...

03/28/2023 · Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization
Vision transformers (ViTs) have been successfully deployed in a variety ...

03/01/2023 · A Practical Upper Bound for the Worst-Case Attribution Deviations
Model attribution is a critical component of deep neural networks (DNNs)...
