Quantization Aware Attack: Enhancing the Transferability of Adversarial Attacks across Target Models with Different Quantization Bitwidths

05/10/2023
by Yulong Yang, et al.

Quantized Neural Networks (QNNs) are receiving increasing attention in resource-constrained scenarios because of their excellent generalization abilities, but their robustness under realistic black-box adversarial attacks has not been deeply studied. In this setting, the adversary needs to improve attack capability across target models whose quantization bitwidths are unknown. One major challenge is that adversarial examples transfer poorly to QNNs with unknown bitwidths because of two issues: quantization shift and gradient misalignment. This paper proposes the Quantization Aware Attack (QAA) to enhance attack transferability by making the substitute model "aware" that it will be used to attack target models with multiple quantization bitwidths. Specifically, we design a multi-bitwidth training objective that aligns the gradients of the substitute model with those of target models quantized to different bitwidths, thereby mitigating the negative effects of the two issues above. We conduct comprehensive evaluations by performing multiple transfer-based attacks on standard models and defense models with different architectures and quantization bitwidths. Experimental results show that QAA significantly improves the adversarial transferability of state-of-the-art attacks by 3.4 3.7
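The abstract only describes the objective at a high level; as a rough illustration, a multi-bitwidth training step for the substitute model might look like the sketch below. The helper `quantize_weights` is a hypothetical uniform symmetric quantizer standing in for whatever quantizer the authors actually use, and the set of bitwidths is an assumed example; this is a sketch of the general idea, not the paper's implementation.

```python
import torch
import torch.nn.functional as F


def quantize_weights(model, bits):
    """Uniform symmetric weight quantization (hypothetical stand-in for the
    paper's quantizer). Returns the saved full-precision weights so they can
    be restored after the forward/backward pass."""
    saved = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:  # quantize weight matrices/kernels only
            saved[name] = p.data.clone()
            qmax = 2 ** (bits - 1) - 1
            scale = p.data.abs().max().clamp(min=1e-8) / qmax
            p.data = torch.round(p.data / scale).clamp(-qmax - 1, qmax) * scale
    return saved


def restore_weights(model, saved):
    for name, p in model.named_parameters():
        if name in saved:
            p.data = saved[name]


def multi_bitwidth_step(model, optimizer, x, y, bitwidths=(2, 4, 8, 32)):
    """One substitute-model training step whose loss is averaged over several
    quantization bitwidths, so the substitute's gradients stay informative
    against targets quantized to any of these bitwidths."""
    optimizer.zero_grad()
    total = 0.0
    for bits in bitwidths:
        saved = quantize_weights(model, bits) if bits < 32 else {}
        loss = F.cross_entropy(model(x), y) / len(bitwidths)
        loss.backward()              # gradients accumulate across bitwidths
        restore_weights(model, saved)
        total += loss.item()
    optimizer.step()                 # update the shared full-precision weights
    return total
```

In a full pipeline, this step would replace the standard training step of the substitute model; adversarial examples would then be crafted on the resulting substitute with any off-the-shelf transfer attack.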

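The gradient misalignment issue mentioned above can also be made concrete: transfer attacks follow the substitute's input gradient, so one can measure how well it agrees with the target's. The helper below is an illustrative measurement of that agreement via cosine similarity, not a procedure taken from the paper.

```python
import torch
import torch.nn.functional as F


def input_gradient(model, x, y):
    """Gradient of the classification loss w.r.t. the input, i.e. the
    direction a transfer attack crafted on this model would follow."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return x.grad.detach()


def gradient_alignment(substitute, target, x, y):
    """Mean cosine similarity between substitute and target input gradients;
    values near 1 indicate well-aligned, highly transferable attack directions."""
    g_s = input_gradient(substitute, x, y).flatten(1)
    g_t = input_gradient(target, x, y).flatten(1)
    return F.cosine_similarity(g_s, g_t, dim=1).mean()
```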

Related research

Enhancing Adversarial Attacks: The Similar Target Method (08/21/2023)
TREND: Transferability based Robust ENsemble Design (08/04/2020)
Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks (09/27/2019)
GNP Attack: Transferable Adversarial Examples via Gradient Norm Penalty (07/09/2023)
EI-MTD: Moving Target Defense for Edge Intelligence against Adversarial Attacks (09/19/2020)
Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling (01/27/2021)
Attack on Multi-Node Attention for Object Detection (08/16/2020)