Improving Robustness of Adversarial Attacks Using an Affine-Invariant Gradient Estimator

by Wenzhao Xiang, et al.

Adversarial examples can deceive a deep neural network (DNN) by significantly altering its response with imperceptible perturbations, which poses new potential vulnerabilities given the growing ubiquity of DNNs. However, most existing adversarial examples lose their malicious functionality if an affine transformation is applied to them, and robustness to such transformations is an important measure of the practical risk posed by an attack. To address this issue, we propose an affine-invariant adversarial attack that can consistently construct adversarial examples robust over a distribution of affine transformations. To further improve efficiency, we propose to disentangle the affine transformation into rotations, translations, and magnifications, and reformulate the transformation in polar space. We then construct an affine-invariant gradient estimator by convolving the gradient at the original image with derived kernels, which can be integrated with any gradient-based attack method. Extensive experiments on ImageNet demonstrate that our method consistently produces adversarial examples that are more robust under significant affine transformations and, as a byproduct, improves the transferability of adversarial examples compared with alternative state-of-the-art methods.
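The core idea of estimating a transformation-invariant gradient by convolving the input gradient with a kernel can be sketched as follows. This is a minimal illustration, not the paper's method: the kernel shape (a Gaussian), its size and sigma, and the single FGSM-style step are all assumptions for demonstration; the paper derives its kernels from the distribution of affine transformations in polar space.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=2.0):
    # Hypothetical stand-in for the paper's derived kernel:
    # a 2-D Gaussian, normalized to sum to 1.
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def smooth_gradient(grad, kernel):
    # Convolve each channel of the gradient (C, H, W) with the kernel
    # using zero padding, approximating an expectation of gradients
    # over a distribution of transformed inputs.
    pad = kernel.shape[0] // 2
    padded = np.pad(grad, ((0, 0), (pad, pad), (pad, pad)), mode="constant")
    out = np.zeros_like(grad)
    kh, kw = kernel.shape
    for c in range(grad.shape[0]):
        for i in range(grad.shape[1]):
            for j in range(grad.shape[2]):
                out[c, i, j] = np.sum(padded[c, i:i + kh, j:j + kw] * kernel)
    return out

def fgsm_step(x, grad, eps=8 / 255):
    # One FGSM-style update using the smoothed gradient; any
    # gradient-based attack could consume the estimator the same way.
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)
```

In practice the per-pixel loop would be replaced by a depthwise convolution in the attack's deep-learning framework; the point here is only that the estimator slots in as a drop-in transform of the raw gradient before the sign step.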




Nesterov Accelerated Gradient and Scale Invariance for Improving Transferability of Adversarial Examples

Recent evidence suggests that deep neural networks (DNNs) are vulnerable...

Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks

Deep neural networks have been shown to be very vulnerable to adversarial exa...

Affine Disentangled GAN for Interpretable and Robust AV Perception

Autonomous vehicles (AV) have progressed rapidly with the advancements i...

Towards Defending against Adversarial Examples via Attack-Invariant Features

Deep neural networks (DNNs) are vulnerable to adversarial noise. Their a...

Synthesizing Robust Adversarial Examples

Neural network-based classifiers parallel or exceed human-level accuracy...

Metamorphic Detection of Adversarial Examples in Deep Learning Models With Affine Transformations

Adversarial attacks are small, carefully crafted perturbations, impercep...

Robust Synthesis of Adversarial Visual Examples Using a Deep Image Prior

We present a novel method for generating robust adversarial image exampl...