Improving Robustness of Adversarial Attacks Using an Affine-Invariant Gradient Estimator

09/13/2021
by   Wenzhao Xiang, et al.

Adversarial examples can deceive a deep neural network (DNN) by significantly altering its response with imperceptible perturbations, posing new potential vulnerabilities as DNNs become increasingly ubiquitous. However, most existing adversarial examples lose their malicious functionality when an affine transformation is applied to them, an important measure of an attack's robustness under practical conditions. To address this issue, we propose an affine-invariant adversarial attack that consistently constructs adversarial examples robust over a distribution of affine transformations. To further improve efficiency, we disentangle the affine transformation into rotations, translations, and magnifications, and reformulate the transformation in polar space. We then construct an affine-invariant gradient estimator by convolving the gradient at the original image with derived kernels, which can be integrated with any gradient-based attack method. Extensive experiments on ImageNet demonstrate that our method consistently produces adversarial examples that are more robust under significant affine transformations and, as a byproduct, more transferable than those generated by alternative state-of-the-art methods.
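The core mechanism, convolving the input gradient with a kernel before taking an attack step, can be sketched as follows. This is a minimal illustration only: it substitutes a simple Gaussian kernel for the paper's derived affine-invariant kernels, and the function names (`gaussian_kernel`, `smooth_gradient`, `fgsm_step`) are hypothetical, not the authors' code.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=3.0):
    """2-D Gaussian kernel, normalized to sum to 1.
    Stands in for the paper's derived affine-invariant kernels."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth_gradient(grad, kernel):
    """Convolve an (H, W, C) gradient with the kernel channel-wise,
    zero-padding so the output shape matches the input."""
    ks = kernel.shape[0]
    pad = ks // 2
    g = np.pad(grad, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros_like(grad)
    H, W, _ = grad.shape
    for i in range(H):
        for j in range(W):
            patch = g[i:i + ks, j:j + ks, :]
            # Weighted sum over the spatial window for every channel.
            out[i, j] = np.tensordot(kernel, patch, axes=([0, 1], [0, 1]))
    return out

def fgsm_step(image, grad, kernel, eps=8 / 255):
    """One FGSM-style step using the smoothed gradient in place of the
    raw gradient; `grad` is assumed to be the loss gradient w.r.t. `image`."""
    return np.clip(image + eps * np.sign(smooth_gradient(grad, kernel)), 0.0, 1.0)
```

Because the estimator only replaces the raw gradient with its convolved version, it plugs into iterative variants (I-FGSM, MI-FGSM, etc.) unchanged, which is what the abstract means by "integrated with any gradient-based attack method."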

