Understanding and Enhancing the Transferability of Adversarial Examples

02/27/2018
by   Lei Wu, et al.
State-of-the-art deep neural networks are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, these perturbations can transfer across models: adversarial examples generated for one model often mislead other, unseen models. An adversary can therefore attack deployed systems without issuing any queries, which severely hinders the application of deep learning, especially in security-critical areas. In this work, we systematically study two classes of factors that influence the transferability of adversarial examples. The first comprises model-specific factors, including network architecture, model capacity, and test accuracy. The second is the local smoothness of the loss function used to construct adversarial examples. Based on this understanding, we propose a simple but effective strategy for enhancing transferability. We call it the variance-reduced attack, since it uses a variance-reduced gradient to generate adversarial examples. Its effectiveness is confirmed by a variety of experiments on both the CIFAR-10 and ImageNet datasets.
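The abstract does not spell out the algorithm, but a common way to realize a variance-reduced gradient is to average the loss gradient over several Gaussian-perturbed copies of the input before taking an FGSM-style sign step. The sketch below illustrates that idea; the function names, the smoothing scale `sigma`, and the sample count `m` are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def variance_reduced_gradient(grad_fn, x, sigma=0.05, m=20, rng=None):
    """Average the loss gradient over m Gaussian-perturbed copies of x.

    Smoothing the gradient this way damps its local fluctuations, which
    is the intuition behind the variance-reduced attack. `grad_fn(x)`
    is assumed to return dL/dx for the surrogate model's loss L.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(m):
        g += grad_fn(x + sigma * rng.standard_normal(x.shape))
    return g / m

def vr_fgsm_step(grad_fn, x, eps=0.1, **smoothing_kwargs):
    """One FGSM-style step that uses the smoothed gradient instead of
    the raw one, clipping the result back to the valid input range."""
    g = variance_reduced_gradient(grad_fn, x, **smoothing_kwargs)
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)

# Toy usage with a quadratic loss L(x) = ||x||^2, so dL/dx = 2x.
grad_fn = lambda x: 2.0 * x
x = np.full(4, 0.5)
adv = vr_fgsm_step(grad_fn, x, eps=0.1)
```

In a real attack, `grad_fn` would backpropagate through the surrogate network; the averaging can also be iterated, replacing the raw gradient inside multi-step methods such as I-FGSM.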

