EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

09/13/2017
by Pin-Yu Chen, et al.

Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples: a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify. Existing methods for crafting adversarial examples are based on L_2 and L_∞ distortion metrics. However, despite the fact that L_1 distortion accounts for the total variation and encourages sparsity in the perturbation, little has been developed for crafting L_1-based adversarial examples. In this paper, we formulate the process of attacking DNNs via adversarial examples as an elastic-net regularized optimization problem. Our elastic-net attacks to DNNs (EAD) feature L_1-oriented adversarial examples and include the state-of-the-art L_2 attack as a special case. Experimental results on MNIST, CIFAR10 and ImageNet show that EAD can yield a distinct set of adversarial examples with small L_1 distortion and attain similar attack performance to state-of-the-art methods in different attack scenarios. More importantly, EAD leads to improved attack transferability and complements adversarial training for DNNs, suggesting novel insights on leveraging L_1 distortion in adversarial machine learning and the security implications of DNNs.
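The abstract describes the attack as an elastic-net regularized optimization: a classification attack loss plus an L_1 and a squared L_2 penalty on the perturbation, minimized over the valid pixel range. Below is a minimal sketch of that idea, assuming a PyTorch classifier, a Carlini-Wagner style targeted loss, and a plain ISTA-style update; the paper itself uses an accelerated (FISTA-like) solver and tuned values of the trade-off constants c and beta, so the names, hyperparameters, and solver details here are illustrative assumptions rather than the authors' exact implementation.

import torch
import torch.nn.functional as F


def cw_loss(logits, target, kappa=0.0):
    """Carlini-Wagner style targeted loss: drive the target logit above all others."""
    target_logit = logits.gather(1, target.view(-1, 1)).squeeze(1)
    other_logit = logits.masked_fill(
        F.one_hot(target, logits.size(1)).bool(), float("-inf")
    ).max(dim=1).values
    return torch.clamp(other_logit - target_logit + kappa, min=0.0)


def ead_attack(model, x0, target, c=1.0, beta=1e-2, lr=1e-2, steps=200):
    """Sketch: minimize c * f(x, t) + beta * ||x - x0||_1 + ||x - x0||_2^2 over x in [0, 1]."""
    x = x0.clone()
    for _ in range(steps):
        y = x.detach().requires_grad_(True)
        logits = model(y)
        # Smooth part of the objective: attack loss plus the squared L2 penalty.
        smooth = c * cw_loss(logits, target).sum() + ((y - x0) ** 2).sum()
        (grad,) = torch.autograd.grad(smooth, y)
        z = y.detach() - lr * grad  # gradient step on the smooth terms
        # ISTA soft-thresholding handles the L1 penalty, then project to valid pixels.
        delta = z - x0
        delta = torch.sign(delta) * torch.clamp(delta.abs() - lr * beta, min=0.0)
        x = torch.clamp(x0 + delta, 0.0, 1.0)
    return x.detach()

With beta set to 0 the shrinkage step becomes a no-op and the update reduces to a projected-gradient minimization of the C&W-style L_2 objective, which is consistent with the abstract's remark that the state-of-the-art L_2 attack is a special case of EAD.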


Related research

10/30/2017 | Attacking the Madry Defense Model with L_1-based Adversarial Examples
The Madry Lab recently hosted a competition designed to test the robustn...

08/05/2018 | Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models
The prediction accuracy has been the long-lasting and sole standard for ...

05/24/2016 | Measuring Neural Net Robustness with Constraints
Despite having high accuracy, neural nets have been shown to be suscepti...

08/05/2018 | Structured Adversarial Attack: Towards General Implementation and Better Interpretability
When generating adversarial examples to attack deep neural networks (DNN...

09/13/2017 | A Learning and Masking Approach to Secure Learning
Deep Neural Networks (DNNs) have been shown to be vulnerable against adv...

02/10/2020 | ABBA: Saliency-Regularized Motion-Based Adversarial Blur Attack
Deep neural networks are vulnerable to noise-based adversarial examples,...

02/14/2020 | Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets
Skip connections are an essential component of current state-of-the-art ...
