Generating Structured Adversarial Attacks Using Frank-Wolfe Method

02/15/2021
by Ehsan Kazemi, et al.

White-box adversarial perturbations are generated via iterative optimization algorithms, most often by minimizing an adversarial loss over an ℓ_p neighborhood of the original image, the so-called distortion set. Constraining the adversarial search with different norms yields differently structured adversarial examples. Here we explore several distortion sets with structure-enhancing algorithms. These new structures for adversarial examples may pose challenges for both provable and empirical robustness mechanisms. Because adversarial robustness is still an empirical field, defense mechanisms should also be evaluated against differently structured attacks. Moreover, these structured adversarial perturbations may allow for larger distortion sizes than their ℓ_p counterparts while remaining imperceptible, or perceptible only as natural distortions of the image. We demonstrate in this work that the proposed structured adversarial examples can significantly reduce the classification accuracy of adversarially trained classifiers while exhibiting low ℓ_2 distortion. For instance, on the ImageNet dataset the structured attacks drop the accuracy of an adversarially trained model to near zero using only 50% of the ℓ_2 distortion of white-box attacks such as PGD. As a byproduct, our findings on structured adversarial examples can be used for adversarial regularization, making models more robust or improving their generalization performance on structurally different datasets.
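As background on the attack family described above, the following is a minimal sketch (not the authors' implementation) of a Frank-Wolfe white-box attack over an ℓ_∞ distortion set, written in PyTorch. The names `model`, `x`, `y`, `eps`, and `steps` are placeholders assumed for illustration; the paper's structured variants would swap in a different distortion set (e.g., an ℓ_1 or nuclear-norm ball), which changes only the linear maximization oracle.

```python
import torch
import torch.nn.functional as F

def frank_wolfe_linf_attack(model, x, y, eps=8 / 255, steps=20):
    """Maximize the classification loss of `model` at `x` over the
    distortion set {delta : ||delta||_inf <= eps} with Frank-Wolfe."""
    delta = torch.zeros_like(x)
    for t in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Linear maximization oracle for the l_inf ball: the vertex
        # eps * sign(grad) maximizes <grad, s> over ||s||_inf <= eps.
        s = eps * grad.sign()
        gamma = 2.0 / (t + 2.0)  # classical Frank-Wolfe step size
        # Convex combination keeps delta inside the distortion set.
        delta = ((1.0 - gamma) * delta + gamma * s).detach()
    # Clamp to a valid pixel range (assumes inputs scaled to [0, 1]).
    return (x + delta).clamp(0.0, 1.0)
```

Swapping the oracle is what changes the structure of the perturbation: over an ℓ_1 ball the vertices are single-coordinate spikes (sparse perturbations), while over a nuclear-norm ball they are rank-one matrices, yielding low-rank perturbations.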
