Trace-Norm Adversarial Examples

07/02/2020
by   Ehsan Kazemi, et al.

White-box adversarial perturbations are typically sought via iterative optimization algorithms that minimize an adversarial loss over an l_p neighborhood of the original image, the so-called distortion set. Constraining the adversarial search with different norms yields adversarial examples with markedly different structures. Here we explore several distortion sets together with structure-enhancing algorithms. These new structures for adversarial examples, though pervasive in optimization, pose a challenge for theoretical adversarial certification, which so far provides only l_p certificates. Because adversarial robustness remains a largely empirical field, defense mechanisms should also reasonably be evaluated against differently structured attacks. Moreover, these structured adversarial perturbations may allow for larger distortion sizes than their l_p counterparts while remaining imperceptible, or perceptible only as slight natural distortions of the image. Finally, they allow some control over the generation of the adversarial perturbation, such as (localized) blurriness.
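The key building block behind such attacks is a Euclidean projection onto the chosen distortion set, applied after each gradient step. As a minimal sketch (not the authors' implementation), the snippet below projects a perturbation matrix onto a trace-norm (nuclear-norm) ball: since the nuclear norm is the l_1 norm of the singular values, the projection reduces to an SVD followed by the standard l_1-ball projection of the singular values. The function names and the radius value are illustrative choices, not from the paper.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Project a nonnegative vector v onto the l1 ball of the given radius
    (standard sorting-based simplex projection)."""
    if v.sum() <= radius:
        return v
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * idx > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)  # soft-threshold level
    return np.maximum(v - theta, 0.0)

def project_trace_norm_ball(delta, radius):
    """Project a perturbation matrix onto the trace-norm (nuclear-norm)
    ball: project the singular values onto the l1 ball, then rebuild."""
    U, s, Vt = np.linalg.svd(delta, full_matrices=False)
    s_proj = project_l1_ball(s, radius)
    return (U * s_proj) @ Vt                  # scale columns of U by s_proj

# Toy usage: one projected-gradient step on a random perturbation.
rng = np.random.default_rng(0)
delta = rng.normal(size=(8, 8))
delta = project_trace_norm_ball(delta, radius=1.0)
print(np.linalg.norm(delta, "nuc"))           # at most the radius 1.0
```

Swapping this projection for an l_inf clip or an l_2 rescaling recovers the familiar l_p attacks; the trace-norm version instead encourages low-rank, spatially structured perturbations.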


Related research

- 02/15/2021: Generating Structured Adversarial Attacks Using Frank-Wolfe Method
  White box adversarial perturbations are generated via iterative optimiza...
- 07/07/2020: Regional Image Perturbation Reduces L_p Norms of Adversarial Examples While Maintaining Model-to-model Transferability
  Regional adversarial attacks often rely on complicated methods for gener...
- 08/05/2018: Structured Adversarial Attack: Towards General Implementation and Better Interpretability
  When generating adversarial examples to attack deep neural networks (DNN...
- 12/12/2022: Robust Perception through Equivariance
  Deep networks for computer vision are not reliable when they encounter a...
- 05/08/2020: Towards Robustness against Unsuspicious Adversarial Examples
  Despite the remarkable success of deep neural networks, significant conc...
- 01/29/2020: Semantic Adversarial Perturbations using Learnt Representations
  Adversarial examples for image classifiers are typically created by sear...
- 11/05/2020: Data Augmentation via Structured Adversarial Perturbations
  Data augmentation is a major component of many machine learning methods ...
