On the Efficacy of Metrics to Describe Adversarial Attacks

01/30/2023
by Tommaso Puccetti, et al.

Adversarial defenses are naturally evaluated on their ability to tolerate adversarial attacks. To test a defense, diverse adversarial attacks are crafted, and they are usually described in terms of their evading capability and their L0, L1, L2, and Linf norms. We question whether evading capability and L-norms provide the most effective information for claiming that a defense has been tested against a representative attack set. To this end, we select image quality metrics from the state of the art and search for correlations between image perturbation and detectability. We observe that computing L-norms alone is rarely the preferable solution. We find a strong correlation between the identified metrics computed on an adversarial image and the output of a detector on that image, to the extent that the metrics can predict the detector's response with approximately 0.94 accuracy. Further, we observe that the metrics can group attacks by similar perturbations and similar detectability. This suggests revising the way detectors are evaluated, with additional metrics included to ensure that a representative attack dataset is selected.
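The abstract refers to the L0, L1, L2, and Linf norms commonly used to characterize adversarial perturbations. As a point of reference, the sketch below (not taken from the paper; the image arrays and function name are illustrative placeholders) shows how these norms can be computed for a clean/adversarial image pair with NumPy:

```python
# Minimal sketch, not the paper's code: L0, L1, L2, and Linf norms of the
# pixel-wise perturbation between a clean image and its adversarial version.
import numpy as np

def perturbation_norms(clean: np.ndarray, adversarial: np.ndarray) -> dict:
    """Return the L0, L1, L2, and Linf norms of the perturbation."""
    delta = (adversarial.astype(np.float64) - clean.astype(np.float64)).ravel()
    return {
        "L0": float(np.count_nonzero(delta)),     # number of pixels changed
        "L1": float(np.abs(delta).sum()),         # total absolute change
        "L2": float(np.sqrt((delta ** 2).sum())), # Euclidean magnitude
        "Linf": float(np.abs(delta).max()),       # largest single-pixel change
    }

# Usage with random stand-in images (hypothetical data, for illustration only):
clean = np.random.rand(32, 32, 3)
adversarial = np.clip(clean + np.random.uniform(-0.03, 0.03, clean.shape), 0.0, 1.0)
print(perturbation_norms(clean, adversarial))
```

The image quality metrics the paper studies would be computed on the same clean/adversarial pairs; for instance, SSIM and PSNR are available in scikit-image as structural_similarity and peak_signal_noise_ratio.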


Related research

05/23/2023  The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks
05/31/2022  Hide and Seek: on the Stealthiness of Attacks against Deep Learning Systems
07/07/2020  Regional Image Perturbation Reduces L_p Norms of Adversarial Examples While Maintaining Model-to-model Transferability
12/05/2022  Multiple Perturbation Attack: Attack Pixelwise Under Different ℓ_p-norms For Better Adversarial Performance
02/14/2021  Perceptually Constrained Adversarial Attacks
05/06/2020  GraCIAS: Grassmannian of Corrupted Images for Adversarial Security
05/24/2023  Fast Adversarial CNN-based Perturbation Attack on No-Reference Image- and Video-Quality Metrics
