Defense-friendly Images in Adversarial Attacks: Dataset and Metrics for Perturbation Difficulty

11/05/2020
by Camilo Pestana, et al.

Dataset bias is a problem in adversarial machine learning, especially in the evaluation of defenses. An adversarial attack or defense algorithm may show better results on the reported dataset than can be replicated on other datasets, and even when two algorithms are compared, their relative performance can vary depending on the dataset. Deep learning offers state-of-the-art solutions for image recognition, but deep models are vulnerable even to small perturbations. Research in this area has focused primarily on adversarial attacks and defense algorithms. In this paper, we report, for the first time, a class of robust images that are both resilient to attacks and that recover better than random images under adversarial attack when simple defense techniques are applied. A test dataset with a high proportion of such robust images therefore gives a misleading impression of the performance of an adversarial attack or defense. We propose three metrics to determine the proportion of robust images in a dataset and a scoring scheme to quantify the resulting dataset bias. We also provide an ImageNet-R dataset of 15,000+ robust images to facilitate further research on this intriguing phenomenon of image strength under attack. Our dataset, combined with the proposed metrics, is valuable for the unbiased benchmarking of adversarial attack and defense algorithms.
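The abstract does not spell out the three proposed metrics, but the underlying measurement, what fraction of a dataset's images stay correctly classified under attack, can be illustrated with a minimal sketch. The snippet below assumes a PyTorch classifier and uses a single-step FGSM attack as a stand-in for the paper's attacks; the function names, the epsilon value, and the choice of FGSM are illustrative assumptions, not the paper's actual metrics.

```python
# Minimal sketch (assumptions: PyTorch model, FGSM as a proxy attack).
# Estimates a "robust fraction": the share of correctly classified images
# whose prediction survives the perturbation. Not the paper's metrics.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=4 / 255):
    """Single-step FGSM perturbation of a batch of images in [0, 1]."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0, 1).detach()

@torch.no_grad()
def predict(model, images):
    """Top-1 class predictions for a batch."""
    return model(images).argmax(dim=1)

def robust_fraction(model, loader, epsilon=4 / 255, device="cpu"):
    """Fraction of clean-correct images still classified correctly
    after the FGSM perturbation (a hypothetical robustness score)."""
    model = model.eval().to(device)
    robust, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        correct = predict(model, images) == labels  # score clean hits only
        adv = fgsm_perturb(model, images, labels, epsilon)
        robust += (correct & (predict(model, adv) == labels)).sum().item()
        total += correct.sum().item()
    return robust / max(total, 1)
```

With, say, a torchvision ResNet and an ImageNet validation loader, a test set whose robust fraction is unusually high would, by the paper's argument, overstate the apparent strength of a defense (or weakness of an attack) relative to an unbiased sample.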

Related research

01/09/2023 · On the Susceptibility and Robustness of Time Series Models through Adversarial Attack and Defense
Under adversarial attacks, time series regression and classification are...

10/25/2020 · Attack Agnostic Adversarial Defense via Visual Imperceptible Bound
The high susceptibility of deep learning algorithms against structured a...

09/30/2019 · Hidden Trigger Backdoor Attacks
With the success of deep learning algorithms in various domains, studyin...

06/21/2023 · Adversarial Attacks Neutralization via Data Set Randomization
Adversarial attacks on deep-learning models pose a serious threat to the...

06/13/2019 · A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks
The reliance on deep learning algorithms has grown significantly in rece...

02/21/2020 · UnMask: Adversarial Detection and Defense Through Robust Feature Alignment
Deep learning models are being integrated into a wide range of high-impa...

03/09/2022 · Reverse Engineering ℓ_p attacks: A block-sparse optimization approach with recovery guarantees
Deep neural network-based classifiers have been shown to be vulnerable t...
