Perceptual Evaluation of Adversarial Attacks for CNN-based Image Classification

06/01/2019
by   Sid Ahmed Fezza, et al.

Deep neural networks (DNNs) have recently achieved state-of-the-art performance and driven significant progress in many machine learning tasks, such as image classification, speech processing, and natural language processing. However, recent studies have shown that DNNs are vulnerable to adversarial attacks. For instance, in image classification, adding small imperceptible perturbations to the input image is sufficient to fool a DNN and cause misclassification. The perturbed image, called an adversarial example, should be visually as close as possible to the original image. However, the works proposed in the literature for generating adversarial examples have used the L_p norms (L_0, L_2 and L_∞) as distance metrics to quantify the similarity between the original image and the adversarial example. Nonetheless, the L_p norms do not correlate with human judgment, making them unsuitable for reliably assessing the perceptual similarity/fidelity of adversarial examples. In this paper, we present a database for visual fidelity assessment of adversarial examples. We describe the creation of the database and evaluate the performance of fifteen state-of-the-art full-reference (FR) image fidelity assessment metrics that could substitute for the L_p norms. The database, along with the subjective scores, is publicly available to help in designing new metrics for adversarial examples and to facilitate future research.
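As a minimal sketch (not code from the paper), the three L_p distances mentioned above can be computed between an original image and its adversarial counterpart as follows; the function name and the assumption of float images in [0, 1] are illustrative:

```python
import numpy as np

def lp_distances(original, adversarial):
    """Return the (L_0, L_2, L_inf) distances between two images.

    Both inputs are float arrays of the same shape, e.g. values in [0, 1].
    """
    delta = adversarial.astype(np.float64) - original.astype(np.float64)
    l0 = np.count_nonzero(delta)         # L_0: number of modified values
    l2 = np.linalg.norm(delta.ravel())   # L_2: Euclidean magnitude of the perturbation
    linf = np.abs(delta).max()           # L_inf: largest single-value change
    return l0, l2, linf
```

The paper's point is precisely that such pixel-domain norms can be small while the perturbation is still visible (or large while it is imperceptible), which motivates replacing them with full-reference perceptual metrics validated against the subjective scores in the database.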

