Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks

08/12/2023
by Roman Garaev, et al.

Deep neural networks (DNNs) have gained prominence in various applications, such as classification, recognition, and prediction, prompting increased scrutiny of their properties. A fundamental attribute of traditional DNNs is their vulnerability to small modifications of input data, which has motivated the study of adversarial attacks: perturbations crafted to mislead a DNN. This study challenges the efficacy and generalization of contemporary defense mechanisms against such attacks. Specifically, we examine the hypothesis proposed by Ilyas et al., which posits that DNN image features can be either robust or non-robust, with adversarial attacks targeting the latter. Under this hypothesis, training a DNN on a dataset consisting solely of robust features should produce a model resistant to adversarial attacks. Our experiments demonstrate, however, that this is not universally true. To gain further insight into our findings, we analyze the impact of the attack norm on DNN representations, focusing on samples subjected to L_2 and L_∞ norm attacks. We employ canonical correlation analysis, visualize the resulting representations, and compute the mean distance between these representations and various DNN decision boundaries. Our results reveal a significant difference between the effects of L_2 and L_∞ norm attacks, suggesting that the danger posed by L_∞ norm attacks has previously been underestimated by the research community.
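The norms referenced above correspond to the perturbation budget of the underlying attack. As a minimal sketch of how the two attack families differ in practice, the following plain-PyTorch PGD implementation contrasts the L_∞ update (gradient sign step, box projection) with the L_2 update (normalized gradient step, ball projection). The function name, step sizes, and the assumption of 4-D image inputs scaled to [0, 1] are illustrative, not the paper's exact experimental setup.

```python
# Hypothetical sketch of norm-constrained PGD in PyTorch; not the authors' code.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps, norm="linf"):
    """Projected gradient descent inside an L_inf or L_2 ball of radius eps.

    Assumes x is a batch of images with shape (N, C, H, W) in [0, 1].
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            if norm == "linf":
                # Step along the gradient sign, then clip back into the box.
                x_adv = x_adv + alpha * grad.sign()
                x_adv = x + torch.clamp(x_adv - x, -eps, eps)
            else:  # "l2"
                # Step along the per-sample normalized gradient.
                g_norm = grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
                x_adv = x_adv + alpha * grad / (g_norm + 1e-12)
                # Project the perturbation back onto the L_2 ball.
                delta = x_adv - x
                d_norm = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
                delta = delta * (eps / d_norm).clamp(max=1.0)
                x_adv = x + delta
            x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```

A model trained on an Ilyas-style "robust features" dataset can then be probed under both regimes by comparing its accuracy on pgd_attack(..., norm="linf") versus pgd_attack(..., norm="l2") inputs.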
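The canonical correlation analysis mentioned above is a standard way to compare internal representations across conditions (here, clean versus attacked inputs). Below is a rough scikit-learn sketch, assuming activations have already been extracted as (n_samples, n_features) matrices; the helper name and the component count are hypothetical choices.

```python
# Hypothetical sketch of a CCA-based representation comparison; illustrative only.
import numpy as np
from sklearn.cross_decomposition import CCA

def mean_canonical_correlation(acts_clean, acts_adv, n_components=10):
    """Average canonical correlation between two activation matrices,
    e.g. a layer's responses to clean vs. adversarial versions of the
    same samples. Both inputs have shape (n_samples, n_features)."""
    cca = CCA(n_components=n_components, max_iter=1000)
    u, v = cca.fit_transform(acts_clean, acts_adv)
    corrs = [np.corrcoef(u[:, i], v[:, i])[0, 1] for i in range(n_components)]
    return float(np.mean(corrs))
```

In practice, high-dimensional activations are often reduced with PCA/SVD before CCA (as in SVCCA) to keep the problem well conditioned; the sketch above omits that step for brevity.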


research · 09/05/2020
Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks
Deep neural networks (DNNs) are now commonly used in many domains. Howev...

research · 02/17/2021
Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids
False data injection attack (FDIA) is a critical security issue in power...

research · 02/23/2019
A Deep, Information-theoretic Framework for Robust Biometric Recognition
Deep neural networks (DNN) have been a de facto standard for nowadays bi...

research · 07/08/2020
On the relationship between class selectivity, dimensionality, and robustness
While the relative trade-offs between sparse and distributed representat...

research · 12/07/2018
Deep-RBF Networks Revisited: Robust Classification with Rejection
One of the main drawbacks of deep neural networks, like many other class...

research · 06/21/2020
Network Moments: Extensions and Sparse-Smooth Attacks
The impressive performance of deep neural networks (DNNs) has immensely ...

research · 03/12/2021
Game-theoretic Understanding of Adversarially Learned Features
This paper aims to understand adversarial attacks and defense from a new...
