Analyzing the Noise Robustness of Deep Neural Networks

10/09/2018
by Mengchen Liu, et al.

Deep neural networks (DNNs) are vulnerable to maliciously generated adversarial examples. These examples are intentionally crafted by adding imperceptible perturbations and often mislead a DNN into making an incorrect prediction. This vulnerability poses a significant risk when applying DNNs to safety-critical applications, such as driverless cars. To address this issue, we present a visual analytics approach to explaining the primary cause of the wrong predictions introduced by adversarial examples. The key is to analyze the datapaths of the adversarial examples and compare them with those of normal examples. A datapath is a group of critical neurons and their connections. To this end, we formulate datapath extraction as a subset selection problem and approximately solve it based on back-propagation. A multi-level visualization, consisting of a segmented DAG (layer level), an Euler diagram (feature map level), and a heat map (neuron level), has been designed to help experts investigate datapaths from the high-level layers down to the detailed neuron activations. Two case studies are conducted that demonstrate the promise of our approach in explaining the working mechanism of adversarial examples.
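The abstract describes datapath extraction only at a high level. As a rough, hypothetical sketch (not the authors' actual subset-selection formulation), one can rank hidden neurons by |activation x gradient| with respect to the predicted-class logit, keep the top-k as a "datapath", and compare the datapaths of a clean input and a perturbed one. All weights, inputs, and function names below are illustrative assumptions:

```python
# Hypothetical sketch of gradient-based "datapath" extraction for a tiny
# 2-layer ReLU network. Weights and inputs are made up for illustration;
# this is NOT the paper's actual method, only the general idea.

def relu(x):
    return [max(0.0, v) for v in x]

def forward(x, W1, W2):
    # hidden pre-activations, hidden activations, output logits
    pre = [sum(w * xi for w, xi in zip(row, x)) for row in W1]
    h = relu(pre)
    logits = [sum(w * hi for w, hi in zip(row, h)) for row in W2]
    return pre, h, logits

def datapath(x, W1, W2, k=2):
    """Top-k hidden neurons ranked by |activation * gradient| w.r.t. the
    predicted-class logit -- a crude stand-in for the subset selection the
    paper approximately solves via back-propagation."""
    pre, h, logits = forward(x, W1, W2)
    c = max(range(len(logits)), key=lambda i: logits[i])  # predicted class
    # For this linear output layer, d(logit_c)/d(h_j) = W2[c][j];
    # the ReLU gate is already reflected in h_j (zero if inactive).
    scores = [abs(h[j] * W2[c][j]) for j in range(len(h))]
    return set(sorted(range(len(h)), key=lambda j: -scores[j])[:k])

# Toy weights: 2 inputs -> 3 hidden neurons -> 2 classes.
W1 = [[1.0, 0.5], [-0.5, 1.0], [0.8, -0.2]]
W2 = [[1.0, 0.2, 0.5], [-0.3, 1.0, 0.1]]

clean = [1.0, 0.5]
perturbed = [-0.2, 1.0]  # a large perturbation, for illustration only

dp_clean = datapath(clean, W1, W2)
dp_adv = datapath(perturbed, W1, W2)
print(dp_clean, dp_adv, dp_clean & dp_adv)  # compare critical-neuron sets
```

Comparing the two sets in the spirit of the paper: neurons in both datapaths are shared machinery, while neurons unique to the perturbed input's datapath hint at where the misprediction originates.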

