Sensitivity Analysis of Deep Neural Networks

01/22/2019
by Hai Shu, et al.

Deep neural networks (DNNs) have achieved superior performance in various prediction tasks, but can be very vulnerable to adversarial examples or perturbations. Therefore, it is crucial to measure the sensitivity of DNNs to various forms of perturbations in real applications. We introduce a novel perturbation manifold and its associated influence measure to quantify the effects of various perturbations on DNN classifiers. Such perturbations include various external and internal perturbations to input samples and network parameters. The proposed measure is motivated by information geometry and provides desirable invariance properties. We demonstrate that our influence measure is useful for four model building tasks: detecting potential 'outliers', analyzing the sensitivity of model architectures, comparing network sensitivity between training and test sets, and locating vulnerable areas. Experiments show reasonably good performance of the proposed measure for the popular DNN models ResNet50 and DenseNet121 on CIFAR10 and MNIST datasets.
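To make the idea concrete, the sketch below computes a first-order sensitivity score for an additive input perturbation to a linear softmax classifier: the gradient of the log-likelihood with respect to the perturbation, normalized by a diagonal Fisher-information approximation. This is a simplified illustration of an information-geometric influence measure, not the authors' exact construction; the function names and the linear-model and diagonal-Fisher simplifications are assumptions. The Fisher normalization is what gives the score the kind of invariance to reparameterizing the perturbation coordinates that the abstract highlights.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def influence_measure(x, W, y):
    """Illustrative sensitivity score for an additive input perturbation.

    For a linear softmax classifier p(y|x) = softmax(W x), returns the
    squared norm of the log-likelihood gradient with respect to the
    perturbation (evaluated at zero perturbation), measured in the metric
    given by a diagonal Fisher-information approximation. Rescaling a
    perturbation coordinate scales the gradient and the Fisher term so
    that the score is unchanged.
    """
    p = softmax(W @ x)
    # d log p(y | x + delta) / d delta at delta = 0
    grad = W[y] - p @ W
    # Diagonal Fisher approximation: E_p[(d log p_k / d delta)^2] per coordinate
    fisher_diag = sum(p[k] * (W[k] - p @ W) ** 2 for k in range(len(p)))
    fisher_diag = np.maximum(fisher_diag, 1e-12)  # guard against division by zero
    # Metric-normalized squared gradient norm
    return float(grad @ (grad / fisher_diag))
```

A larger score indicates that small perturbations move the classifier's predicted distribution more, which is the kind of signal the paper uses for tasks such as outlier detection and locating vulnerable input regions.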

Related research

- Blind Adversarial Network Perturbations (02/16/2020)
- An Estimator for the Sensitivity to Perturbations of Deep Neural Networks (07/24/2023)
- Examining the causal structures of deep neural networks using information theory (10/26/2020)
- The Limitations of Deep Learning in Adversarial Settings (11/24/2015)
- Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks (02/12/2018)
- Detecting Adversarial Samples from Artifacts (03/01/2017)
- Robust and Imperceptible Black-box DNN Watermarking Based on Fourier Perturbation Analysis and Frequency Sensitivity Clustering (08/08/2022)
