Identifying Layers Susceptible to Adversarial Attacks

07/10/2021
by Shoaib Ahmed Siddiqui, et al.

Common neural network architectures are susceptible to attack by adversarial samples. Neural network architectures are commonly thought of as divided into low-level feature extraction layers and high-level classification layers; susceptibility of networks to adversarial samples is often thought of as a problem related to classification rather than feature extraction. We test this idea by selectively retraining different portions of VGG and ResNet architectures on CIFAR-10, Imagenette and ImageNet using non-adversarial and adversarial data. Our experimental results show that susceptibility to adversarial samples is associated with low-level feature extraction layers. Therefore, retraining high-level layers is insufficient for achieving robustness. This phenomenon could have two explanations: either adversarial attacks yield outputs from early layers that are indistinguishable from features found in the attack classes, or adversarial attacks yield outputs from early layers that differ statistically from the features of non-adversarial samples and do not permit consistent classification by subsequent layers. We investigate this question by large-scale non-linear dimensionality reduction and density modeling on distributions of feature vectors in hidden layers, and find that the feature distributions of non-adversarial and adversarial samples differ substantially. Our results provide new insights into the statistical origins of adversarial samples and possible defenses.
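The selective-retraining experiment described above amounts to freezing one portion of the network (e.g. the low-level feature extraction layers) and passing only the remaining parameters to the optimizer. The following is a minimal PyTorch sketch of that mechanism; `ToyNet` is a hypothetical stand-in for a VGG-style model, not the architecture or training setup actually used in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a VGG-style network: a low-level feature
# extractor followed by a high-level classifier head.
class ToyNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(8 * 4 * 4, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def freeze(module):
    """Exclude a block's parameters from gradient updates."""
    for p in module.parameters():
        p.requires_grad_(False)

model = ToyNet()
freeze(model.features)  # retrain only the high-level head

# Hand the optimizer only the parameters still marked trainable.
trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.SGD(trainable, lr=0.01)

# One dummy step: frozen layers accumulate no gradients.
x = torch.randn(2, 3, 8, 8)
loss = model(x).sum()
loss.backward()
opt.step()
```

Retraining only the feature layers is the mirror image: freeze `model.classifier` instead. The paper's finding is that only the first variant fails to confer robustness.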

