Measuring the tendency of CNNs to Learn Surface Statistical Regularities

11/30/2017
by Jason Jo, et al.

Deep CNNs are known to exhibit the following peculiarity: on the one hand they generalize extremely well to a test set, while on the other hand they are extremely sensitive to so-called adversarial perturbations. The extreme sensitivity of high-performance CNNs to adversarial examples casts serious doubt that these networks are learning high-level abstractions in the dataset. We are concerned with the following question: How can a deep CNN that does not learn any high-level semantics of the dataset manage to generalize so well? The goal of this article is to measure the tendency of CNNs to learn surface statistical regularities of the dataset. To this end, we use Fourier filtering to construct datasets which share the exact same high-level abstractions but exhibit qualitatively different surface statistical regularities. For the SVHN and CIFAR-10 datasets, we present two Fourier-filtered variants: a low frequency variant and a randomly filtered variant. Each of the Fourier filtering schemes is tuned to preserve the recognizability of the objects. Our main finding is that CNNs exhibit a tendency to latch onto the Fourier image statistics of the training dataset, sometimes exhibiting up to a 28% generalization gap across the various test sets. Moreover, we observe that significantly increasing the depth of a network has a very marginal impact on closing the aforementioned generalization gap. Thus we provide quantitative evidence supporting the hypothesis that deep CNNs tend to learn surface statistical regularities in the dataset rather than higher-level abstract concepts.
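The two dataset variants described above can be illustrated with a short NumPy sketch. This is not the authors' code; the filter shapes, radii, and keep-fractions below are hypothetical placeholders for the kind of low-pass and random Fourier masks the abstract describes.

```python
import numpy as np

def low_pass_filter(image, radius):
    """Keep only frequencies inside a centered disk of the given radius
    (illustrative low-frequency variant; exact parameters are assumptions)."""
    # Shift the zero-frequency component to the center of the spectrum.
    f = np.fft.fftshift(np.fft.fft2(image, axes=(0, 1)), axes=(0, 1))
    h, w = image.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= radius ** 2
    if image.ndim == 3:  # broadcast mask over color channels
        mask = mask[:, :, None]
    filtered = np.fft.ifft2(np.fft.ifftshift(f * mask, axes=(0, 1)), axes=(0, 1))
    return np.real(filtered)

def random_mask_filter(image, keep_frac=0.5, seed=0):
    """Zero a random subset of Fourier coefficients (illustrative sketch of
    the randomly filtered variant; keep_frac is a hypothetical parameter)."""
    rng = np.random.default_rng(seed)
    f = np.fft.fft2(image, axes=(0, 1))
    mask = rng.random(f.shape[:2]) < keep_frac
    if image.ndim == 3:
        mask = mask[:, :, None]
    return np.real(np.fft.ifft2(f * mask, axes=(0, 1)))
```

Applying either function to every image in a training or test split yields datasets with identical labels and object content but different Fourier image statistics, which is the manipulation the generalization-gap experiments rely on.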


