Intriguing properties of neural networks

12/21/2013
by Christian Szegedy et al.

Deep neural networks are highly expressive models that have recently achieved state-of-the-art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis. This suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.
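The perturbation described above is found by searching for a small input change that maximizes the network's prediction error (the paper uses a box-constrained L-BFGS search over deep networks). As a rough illustration only, the sketch below applies a simpler sign-of-gradient step to a toy linear classifier; the weights and step size are hypothetical, and for a linear model the gradient of the score with respect to the input is just the weight vector:

```python
# Toy stand-in "network": a fixed linear binary classifier.
# All weights and values here are hypothetical, chosen for illustration;
# this is NOT the paper's L-BFGS procedure, just the core idea that a
# small, targeted input perturbation can flip the predicted class.
w = [0.5, -1.2, 0.8, 0.3]
b = 0.1

def score(x):
    # Linear score: w . x + b
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    # Class 1 if the score is positive, else class 0.
    return int(score(x) > 0)

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

x = [0.2, 0.4, -0.3, 0.1]   # clean input, correctly classified as class 0
eps = 0.25                  # perturbation budget (max change per coordinate)

# For this linear model, d(score)/dx_i = w_i, so stepping each coordinate
# by eps * sign(w_i) raises the score (i.e., the error on a class-0 input)
# as fast as possible under an L-infinity constraint.
x_adv = [xi + eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))   # the perturbed input is misclassified
```

Each coordinate of `x_adv` differs from `x` by at most 0.25, yet the predicted class flips; in the paper the analogous perturbations on image networks are small enough to be imperceptible to a human viewer.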


Related research

- Beneficial Perturbation Network for designing general adaptive artificial intelligence systems (09/27/2020)
- Statistical Mechanics of Deep Linear Neural Networks: The Back-Propagating Renormalization Group (12/07/2020)
- Improving Deep Neural Networks with Probabilistic Maxout Units (12/20/2013)
- What value do explicit high level concepts have in vision to language problems? (06/03/2015)
- Untangling in Invariant Speech Recognition (03/03/2020)
- Finding Input Characterizations for Output Properties in ReLU Neural Networks (03/09/2020)
- Interpretation of Deep Temporal Representations by Selective Visualization of Internally Activated Units (04/27/2020)
