Detecting Learning vs Memorization in Deep Neural Networks using Shared Structure Validation Sets

02/21/2018
by   Elias Chaibub Neto, et al.

The roles played by learning and memorization represent an important topic in deep learning research. Recent work on this subject has shown that the optimization behavior of DNNs trained on shuffled labels is qualitatively different from that of DNNs trained with real labels. Here, we propose a novel permutation approach that can differentiate memorization from learning in deep neural networks (DNNs) trained as usual, i.e., using the real labels to guide the learning rather than shuffled labels. The evaluation of whether the DNN has learned and/or memorized happens in a separate step, where we compare the predictive performance of a shallow classifier trained on the features learned by the DNN against multiple instances of the same classifier trained on the same inputs but with shuffled labels as outputs. By evaluating these shallow classifiers on validation sets that share structure with the training set, we are able to tell learning apart from memorization. Applying our permutation approach to multi-layer perceptrons and convolutional neural networks trained on image data corroborated many findings from other groups. Most importantly, our illustrations also uncovered interesting dynamic patterns in how DNNs memorize over increasing numbers of training epochs, and support the surprising result that DNNs are still able to learn, rather than only memorize, when trained with pure Gaussian noise as input.
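The comparison described in the abstract can be sketched as a permutation test: train a shallow classifier on the DNN-derived features with the real labels, then compare its validation performance against a null distribution built from classifiers trained on the same features but with shuffled labels. This is a minimal illustration only; it assumes the features have already been extracted from a trained DNN, uses scikit-learn logistic regression as the shallow classifier, and leaves the paper-specific construction of validation sets that share structure with the training set to the reader.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def permutation_memorization_test(feats_train, y_train, feats_val, y_val,
                                  n_perm=100, seed=0):
    """Compare a shallow classifier trained on real labels against
    classifiers trained on shuffled labels (hypothetical helper, not
    the authors' reference implementation)."""
    rng = np.random.default_rng(seed)

    # Observed score: shallow classifier on the DNN features, real labels.
    clf = LogisticRegression(max_iter=1000).fit(feats_train, y_train)
    observed = clf.score(feats_val, y_val)

    # Null distribution: same classifier, same inputs, shuffled labels.
    null_scores = []
    for _ in range(n_perm):
        y_shuf = rng.permutation(y_train)
        null_clf = LogisticRegression(max_iter=1000).fit(feats_train, y_shuf)
        null_scores.append(null_clf.score(feats_val, y_val))
    null_scores = np.array(null_scores)

    # If the observed score clearly exceeds the null distribution, the
    # features support genuine learning; if the two overlap on a validation
    # set that shares structure with the training set, good performance is
    # attributable to memorization.
    p_value = (1 + np.sum(null_scores >= observed)) / (1 + n_perm)
    return observed, null_scores, p_value
```

On synthetic features where the label is a deterministic function of one feature, the observed score separates sharply from the shuffled-label null, mimicking the "learning" outcome of the test.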


Related research:

- Neural Network Memorization Dissection (11/21/2019)
- Learning with Noisy Labels for Sentence-level Sentiment Classification (08/31/2019)
- Topological Measurement of Deep Neural Networks Using Persistent Homology (06/06/2021)
- Exploring the Common Principal Subspace of Deep Features in Neural Networks (10/06/2021)
- What Do Neural Networks Learn When Trained With Random Labels? (06/18/2020)
- Stop memorizing: A data-dependent regularization framework for intrinsic pattern learning (05/18/2018)
- Generative learning for deep networks (09/25/2017)
