Neural Network Memorization Dissection

11/21/2019
by Jindong Gu, et al.

Deep neural networks (DNNs) can easily fit a random labeling of the training data with zero training error. What, then, distinguishes DNNs trained with random labels from those trained with true labels? Our paper answers this question with two contributions. First, we study the memorization properties of DNNs: our empirical experiments shed light on how DNNs prioritize the learning of simple input patterns. Second, we propose a method to measure the similarity between what different DNNs have learned and memorized. With the proposed approach, we analyze and compare DNNs trained on data with true labels and DNNs trained on data with random labels. The analysis shows that DNNs have one way to learn and N ways to memorize. We also use gradient information to explain these analysis results.
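The opening claim, that an over-parameterized network can fit purely random labels to zero training error, can be illustrated with a minimal sketch. This is not the paper's experimental setup (which the abstract does not specify); it is a toy NumPy MLP trained by full-batch gradient descent on randomly labeled Gaussian inputs, with hyperparameters chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# 16 random inputs with purely random binary labels: there is no structure
# to "learn", so any fit of these labels is memorization.
X = rng.standard_normal((16, 5))
y = rng.integers(0, 2, size=16).astype(float)

# Over-parameterized one-hidden-layer MLP (5 -> 64 -> 1).
W1 = 0.5 * rng.standard_normal((5, 64)); b1 = np.zeros(64)
W2 = 0.5 * rng.standard_normal((64, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2).ravel()))  # sigmoid output
    return h, p

lr = 0.5
for _ in range(5000):
    h, p = forward(X)
    # Gradient of mean cross-entropy w.r.t. the output logit is (p - y) / n.
    g = (p - y)[:, None] / len(y)
    gh = (g @ W2.T) * (1.0 - h ** 2)          # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

_, p = forward(X)
train_acc = float(((p > 0.5) == (y > 0.5)).mean())
print(train_acc)
```

With 64 hidden units for only 16 points, the network has far more capacity than the data requires, and training accuracy climbs toward 100% even though the labels carry no information about the inputs.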
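The abstract proposes a similarity measure between what different DNNs have learned but does not state its form. As a generic stand-in (not the paper's method), one widely used representation-similarity measure is linear centered kernel alignment (CKA), sketched here; the activation matrices are synthetic placeholders.

```python
import numpy as np

def linear_cka(A, B):
    """Linear CKA between two representation matrices of shape (n_samples, dim)."""
    A = A - A.mean(axis=0)  # center each feature dimension
    B = B - B.mean(axis=0)
    num = np.linalg.norm(B.T @ A, "fro") ** 2
    den = np.linalg.norm(A.T @ A, "fro") * np.linalg.norm(B.T @ B, "fro")
    return num / den

rng = np.random.default_rng(1)
acts = rng.standard_normal((100, 32))  # stand-in for one layer's activations
# Orthogonal rotation of the same representation: CKA should be unchanged.
Q, _ = np.linalg.qr(rng.standard_normal((32, 32)))
rotated = acts @ Q

print(linear_cka(acts, acts))     # identical representations
print(linear_cka(acts, rotated))  # invariant to orthogonal transforms
```

Invariance to rotation is what makes such measures useful for comparing networks trained from different initializations, whose layers need not align coordinate-by-coordinate.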


Related research:

- On the Memorization Properties of Contrastive Learning (07/21/2021): Memorization studies of deep neural networks (DNNs) help to understand w...
- Detecting Learning vs Memorization in Deep Neural Networks using Shared Structure Validation Sets (02/21/2018): The roles played by learning and memorization represent an important top...
- What Do Neural Networks Learn When Trained With Random Labels? (06/18/2020): We study deep neural networks (DNNs) trained on natural image data with ...
- Neural Network Trojans Analysis and Mitigation from the Input Domain (02/13/2022): Deep Neural Networks (DNNs) can learn Trojans (or backdoors) from benign...
- HYDRA: Hypergradient Data Relevance Analysis for Interpreting Deep Neural Networks (02/04/2021): The behaviors of deep neural networks (DNNs) are notoriously resistant t...
- A Theoretical-Empirical Approach to Estimating Sample Complexity of DNNs (05/05/2021): This paper focuses on understanding how the generalization error scales ...
- Rethink the Connections among Generalization, Memorization and the Spectral Bias of DNNs (04/29/2020): Over-parameterized deep neural networks (DNNs) with sufficient capacity ...
