Can Neural Network Memorization Be Localized?

07/18/2023
by Pratyush Maini et al.

Recent efforts at explaining the interplay of memorization and generalization in deep overparametrized networks have posited that neural networks memorize "hard" examples in the final few layers of the model. Here, memorization refers to the ability to correctly predict atypical examples from the training set. In this work, we show that rather than being confined to individual layers, memorization is confined to a small set of neurons spread across various layers of the model. First, via three experimental sources of converging evidence, we find that most layers are redundant for the memorization of examples and that the layers which contribute to example memorization are, in general, not the final layers. The three sources are gradient accounting (measuring the contribution to the gradient norms from memorized and clean examples), layer rewinding (replacing specific weights of a converged model with those from earlier training checkpoints), and retraining (training the rewound layers only on clean examples). Second, we ask a more general question: can memorization be localized anywhere in a model? We discover that memorization is often confined to a small number of neurons or channels (around 5) of the model. Based on these insights, we propose a new form of dropout, example-tied dropout, which enables us to direct the memorization of examples to an a priori determined set of neurons. By dropping out these neurons, we are able to reduce the accuracy on memorized examples from 100% to 3%, while also reducing the generalization gap.
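The sketches below are not from the paper; they only illustrate, under stated assumptions, two of the procedures the abstract describes. First, a minimal sketch of the layer-rewinding probe, assuming PyTorch models with earlier checkpoints available as state dicts; the helper name rewind_layer and the layer_prefix argument are hypothetical.

    import copy


    def rewind_layer(model, earlier_checkpoint, layer_prefix):
        """Return a copy of `model` whose parameters under `layer_prefix`
        (e.g. "layer4.") are replaced with the earlier checkpoint's values,
        so accuracy on memorized vs. clean examples can be re-measured."""
        rewound = copy.deepcopy(model)
        state = rewound.state_dict()
        for name, tensor in earlier_checkpoint.items():
            if name.startswith(layer_prefix):
                state[name] = tensor.clone()
        rewound.load_state_dict(state)
        return rewound

Likewise, a minimal sketch of how example-tied dropout could be implemented, assuming each training example's dataset index is passed to the layer; the class name, arguments, and the exact way channels are tied to example indices are illustrative assumptions rather than the authors' released implementation.

    import torch
    import torch.nn as nn


    class ExampleTiedDropout(nn.Module):
        """Keeps a shared block of "generalization" channels active for every
        example and ties each training example (by dataset index) to a small
        fixed subset of the remaining "memorization" channels; at evaluation
        time only the shared channels are kept, so whatever was memorized in
        the example-tied channels can be switched off."""

        def __init__(self, num_channels, num_examples, num_gen,
                     mem_per_example, seed=0):
            super().__init__()
            g = torch.Generator().manual_seed(seed)
            mask = torch.zeros(num_examples, num_channels)
            mask[:, :num_gen] = 1.0  # generalization channels: always active
            for i in range(num_examples):
                extra = torch.randperm(num_channels - num_gen,
                                       generator=g)[:mem_per_example]
                mask[i, extra + num_gen] = 1.0  # this example's memorization channels
            self.register_buffer("mask", mask)
            self.num_gen = num_gen

        def forward(self, x, example_ids=None):
            if self.training and example_ids is not None:
                m = self.mask[example_ids]  # (batch, channels)
                return x * (m[:, :, None, None] if x.dim() == 4 else m)
            # Evaluation: drop the memorization channels, keep only shared ones.
            keep = torch.zeros(x.shape[1], device=x.device, dtype=x.dtype)
            keep[: self.num_gen] = 1.0
            return x * (keep[None, :, None, None] if x.dim() == 4 else keep)

In such a setup, the layer would sit after an intermediate feature map and receive the batch's dataset indices during training; switching the model to eval() removes the example-tied channels, mirroring the "dropping out these neurons" evaluation described in the abstract.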
