What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
Deep learning algorithms are well-known to have a propensity for fitting the training data very well and often fit even outliers and mislabeled data points. Such fitting requires memorization of training data labels, a phenomenon that has attracted significant research interest but has not been given a compelling explanation so far. A recent work of Feldman (2019) proposes a theoretical explanation for this phenomenon based on a combination of two insights. First, natural image and data distributions are (informally) known to be long-tailed, that is, to have a significant fraction of rare and atypical examples. Second, in a simple theoretical model, such memorization is necessary for achieving close-to-optimal generalization error when the data distribution is long-tailed. However, no direct empirical evidence for this explanation, or even an approach for obtaining such evidence, has been given. In this work we design experiments to test the key ideas in this theory. The experiments require estimating the influence of each training example on the accuracy at each test example, as well as the memorization values of training examples. Estimating these quantities directly is computationally prohibitive, but we show that closely related subsampled influence and memorization values can be estimated much more efficiently. Our experiments demonstrate the significant benefits of memorization for generalization on several standard benchmarks. They also provide quantitative and visually compelling evidence for the theory put forth in Feldman (2019).
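The subsampled memorization estimate described above can be sketched as follows: train many models on random subsets of the training data, then score each example by how much more often models that saw it predict its label correctly than models that did not. This is a minimal illustrative sketch, assuming the per-model predictions and subset memberships are precomputed; the array names and shapes are assumptions, not the paper's actual implementation.

```python
import numpy as np

def estimate_memorization(predictions, subsets, labels):
    """Estimate a subsampled memorization score per training example.

    predictions: (T, N) array; predictions[t, i] is model t's predicted
                 label on training example i.
    subsets:     (T, N) boolean array; True when example i was in
                 model t's random training subset.
    labels:      (N,) array of true labels.
    Returns an (N,) array: accuracy on example i among models trained
    on it, minus accuracy among models trained without it.
    """
    correct = predictions == labels[None, :]  # (T, N) correctness mask
    # Accuracy over models whose subset included example i ...
    acc_in = (correct & subsets).sum(0) / np.maximum(subsets.sum(0), 1)
    # ... minus accuracy over models whose subset excluded it.
    acc_out = (correct & ~subsets).sum(0) / np.maximum((~subsets).sum(0), 1)
    return acc_in - acc_out
```

An example with a high score is one the models essentially had to memorize: it is classified correctly only when it appears in the training subset.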