In Search of Robust Measures of Generalization

One of the principal scientific challenges in deep learning is explaining generalization, i.e., why the particular way the community now trains networks to achieve small training error also leads to small error on held-out data from the same population. It is widely appreciated that some worst-case theories – such as those based on the VC dimension of the class of predictors induced by modern neural network architectures – are unable to explain empirical performance. A large volume of work aims to close this gap, primarily by developing bounds on generalization error, optimization error, and excess risk. When evaluated empirically, however, most of these bounds are numerically vacuous. Focusing on generalization bounds, this work addresses the question of how to evaluate such bounds empirically. Jiang et al. (2020) recently described a large-scale empirical study aimed at uncovering potential causal relationships between bounds/measures and generalization. Building on their study, we highlight where their proposed methods can obscure failures and successes of generalization measures in explaining generalization. We argue that generalization measures should instead be evaluated within the framework of distributional robustness.
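The abstract's contrast between averaged evaluation (as in Jiang et al.'s study) and a distributionally robust evaluation can be sketched in miniature. The sign-error criterion, the "environments", and all numbers below are illustrative assumptions for exposition, not the paper's actual benchmark or data:

```python
# Hypothetical sketch (not the paper's benchmark): contrasting an average-case
# evaluation of a generalization measure with a distributionally robust one.
# Environments, models, and numbers are illustrative assumptions.

def sign_error(measure_vals, gen_gaps):
    """Fraction of model pairs that the measure orders inconsistently
    with the models' observed generalization gaps."""
    errors, total = 0, 0
    n = len(measure_vals)
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            if (measure_vals[i] - measure_vals[j]) * (gen_gaps[i] - gen_gaps[j]) < 0:
                errors += 1
    return errors / total

# Three "environments" (e.g., slices of a hyperparameter grid); each pairs a
# measure's values with observed generalization gaps for three models.
environments = [
    ([1.0, 2.0, 3.0], [0.1, 0.2, 0.3]),  # measure tracks the gap
    ([1.0, 2.0, 3.0], [0.3, 0.2, 0.1]),  # measure is anti-correlated
    ([1.0, 2.0, 3.0], [0.1, 0.3, 0.2]),  # mixed
]

errors = [sign_error(m, g) for m, g in environments]
average_case = sum(errors) / len(errors)  # averaging can mask the failure
robust_case = max(errors)                 # worst-case environment dominates

print(f"average-case error: {average_case:.3f}, robust error: {robust_case:.3f}")
```

Averaging over environments reports a middling error even though the measure is perfectly anti-correlated with generalization in one of them; the worst-case (robust) criterion surfaces exactly that failure mode.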


Related research

- Fantastic Generalization Measures and Where to Find Them (12/04/2019)
- On the Current State of Research in Explaining Ensemble Performance Using Margins (06/07/2019)
- Information-Theoretic Bounds on the Moments of the Generalization Error of Learning Algorithms (02/03/2021)
- Estimated VC dimension for risk bounds (11/15/2011)
- Gi and Pal Scores: Deep Neural Network Generalization Statistics (04/08/2021)
- On the Insufficiency of the Large Margins Theory in Explaining the Performance of Ensemble Methods (06/10/2019)
- Explaining generalization in deep learning: progress and fundamental limits (10/17/2021)
