Sensitivity and Generalization in Neural Networks: an Empirical Study

by Roman Novak et al.

In practice, large over-parameterized neural networks are often found to generalize better than their smaller counterparts, an observation that appears to conflict with classical notions of function complexity, which typically favor smaller models. In this work, we investigate this tension between complexity and generalization through an extensive empirical exploration of two natural metrics of complexity related to sensitivity to input perturbations. Our experiments survey thousands of models with various fully-connected architectures, optimizers, and other hyper-parameters, as well as four different image classification datasets. We find that trained neural networks are more robust to input perturbations in the vicinity of the training data manifold, as measured by the norm of the input-output Jacobian of the network, and that this norm correlates well with generalization. We further establish that factors associated with poor generalization, such as full-batch training or training on random labels, correspond to lower robustness, while factors associated with good generalization, such as data augmentation and ReLU non-linearities, give rise to more robust functions. Finally, we demonstrate that the input-output Jacobian norm can be predictive of generalization at the level of individual test points.
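To make the sensitivity metric concrete, here is a minimal sketch (not the authors' code) of how the Frobenius norm of the input-output Jacobian can be evaluated at individual input points for a fully-connected ReLU network. The network sizes, function names, and the choice to differentiate the pre-softmax logits are illustrative assumptions; the paper studies related variants of this quantity.

```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialize a small fully-connected ReLU network (illustrative sizes)."""
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (d_in, d_out)) * jnp.sqrt(2.0 / d_in)
        params.append((w, jnp.zeros(d_out)))
    return params

def mlp(params, x):
    """Forward pass: ReLU hidden layers, linear (pre-softmax) output."""
    for w, b in params[:-1]:
        x = jax.nn.relu(x @ w + b)
    w, b = params[-1]
    return x @ w + b

def jacobian_norm(params, x):
    """Frobenius norm of the input-output Jacobian at a single point x."""
    jac = jax.jacobian(lambda x_: mlp(params, x_))(x)  # shape: (out_dim, in_dim)
    return jnp.linalg.norm(jac)  # Frobenius norm by default for a matrix

key = jax.random.PRNGKey(0)
params = init_mlp(key, [784, 256, 256, 10])  # MNIST-sized input, hypothetical widths

# Per-point sensitivity score; vmap evaluates it over a batch of test points.
x = jax.random.normal(key, (784,))
print(jacobian_norm(params, x))
batch = jax.random.normal(key, (32, 784))
print(jax.vmap(lambda xi: jacobian_norm(params, xi))(batch))
```

Evaluated near the data manifold (e.g. at held-out test points rather than random inputs, as above), a per-point score of this form is the kind of quantity the abstract describes as tracking robustness and, at the level of individual test points, generalization.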

