Learning sparse features can lead to overfitting in neural networks

by Leonardo Petrini et al.

It is widely believed that the success of deep networks lies in their ability to learn a meaningful representation of the features of the data. Yet, understanding when and how this feature learning improves performance remains a challenge: for example, it is beneficial for modern architectures trained to classify images, whereas it is detrimental for fully-connected networks trained on the same task and data. Here we propose an explanation for this puzzle by showing that feature learning can perform worse than lazy training (via a random feature kernel or the NTK) because the former can lead to a sparser neural representation. Although sparsity is known to be essential for learning anisotropic data, it is detrimental when the target function is constant or smooth along certain directions of input space. We illustrate this phenomenon in two settings: (i) regression of Gaussian random functions on the d-dimensional unit sphere and (ii) classification of benchmark image datasets. For (i), we compute the scaling of the generalization error with the number of training points and show that methods that do not learn features generalize better, even when the dimension of the input space is large. For (ii), we show empirically that learned features can indeed lead to sparse, and thereby less smooth, representations of the image predictors. This fact plausibly deteriorates performance, which is known to correlate with smoothness along diffeomorphisms.
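The "lazy training" baseline the abstract refers to keeps the hidden-layer weights fixed at their random initialization and fits only a linear readout, so no features are learned. A minimal sketch of that regime, on the abstract's setting (i) of regression on the unit sphere, is below. The helper names (`sample_sphere`, `random_feature_regression`), the ReLU feature map, and the simple linear target (constant along all but one input direction) are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sphere(n, d, rng):
    # Sample n points uniformly on the unit sphere S^{d-1}.
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def random_feature_regression(X_train, y_train, X_test,
                              width=1024, ridge=1e-6, rng=rng):
    # Lazy regime sketch: ReLU features with *fixed* random weights W;
    # only the linear readout beta is fit, by ridge regression.
    d = X_train.shape[1]
    W = rng.standard_normal((d, width)) / np.sqrt(d)
    Phi_tr = np.maximum(X_train @ W, 0.0)   # random features, never trained
    Phi_te = np.maximum(X_test @ W, 0.0)
    A = Phi_tr.T @ Phi_tr + ridge * np.eye(width)
    beta = np.linalg.solve(A, Phi_tr.T @ y_train)
    return Phi_te @ beta

d, n_train, n_test = 8, 512, 256
X_tr = sample_sphere(n_train, d, rng)
X_te = sample_sphere(n_test, d, rng)

# Toy target: depends on one direction only, i.e. constant along the
# remaining d-1 directions -- the kind of invariance where, per the
# abstract, a sparse learned representation is claimed to hurt.
f = lambda X: X[:, 0]
y_tr, y_te = f(X_tr), f(X_te)

y_hat = random_feature_regression(X_tr, y_tr, X_te)
mse = np.mean((y_hat - y_te) ** 2)
print(f"test MSE of lazy (random-feature) predictor: {mse:.4f}")
```

In this regime the predictor is a kernel method: as the width grows, the fixed random features induce the random-feature kernel (and, with a different scaling argument, the NTK) mentioned in the abstract.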



