Learning sparse features can lead to overfitting in neural networks

06/24/2022
by Leonardo Petrini et al.

It is widely believed that the success of deep networks lies in their ability to learn a meaningful representation of the features of the data. Yet, understanding when and how this feature learning improves performance remains a challenge: for example, it is beneficial for modern architectures trained to classify images, whereas it is detrimental for fully-connected networks trained for the same task on the same data. Here we propose an explanation for this puzzle, by showing that feature learning can perform worse than lazy training (via a random feature kernel or the NTK) because the former can lead to a sparser neural representation. Although sparsity is known to be essential for learning anisotropic data, it is detrimental when the target function is constant or smooth along certain directions of input space. We illustrate this phenomenon in two settings: (i) regression of Gaussian random functions on the d-dimensional unit sphere and (ii) classification of benchmark image datasets. For (i), we compute the scaling of the generalization error with the number of training points and show that methods that do not learn features generalize better, even when the dimension of the input space is large. For (ii), we show empirically that learning features can indeed lead to sparse, and thereby less smooth, representations of the image predictors. This plausibly explains the deterioration in performance, since performance is known to correlate with the smoothness of the predictor along diffeomorphisms.
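
The comparison in setting (i) can be illustrated with a small numerical sketch. The snippet below is not the authors' code; it contrasts ridge regression on fixed random ReLU features (a proxy for lazy training) with a small fully-connected network whose weights are all trained by gradient descent (feature learning), on a synthetic smooth target defined on the unit sphere. All widths, sample sizes, and learning rates are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 8, 512, 2048

def sample_sphere(n, d):
    """Sample n points uniformly on the unit sphere in R^d."""
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Target: a smooth random function on the sphere, built from a wide random
# network (a simple stand-in for the Gaussian random fields of setting (i)).
p_teacher = 2048
W_t = rng.standard_normal((d, p_teacher)) / np.sqrt(d)
a_t = rng.standard_normal(p_teacher) / np.sqrt(p_teacher)
def f_star(x):
    return np.tanh(x @ W_t) @ a_t

X_tr, X_te = sample_sphere(n_train, d), sample_sphere(n_test, d)
y_tr, y_te = f_star(X_tr), f_star(X_te)

# --- Lazy regime: ridge regression on fixed random ReLU features ----------
p = 2048
W = rng.standard_normal((d, p)) / np.sqrt(d)
def phi(x):
    return np.maximum(x @ W, 0.0) / np.sqrt(p)
Phi_tr, Phi_te = phi(X_tr), phi(X_te)
lam = 1e-6
alpha = np.linalg.solve(Phi_tr.T @ Phi_tr + lam * np.eye(p), Phi_tr.T @ y_tr)
mse_lazy = np.mean((Phi_te @ alpha - y_te) ** 2)

# --- Feature-learning regime: train both layers of a small net by GD ------
h, lr, steps = 256, 0.1, 5000
W1 = rng.standard_normal((d, h)) / np.sqrt(d)
a = rng.standard_normal(h) / np.sqrt(h)
for _ in range(steps):
    pre = X_tr @ W1                          # pre-activations, (n_train, h)
    act = np.maximum(pre, 0.0)               # ReLU activations
    g = (act @ a - y_tr) / n_train           # d(0.5*MSE)/d(prediction)
    grad_a = act.T @ g
    grad_W1 = X_tr.T @ (np.outer(g, a) * (pre > 0))
    a -= lr * grad_a
    W1 -= lr * grad_W1
mse_fl = np.mean((np.maximum(X_te @ W1, 0.0) @ a - y_te) ** 2)

print(f"lazy (random features) test MSE: {mse_lazy:.4f}")
print(f"feature learning       test MSE: {mse_fl:.4f}")
```

Comparing the two test errors while varying the number of training points gives a rough, hands-on version of the learning-curve comparison described in the abstract; the paper's quantitative scaling results are not reproduced by this toy setup.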


Related research

10/25/2022
The Curious Case of Benign Memorization
Despite the empirical advances of deep learning across a variety of lear...

03/01/2022
Contrasting random and learned features in deep Bayesian linear regression
Understanding how feature learning affects generalization is among the f...

12/28/2022
Feature learning in neural networks and kernel machines that recursively learn features
Neural networks have achieved impressive results on many technological a...

04/08/2019
Feature Learning Viewpoint of AdaBoost and a New Algorithm
The AdaBoost algorithm has the superiority of resisting overfitting. Und...

12/12/2020
Learning Representations from Temporally Smooth Data
Events in the real world are correlated across nearby points in time, an...

09/09/2023
Approximation Results for Gradient Descent trained Neural Networks
The paper contains approximation guarantees for neural networks that are...

03/15/2023
The Benefits of Mixup for Feature Learning
Mixup, a simple data augmentation method that randomly mixes two data po...
