More data or more parameters? Investigating the effect of data structure on generalization

03/09/2021
by Stéphane d'Ascoli, et al.

One of the central features of deep learning is the generalization ability of neural networks, which seems to improve relentlessly with over-parametrization. In this work, we investigate how properties of the data impact the test error as a function of the number of training examples and the number of training parameters; in other words, how the structure of the data shapes the "generalization phase space". We first focus on the random features model trained in the teacher-student scenario. The synthetic input data is composed of independent blocks, which allows us to tune the saliency of low-dimensional structures and their relevance with respect to the target function. Using methods from statistical physics, we obtain an analytical expression for the train and test errors for both regression and classification tasks in the high-dimensional limit. The derivation allows us to show that noise in the labels and strong anisotropy of the input data play similar roles in shaping the test error. Both promote an asymmetry of the phase space in which increasing the number of training examples improves generalization more than increasing the number of training parameters. Our analytical insights are confirmed by numerical experiments involving fully-connected networks trained on MNIST and CIFAR10.
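To make the setup concrete, here is a minimal, self-contained sketch (not the authors' code) of the model the abstract describes: a random-features student fitted by ridge regression on labels produced by a linear teacher, with inputs drawn from two independent blocks whose variances control the anisotropy. The specific dimensions, the ReLU nonlinearity, the noise level, and the ridge penalty below are illustrative assumptions, not values from the paper.

```python
# Sketch of a random-features model trained in a teacher-student scenario
# on block-structured (anisotropic) synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Block-structured inputs: a high-variance "salient" block and a low-variance bulk block.
n, d_salient, d_bulk = 1000, 10, 490
d = d_salient + d_bulk
X = np.hstack([
    2.0 * rng.standard_normal((n, d_salient)),   # salient block (large variance)
    0.5 * rng.standard_normal((n, d_bulk)),      # bulk block (small variance)
])

# Linear teacher that only loads on the salient block, plus label noise
# (the abstract argues label noise and strong anisotropy affect the test error similarly).
w_teacher = np.concatenate([rng.standard_normal(d_salient), np.zeros(d_bulk)])
y = X @ w_teacher / np.sqrt(d) + 0.1 * rng.standard_normal(n)

# Random-features student: fixed random projection, nonlinearity, then ridge regression.
p = 800                                           # number of random features ("parameters")
F = rng.standard_normal((d, p)) / np.sqrt(d)
Z = np.maximum(X @ F, 0.0)                        # ReLU random features
ridge = 1e-3
a = np.linalg.solve(Z.T @ Z + ridge * np.eye(p), Z.T @ y)

# Sweeping n and p and recording this test error traces out the "generalization phase space".
X_test = np.hstack([
    2.0 * rng.standard_normal((n, d_salient)),
    0.5 * rng.standard_normal((n, d_bulk)),
])
y_test = X_test @ w_teacher / np.sqrt(d)
Z_test = np.maximum(X_test @ F, 0.0)
print("test MSE:", np.mean((Z_test @ a - y_test) ** 2))
```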

Related research

06/08/2021 · The Randomness of Input Data Spaces is an A Priori Predictor for Generalization
Over-parameterized models can perfectly learn various types of data dist...

12/01/2022 · The Effect of Data Dimensionality on Neural Network Prunability
Practitioners prune neural networks for efficiency gains and generalizat...

06/05/2020 · Triple descent and the two kinds of overfitting: Where & why do they appear?
A recent line of research has highlighted the existence of a double desc...

05/30/2019 · Meta Dropout: Learning to Perturb Features for Generalization
A machine learning model that generalizes well should obtain low errors ...

02/21/2021 · Synthesizing Irreproducibility in Deep Networks
The success and superior performance of deep networks is spreading their...

11/11/2019 · Evaluating Combinatorial Generalization in Variational Autoencoders
We evaluate the ability of variational autoencoders to generalize to uns...

05/26/2019 · Asymptotic learning curves of kernel methods: empirical data v.s. Teacher-Student paradigm
How many training data are needed to learn a supervised task? It is ofte...
