Data pruning and neural scaling laws: fundamental limitations of score-based algorithms

02/14/2023 · by Fadhel Ayed et al.

Data pruning algorithms are commonly used to reduce the memory and computational cost of optimization. Recent empirical results show that random data pruning remains a strong baseline and outperforms most existing data pruning methods in the high-compression regime, i.e., when 30% or less of the data is kept. This regime has recently attracted considerable interest because of the role of data pruning in improving the so-called neural scaling laws: in [Sorscher et al.], the authors showed that high-quality data pruning algorithms are needed to beat the sample power law. In this work, we focus on score-based data pruning algorithms and show, theoretically and empirically, why such algorithms fail in the high-compression regime. We demonstrate “No Free Lunch” theorems for data pruning and present calibration protocols that use randomization to enhance the performance of existing pruning algorithms in this regime.
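To make the contrast concrete, below is a minimal sketch of score-based pruning next to a randomized calibration of it. This is an illustration under assumptions, not the paper's actual protocol: `scores` stands in for any precomputed per-example difficulty score (e.g., EL2N or forgetting counts, with higher meaning more informative), and `random_fraction`, which controls how much of the kept budget is filled uniformly at random, is a hypothetical knob introduced here for illustration.

```python
import numpy as np

def score_based_prune(scores, keep_fraction):
    """Pure score-based pruning: keep the top-scoring examples."""
    n_keep = int(len(scores) * keep_fraction)
    # argsort is ascending, so the highest scores are at the end.
    return np.argsort(scores)[-n_keep:]

def calibrated_prune(scores, keep_fraction, random_fraction, rng):
    """Randomized calibration (illustrative): fill part of the kept
    budget with top-scored examples and the rest with a uniformly
    random subset of the remaining data.
    """
    n_keep = int(len(scores) * keep_fraction)
    n_random = int(n_keep * random_fraction)
    # Top-scored portion of the budget.
    top = np.argsort(scores)[-(n_keep - n_random):]
    # Random portion, drawn from the examples not already kept.
    rest = np.setdiff1d(np.arange(len(scores)), top)
    rand = rng.choice(rest, size=n_random, replace=False)
    return np.concatenate([top, rand])

rng = np.random.default_rng(0)
scores = rng.standard_normal(10_000)  # stand-in scores
kept = calibrated_prune(scores, keep_fraction=0.2,
                        random_fraction=0.5, rng=rng)
print(len(kept))  # ~2,000 retained examples
```

At high compression (small `keep_fraction`), setting `random_fraction` near 1 recovers the random-pruning baseline, while 0 recovers pure score-based selection; the calibration idea is to interpolate between the two.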


Related research

06/29/2022 · Beyond neural scaling laws: beating power law scaling via data pruning
Widely observed neural scaling laws, in which error falls off as a power...

06/17/2021 · Pruning Randomly Initialized Neural Networks with Iterative Randomization
Pruning the weights of randomly initialized neural networks plays an imp...

04/19/2023 · Network Pruning Spaces
Network pruning techniques, including weight pruning and filter pruning,...

07/05/2021 · Connectivity Matters: Neural Network Pruning Through the Lens of Effective Sparsity
Neural network pruning is a fruitful area of research with surging inter...

08/17/2021 · Scaling Laws for Deep Learning
Running faster will only get you so far – it is generally advisable to f...

12/10/2022 · Weakest link pruning of a dendrogram
Hierarchical clustering is a popular method for identifying distinct gro...

03/26/2023 · Does `Deep Learning on a Data Diet' reproduce? Overall yes, but GraNd at Initialization does not
The paper 'Deep Learning on a Data Diet' by Paul et al. (2021) introduce...
