Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning

05/28/2023
by Patrik Okanovic, et al.

Methods for carefully selecting or generating a small set of training data to learn from, i.e., data pruning, coreset selection, and data distillation, have been shown to be effective in reducing the ever-increasing cost of training neural networks. Behind this success are rigorously designed strategies for identifying informative training examples out of large datasets. However, these strategies come with additional computational costs associated with subset selection or data distillation before training begins, and furthermore, many are shown to even underperform random sampling in high data-compression regimes. As such, many data pruning, coreset selection, or distillation methods may not reduce 'time-to-accuracy', which has become a critical efficiency measure for training deep neural networks over large datasets. In this work, we revisit a powerful yet overlooked random sampling strategy to address these challenges and introduce an approach called Repeated Sampling of Random Subsets (RSRS, or RS2), where we randomly sample the subset of training data for each epoch of model training. We test RS2 against thirty state-of-the-art data pruning and data distillation methods across four datasets, including ImageNet. Our results demonstrate that RS2 significantly reduces time-to-accuracy compared to existing techniques. For example, when training on ImageNet in the high-compression regime (using less than 10% of the dataset per epoch), RS2 yields accuracy improvements of up to 29% over competing methods while offering a runtime reduction of 7x. Beyond this meta-study, we provide a convergence analysis for RS2 and discuss its generalization capability. The primary goal of our work is to establish RS2 as a competitive baseline for future data selection or distillation techniques aimed at efficient training.
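The core idea described above is simply to draw a fresh random subset of the training data at the start of every epoch, rather than committing to a single pruned or distilled subset before training. Below is a minimal, hypothetical sketch of what such per-epoch resampling could look like in PyTorch; the subset ratio, model, optimizer, and CIFAR-10 dataset are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of per-epoch random subset sampling (RS2-style) in PyTorch.
# The ratio r, model, optimizer, and dataset are illustrative assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

r = 0.10            # fraction of the training set used per epoch (assumed)
num_epochs = 10
batch_size = 128

train_set = datasets.CIFAR10(
    root="./data", train=True, download=True,
    transform=transforms.ToTensor(),
)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

subset_size = int(r * len(train_set))
for epoch in range(num_epochs):
    # Re-sample a fresh random subset at the start of every epoch,
    # sampling without replacement within the epoch.
    indices = torch.randperm(len(train_set))[:subset_size].tolist()
    loader = DataLoader(Subset(train_set, indices),
                        batch_size=batch_size, shuffle=True)

    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In this sketch, the only per-epoch selection cost is a random permutation of indices, which is consistent with the abstract's claim that, unlike pruning or distillation pipelines, the approach adds no expensive selection step before training.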


