Using Small Proxy Datasets to Accelerate Hyperparameter Search

06/12/2019
by Sam Shleifer, et al.

One of the biggest bottlenecks in a machine learning workflow is waiting for models to train. Depending on the available computing resources, it can take days to weeks to train a neural network on a large dataset with many classes, such as ImageNet. For researchers experimenting with new algorithmic approaches, this is impractically time-consuming and costly. We aim to generate smaller "proxy datasets" on which experiments are cheaper to run but whose results are highly correlated with experimental results on the full dataset. We generate these proxy datasets by randomly sampling examples or classes, by training on only the easiest or hardest examples, and by training on synthetic examples generated by "data distillation". We compare these techniques to the more widely used baseline of training on the full dataset for fewer epochs. For each proxying strategy, we estimate "proxy quality": how much of the variance in experimental results on the full dataset can be explained by experimental results on the proxy dataset. Experiments on Imagenette and Imagewoof (Howard, 2019) show that running hyperparameter search on the easiest 10% of examples explains a substantial share of the variance in experiment results on the target task, and using the easiest 50% of examples can explain 95% of that variance. These "easy" proxies are higher quality than training on the full dataset for a reduced number of epochs (at equivalent computational cost) and, unexpectedly, higher quality than proxies constructed from the hardest examples. Without access to a trained model, researchers can improve proxy quality by restricting the subset to fewer classes; proxies built on half the classes are higher quality than those with an equivalent number of examples spread across all classes.
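As a concrete illustration of the proxy-quality measure described above, the sketch below builds "easy" and random proxy subsets and computes how much of the variance in full-dataset results is explained by proxy results (the R^2 of a linear fit across hyperparameter settings). This is a minimal sketch, not the authors' code: the loss-based difficulty score, the helper names (easiest_subset, random_subset, proxy_quality), and the stand-in sweep results are assumptions made for illustration.

import numpy as np


def easiest_subset(per_example_loss, fraction):
    """Indices of the `fraction` of examples with the lowest reference-model loss
    (low loss is used here as a stand-in notion of an 'easy' example)."""
    n_keep = int(len(per_example_loss) * fraction)
    return np.argsort(per_example_loss)[:n_keep]


def random_subset(n_examples, fraction, seed=0):
    """Uniformly sampled subset of example indices (the random-sampling proxy)."""
    rng = np.random.default_rng(seed)
    return rng.choice(n_examples, size=int(n_examples * fraction), replace=False)


def proxy_quality(proxy_scores, full_scores):
    """R^2: fraction of variance in full-dataset results explained by a linear
    fit to the proxy-dataset results over the same hyperparameter settings."""
    proxy_scores = np.asarray(proxy_scores)
    full_scores = np.asarray(full_scores)
    slope, intercept = np.polyfit(proxy_scores, full_scores, deg=1)
    predicted = slope * proxy_scores + intercept
    residual = np.sum((full_scores - predicted) ** 2)
    total = np.sum((full_scores - np.mean(full_scores)) ** 2)
    return 1.0 - residual / total


if __name__ == "__main__":
    # Example: select the easiest 10% of examples by per-example loss from a
    # reference model, or a random 10% (loss values here are random stand-ins).
    losses = np.random.default_rng(1).random(1000)
    easy_idx = easiest_subset(losses, fraction=0.10)
    rand_idx = random_subset(len(losses), fraction=0.10)
    print(f"easy proxy: {len(easy_idx)} examples, random proxy: {len(rand_idx)} examples")

    # In a real experiment, each entry would be the validation accuracy obtained
    # by training one hyperparameter setting on the proxy or on the full dataset.
    # These numbers are stand-ins so the sketch runs end to end.
    full_scores = [0.71, 0.78, 0.83, 0.80, 0.62]
    proxy_scores = [0.64, 0.70, 0.76, 0.74, 0.55]
    print(f"proxy quality (R^2): {proxy_quality(proxy_scores, full_scores):.3f}")

A proxy with R^2 close to 1 ranks hyperparameter settings nearly the same way the full dataset does, so a sweep run on it is a cheap but reliable stand-in for the full experiment.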


