We need to talk about random seeds

10/24/2022
by Steven Bethard, et al.

Modern neural network libraries all accept a random seed as a hyperparameter, typically used to determine the initial state of the model parameters. This opinion piece argues that there are some safe uses for random seeds: as part of the hyperparameter search to select a good model, creating an ensemble of several models, or measuring the sensitivity of the training algorithm to the random seed hyperparameter. It argues that some uses for random seeds are risky: using a fixed random seed for "replicability" and varying only the random seed to create score distributions for performance comparison. An analysis of 85 recent publications from the ACL Anthology finds that more than 50...
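One of the "safe uses" the abstract names, measuring the sensitivity of a training algorithm to the random seed, can be sketched in plain Python. This is a toy illustration, not code from the paper: the dataset, the perceptron model, and the seed range are all assumptions chosen to keep the example self-contained. The seed controls only the weight initialization, mirroring the way neural network libraries use it.

```python
import random
import statistics

def make_data(n=200):
    # Fixed synthetic dataset: label is 1 when x0 + x1 > 1.
    # The data seed is held constant; only the model seed varies below.
    rng = random.Random(0)
    xs = [(rng.random(), rng.random()) for _ in range(n)]
    ys = [1 if x0 + x1 > 1 else 0 for x0, x1 in xs]
    return xs, ys

def train_and_score(seed, epochs=20, lr=0.1):
    # The seed determines the initial model parameters, as in the
    # "random seed" hyperparameter of neural network libraries.
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = rng.uniform(-1, 1)
    xs, ys = make_data()
    split = len(xs) // 2  # first half: train, second half: test
    for _ in range(epochs):
        for (x0, x1), y in zip(xs[:split], ys[:split]):
            pred = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            err = y - pred  # classic perceptron update
            w[0] += lr * err * x0
            w[1] += lr * err * x1
            b += lr * err
    correct = sum(
        1
        for (x0, x1), y in zip(xs[split:], ys[split:])
        if (1 if w[0] * x0 + w[1] * x1 + b > 0 else 0) == y
    )
    return correct / split

# Report the score distribution over several seeds, rather than a
# single number from one fixed seed.
scores = [train_and_score(seed) for seed in range(10)]
print(f"mean={statistics.mean(scores):.3f} stdev={statistics.stdev(scores):.3f}")
```

Reporting the mean and standard deviation over seeds characterizes the training algorithm's stability; by contrast, the risky pattern the abstract warns about would be picking one fixed seed and reporting that single score as "the" result.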


Related research:

- Bayesian Optimization Allowing for Common Random Numbers (10/21/2019): Bayesian optimization is a powerful tool for expensive stochastic black-...
- Reflection on modern methods: Good practices for applied statistical learning in epidemiology (06/12/2020): Statistical learning (SL) includes methods that extract knowledge from c...
- On the discovery of the seed in uniform attachment trees (10/01/2018): We investigate the size of vertex confidence sets for including part of ...
- Rare-Seed Generation for Fuzzing (12/18/2022): Starting with a random initial seed, fuzzers search for inputs that trig...
- On Model Stability as a Function of Random Seed (09/23/2019): In this paper, we focus on quantifying model stability as a function of ...
- To tree or not to tree? Assessing the impact of smoothing the decision boundaries (10/07/2022): When analyzing a dataset, it can be useful to assess how smooth the deci...
- Seed Kernel Counting using Domain Randomization and Object Tracking Neural Networks (08/10/2023): High-throughput phenotyping (HTP) of seeds, also known as seed phenotypi...
