Guaranteeing Reproducibility in Deep Learning Competitions

05/12/2020
by Brandon Houghton, et al.

To encourage the development of methods with reproducible and robust training behavior, we propose a challenge paradigm in which competitors are evaluated directly on the performance of their learning procedures rather than on pre-trained agents. Because competition organizers re-train the proposed methods in a controlled setting, they can guarantee reproducibility and, by retraining submissions on a held-out test set, help ensure generalization beyond the environments on which the methods were originally trained.
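The evaluation protocol described above can be sketched as a small harness. Everything here is a hypothetical illustration, not the organizers' actual code: `evaluate_submission`, `toy_train`, and the dictionary-based environments are stand-ins assumed for the sketch. The key idea is that the organizers, not the competitor, run training from scratch under fixed seeds on held-out environments, so the reported score measures the learning procedure itself.

```python
import random
import statistics

def evaluate_submission(train_fn, held_out_envs, seeds=(0, 1, 2)):
    """Retrain the submitted learning procedure from scratch for every
    held-out environment and seed. Scoring the freshly trained agents
    (rather than an agent shipped by the competitor) is what lets the
    organizers vouch for reproducibility and generalization."""
    scores = []
    for env in held_out_envs:
        for seed in seeds:
            agent = train_fn(env, seed)   # organizers run training themselves
            scores.append(agent(env))     # evaluate the freshly trained agent
    return statistics.mean(scores)

# --- toy stand-ins (hypothetical competitor submission) ---
def toy_train(env, seed):
    rng = random.Random(seed)             # deterministic given the seed
    bias = rng.uniform(-0.1, 0.1)
    return lambda e: e["target"] + bias   # "agent" evaluated on the env

held_out = [{"target": 1.0}, {"target": 2.0}]
score = evaluate_submission(toy_train, held_out)
```

Because every source of randomness is seeded inside the harness, rerunning the evaluation yields an identical score, which is the controlled-setting guarantee the abstract refers to.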


