A Step Toward Quantifying Independently Reproducible Machine Learning Research

09/14/2019
by Edward Raff, et al.

What makes a paper independently reproducible? Debates about reproducibility tend to center on intuition or assumption rather than empirical evidence. Our field focuses on releasing code, which is important but not sufficient for determining reproducibility. We take a first step toward a quantifiable answer by manually attempting to implement 255 papers published from 1984 to 2017, recording features of each paper, and performing a statistical analysis of the results. For each paper, we did not look at the authors' code, if released, in order to prevent bias toward discrepancies between the code and the paper.
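A minimal sketch of the kind of analysis this methodology supports: recorded per-paper features can be compared against reproduction outcomes. All records and feature names below are fabricated for illustration; they are not the study's actual data or feature set.

```python
# Hypothetical per-paper records: each paper gets recorded features
# and a binary reproduction outcome (names and values are illustrative).
papers = [
    {"num_equations": 5,  "pseudocode": True,  "reproduced": True},
    {"num_equations": 30, "pseudocode": False, "reproduced": False},
    {"num_equations": 8,  "pseudocode": True,  "reproduced": True},
    {"num_equations": 25, "pseudocode": False, "reproduced": True},
    {"num_equations": 40, "pseudocode": True,  "reproduced": False},
]

def reproduction_rate(records, predicate):
    """Fraction of records matching `predicate` that were reproduced.

    Returns None when no record matches, so callers can tell an
    empty subgroup apart from a 0% reproduction rate.
    """
    matched = [r for r in records if predicate(r)]
    if not matched:
        return None
    return sum(r["reproduced"] for r in matched) / len(matched)

# Compare subgroups on one recorded feature.
with_pseudo = reproduction_rate(papers, lambda r: r["pseudocode"])
without_pseudo = reproduction_rate(papers, lambda r: not r["pseudocode"])
print(f"reproduced with pseudocode:    {with_pseudo:.2f}")
print(f"reproduced without pseudocode: {without_pseudo:.2f}")
```

In practice such subgroup rates would feed into proper significance tests rather than raw comparisons; the sketch only shows the feature-recording and aggregation step.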

Related research:
- Improving Reproducibility in Machine Learning Research (A Report from the NeurIPS 2019 Reproducibility Program) (03/27/2020)
- Toward Reusable Science with Readable Code and Reproducibility (09/21/2021)
- Code Replicability in Computer Graphics (05/01/2020)
- Reproducibility in Machine Learning for Health (07/02/2019)
- A Siren Song of Open Source Reproducibility (04/09/2022)
- Creating optimal conditions for reproducible data analysis in R with 'fertile' (08/18/2020)
- Publishing Identifiable Experiment Code And Configuration Is Important, Good and Easy (04/10/2012)