Towards Inferential Reproducibility of Machine Learning Research

02/08/2023
by Michael Hagmann et al.

Reliability of machine learning evaluation – the consistency of observed evaluation scores across replicated model training runs – is affected by several sources of nondeterminism which can be regarded as measurement noise. Current tendencies to remove noise in order to enforce reproducibility of research results neglect inherent nondeterminism at the implementation level and disregard crucial interaction effects between algorithmic noise factors and data properties. This limits the scope of conclusions that can be drawn from such experiments. Instead of removing noise, we propose to incorporate several sources of variance, including their interaction with data properties, into an analysis of significance and reliability of machine learning evaluation, with the aim of drawing inferences beyond particular instances of trained models. We show how to use linear mixed effects models (LMEMs) to analyze performance evaluation scores, and how to conduct statistical inference with a generalized likelihood ratio test (GLRT). This allows us to incorporate arbitrary sources of noise, such as meta-parameter variations, into statistical significance testing, and to assess performance differences conditional on data properties. Furthermore, a variance component analysis (VCA) enables an analysis of the contribution of each noise source to overall variance, and the computation of a reliability coefficient as the ratio of substantial to total variance.
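To make the proposed analysis concrete, the following is a minimal sketch, not the authors' implementation, of how an LMEM fit, a GLRT for a fixed effect, and a variance component summary could look in Python with statsmodels and scipy. The toy data (two hypothetical systems, "baseline" and "proposed", each retrained under 20 random seeds), the column names, and the simple random-intercept structure are all illustrative assumptions; the paper's full method additionally conditions on data properties and further noise sources.

```python
# Sketch only: LMEM-based significance test and variance components for
# replicated evaluation scores. Data layout, effect sizes, and the
# random-intercept structure are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Toy data: two systems, each evaluated across 20 replicated training runs.
seeds = np.repeat(np.arange(20), 2)                    # run (seed) identifier
systems = np.tile(["baseline", "proposed"], 20)
run_noise = rng.normal(0.0, 1.0, 20)[seeds]            # run-level noise
score = 70 + (systems == "proposed") * 0.8 + run_noise + rng.normal(0, 0.5, 40)
df = pd.DataFrame({"score": score, "system": systems, "seed": seeds})

# Full model: fixed effect for system, random intercept per training run.
# ML fits (reml=False) are required for a likelihood ratio test.
full = smf.mixedlm("score ~ system", df, groups=df["seed"]).fit(reml=False)
# Reduced model: drop the fixed effect under test.
reduced = smf.mixedlm("score ~ 1", df, groups=df["seed"]).fit(reml=False)

# Generalized likelihood ratio test: 2 * (llf_full - llf_reduced) ~ chi2(1)
# under the null that the system effect is zero.
lr = 2 * (full.llf - reduced.llf)
p_value = chi2.sf(lr, df=1)
print(f"GLRT statistic = {lr:.3f}, p = {p_value:.4f}")

# Variance components: run-level variance vs. residual variance.
var_run = float(full.cov_re.iloc[0, 0])   # random-intercept (run) variance
var_resid = float(full.scale)             # residual variance
# Intraclass-correlation-style ratio as a stand-in for the paper's
# reliability coefficient; which components count as "substantial" vs.
# "noise" depends on the study design and is assumed here for illustration.
icc = var_run / (var_run + var_resid)
print(f"run variance = {var_run:.3f}, residual = {var_resid:.3f}, ICC = {icc:.3f}")
```

Further noise sources, e.g. meta-parameter settings, could enter as additional variance components (statsmodels exposes a `vc_formula` argument on `mixedlm` for this); the ratio printed above is only one possible instantiation of "substantial to total variance".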


