Thinking Beyond Distributions in Testing Machine Learned Models

12/06/2021
by Negar Rostamzadeh, et al.

Testing practices within the machine learning (ML) community have centered on assessing a learned model's predictive performance against a test dataset, often drawn from the same distribution as the training dataset. While recent work on robustness and fairness testing within the ML community has pointed to the importance of testing against distributional shifts, these efforts also focus on estimating the likelihood of the model making an error against a reference dataset or distribution. We argue that this view of testing actively discourages researchers and developers from looking into other sources of robustness failures, for instance corner cases that may have severe undesirable impacts. We draw parallels with decades of work in software engineering testing, which assesses a software system against various stress conditions, including corner cases, rather than focusing solely on average-case behaviour. Finally, we put forth a set of recommendations to broaden the view of machine learning testing into a rigorous practice.
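To make the contrast concrete, the sketch below juxtaposes standard distribution-level evaluation (accuracy against a held-out test set) with software-engineering-style corner-case tests that assert behaviour on specific inputs regardless of how likely those inputs are under the test distribution. This is an illustrative sketch only, not code from the paper; the toy model, the test data, and the particular corner cases are all hypothetical.

# Illustrative sketch only (not from the paper): contrasting distribution-level
# evaluation with software-style corner-case tests. The toy model, the test
# data, and the corner cases below are hypothetical.
import numpy as np


def average_accuracy(model, test_inputs, test_labels):
    """Standard ML evaluation: accuracy against a reference test set,
    typically drawn from the same distribution as the training data."""
    preds = np.array([model(x) for x in test_inputs])
    return float(np.mean(preds == np.asarray(test_labels)))


def run_corner_case_tests(model):
    """Software-engineering-style tests: assert behaviour on specific,
    rare-but-important inputs, irrespective of their probability under
    the test distribution."""
    assert model(np.zeros(4)) == 0, "degenerate all-zero input mishandled"
    assert model(np.full(4, 1e6)) == 1, "extreme-valued input mishandled"
    # A boundary input an i.i.d. test set is unlikely to probe (hypothetical spec:
    # ties should resolve to class 1).
    assert model(np.array([0.5, 0.5, 0.5, 0.5])) == 1, "boundary input mishandled"


if __name__ == "__main__":
    # A toy threshold classifier stands in for a learned model.
    toy_model = lambda x: int(np.mean(x) > 0.5)

    rng = np.random.default_rng(0)
    xs = rng.random((100, 4))
    ys = (xs.mean(axis=1) > 0.5).astype(int)
    print("average accuracy on i.i.d. test set:", average_accuracy(toy_model, xs, ys))

    try:
        run_corner_case_tests(toy_model)
        print("all corner-case tests passed")
    except AssertionError as err:
        # High average accuracy does not imply corner-case correctness.
        print("corner-case test failed:", err)

In this toy setting the model scores perfect average accuracy on the i.i.d. test set yet still fails a corner-case assertion, which is the gap between distribution-level and stress-style testing that the abstract highlights.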


