Evaluating prediction systems in software project estimation

01/14/2021
by Martin Shepperd, et al.

Context: Software engineering has a problem in that when we empirically evaluate competing prediction systems we obtain conflicting results.

Objective: To reduce the inconsistency amongst validation study results and provide a more formal foundation to interpret results, with a particular focus on continuous prediction systems.

Method: A new framework is proposed for evaluating competing prediction systems based upon (1) an unbiased statistic, Standardised Accuracy, (2) testing the result likelihood relative to the baseline technique of random 'predictions', that is guessing, and (3) calculation of effect sizes.

Results: Previously published empirical evaluations of prediction systems are re-examined and the original conclusions shown to be unsafe. Additionally, even the strongest results are shown to have no more than a medium effect size relative to random guessing.

Conclusions: Biased accuracy statistics such as MMRE are deprecated. By contrast, this new empirical validation framework leads to meaningful results. Such steps will assist in performing future meta-analyses and in providing more robust and usable recommendations to practitioners.
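
For concreteness, the sketch below illustrates how the three ingredients of the framework can be computed: Standardised Accuracy, SA = (1 - MAR_Pi / MAR_P0) x 100, where MAR is the mean absolute residual and P0 is the random-guessing baseline; a Monte Carlo estimate of how often guessing does at least as well as the prediction system; and an effect size measured against the guessing baseline. The toy data, the 1,000-run estimate of MAR_P0, and the use of the per-run baseline spread for s_P0 are assumptions of this illustration, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(1)


def mar(actual, predicted):
    """Mean Absolute Residual (MAR): mean of |actual - predicted|."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.abs(actual - predicted).mean()


def random_guessing_mars(actual, runs=1000):
    """MARs of the naive baseline P0: each case is 'predicted' by the actual
    value of a different, randomly chosen case. One MAR per Monte Carlo run."""
    actual = np.asarray(actual, float)
    n = len(actual)
    mars = np.empty(runs)
    for k in range(runs):
        donors = np.array([rng.choice(np.delete(np.arange(n), i)) for i in range(n)])
        mars[k] = mar(actual, actual[donors])
    return mars


def evaluate(actual, predicted, runs=1000):
    mar_pi = mar(actual, predicted)          # accuracy of the prediction system P_i
    p0 = random_guessing_mars(actual, runs)  # distribution of guessing accuracy
    mar_p0 = p0.mean()

    # (1) Standardised Accuracy: relative improvement over random guessing, in %
    sa = (1.0 - mar_pi / mar_p0) * 100.0

    # (2) likelihood of the result under guessing: share of guessing runs
    #     that are at least as accurate as the prediction system
    p_guess = np.mean(p0 <= mar_pi)

    # (3) effect size against the guessing baseline (Glass's Delta style);
    #     the spread of the per-run baseline MARs stands in for s_P0 here
    delta = (mar_p0 - mar_pi) / p0.std(ddof=1)

    return {"MAR": mar_pi, "SA_%": sa, "p_guess": p_guess, "Delta": delta}


# purely illustrative effort data (e.g. person-hours), not from the paper
actual    = [120, 80, 450, 60, 300, 150, 95, 220]
predicted = [110, 95, 400, 70, 340, 130, 90, 250]
print(evaluate(actual, predicted))

Read against Cohen's conventional thresholds (roughly 0.2 small, 0.5 medium, 0.8 large), a Delta near 0.5 would correspond to the "medium" effect size mentioned in the Results above.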


Related research

Replication studies considered harmful (02/13/2018)
CONTEXT: There is growing interest in establishing software engineering ...

Does class size matter? An in-depth assessment of the effect of class size in software defect prediction (06/08/2021)
In the past 20 years, defect prediction studies have generally acknowled...

On the Time-Based Conclusion Stability of Software Defect Prediction Models (11/14/2019)
Researchers in empirical software engineering often make claims based on...

Investigating the Significance of the Bellwether Effect to Improve Software Effort Prediction: Further Empirical Study (05/16/2021)
Context: In addressing how best to estimate how much effort is required ...

The impact of using biased performance metrics on software defect prediction research (03/18/2021)
Context: Software engineering researchers have undertaken many experimen...

The Least Difference in Means: A Statistic for Effect Size Strength and Practical Significance (05/24/2022)
With limited resources, scientific inquiries must be prioritized for fur...

A better measure of relative prediction accuracy for model selection and model estimation (05/11/2021)
Surveys show that the mean absolute percentage error (MAPE) is the most ...