Decoding machine learning benchmarks

07/29/2020
by Lucas F. F. Cardoso, et al.

Despite the availability of benchmark machine learning (ML) repositories (e.g., UCI, OpenML), there is still no standard evaluation strategy capable of pointing out which set of datasets should serve as a gold standard for testing different ML algorithms. In recent studies, Item Response Theory (IRT) has emerged as a new approach to elucidate what makes a good ML benchmark. This work applied IRT to explore the well-known OpenML-CC18 benchmark and identify how suitable it is for evaluating classifiers. Several classifiers, ranging from classical to ensemble ones, were evaluated using IRT models, which can simultaneously estimate dataset difficulty and classifier ability. The Glicko-2 rating system was applied on top of IRT to summarize the innate ability and aptitude of the classifiers. It was observed that not all datasets from OpenML-CC18 are really useful for evaluating classifiers. Most datasets evaluated in this work (84%) contain mostly easy instances, with few difficult instances. Also, 80% are very discriminating, which can be of great use for pairwise algorithm comparison but not for pushing classifiers' abilities. This paper presents this new IRT-based evaluation methodology as well as decodIRT, a tool developed to guide IRT estimation over ML benchmarks.
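To make the IRT side of this pipeline concrete, the sketch below shows how a classifier's ability could be estimated under a 3PL item response model once the item (instance) parameters are known. It is a minimal, illustrative example only: the parameter values, array names, and the grid-search estimator are assumptions made for demonstration, not the decodIRT implementation or its API.

```python
import numpy as np

def irt_3pl(theta, difficulty, discrimination, guessing):
    # 3PL item characteristic curve: probability that a respondent with
    # ability theta answers an item (here, a dataset instance) correctly.
    return guessing + (1.0 - guessing) / (
        1.0 + np.exp(-discrimination * (theta - difficulty))
    )

def estimate_ability(responses, difficulty, discrimination, guessing,
                     grid=np.linspace(-4.0, 4.0, 801)):
    # Grid-search maximum-likelihood estimate of a classifier's ability,
    # given known item parameters and a 0/1 vector of per-instance hits.
    def log_likelihood(theta):
        p = irt_3pl(theta, difficulty, discrimination, guessing)
        return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    return max(grid, key=log_likelihood)

# Hypothetical item parameters for five instances (illustrative values only).
difficulty     = np.array([-1.5, -0.5, 0.0, 1.0, 2.0])
discrimination = np.array([ 1.2,  0.8, 1.5, 1.0, 0.9])
guessing       = np.array([ 0.1,  0.1, 0.1, 0.1, 0.1])

# 1 = the classifier predicted the instance correctly, 0 = it did not.
responses = np.array([1, 1, 1, 0, 0])

print("estimated ability:", estimate_ability(responses, difficulty, discrimination, guessing))
```

A full pipeline along the lines described in the abstract would first fit the item parameters from a response matrix of many classifiers over many instances, and could then feed per-dataset results into a rating system such as Glicko-2; the sketch above only illustrates the ability-estimation step.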


research · 07/15/2021
Data vs classifiers, who wins?
The classification experiments covered by machine learning (ML) are comp...

research · 04/16/2023
MLRegTest: A Benchmark for the Machine Learning of Regular Languages
Evaluating machine learning (ML) systems on their ability to learn known...

research · 06/15/2023
AQuA: A Benchmarking Tool for Label Quality Assessment
Machine learning (ML) models are only as good as the data they are train...

research · 06/19/2013
Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers
Machine Learning (ML) algorithms are used to train computers to perform ...

research · 09/26/2022
Prayatul Matrix: A Direct Comparison Approach to Evaluate Performance of Supervised Machine Learning Models
Performance comparison of supervised machine learning (ML) models are wi...

research · 07/14/2021
Generative and reproducible benchmarks for comprehensive evaluation of machine learning classifiers
Understanding the strengths and weaknesses of machine learning (ML) algo...

research · 11/11/2019
Item Response Theory based Ensemble in Machine Learning
In this article, we propose a novel probabilistic framework to improve t...
