Relevance As a Metric for Evaluating Machine Learning Algorithms

In machine learning, choosing a learning algorithm suited to the application domain is critical. The performance metric used to compare candidate algorithms must likewise reflect the concerns of users in that domain. In this work, we propose a novel probability-based performance metric, called Relevance Score, for evaluating supervised learning algorithms. We evaluate the proposed metric through empirical analysis on a dataset gathered from an intelligent lighting pilot installation. Compared to the commonly used Classification Accuracy metric, the Relevance Score proves more appropriate for a certain class of applications.
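The abstract contrasts plain Classification Accuracy with a probability-based score. The paper's actual Relevance Score formula is not given here, so the sketch below uses a hypothetical stand-in (`probability_weighted_score`, an assumed name) that credits each example with the probability the model assigned to its true class; it only illustrates how a probability-based metric can diverge from accuracy, not the paper's definition.

```python
def classification_accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)


def probability_weighted_score(y_true, proba, classes):
    """Average probability the model assigned to the correct class.

    Unlike accuracy, a near-miss (substantial probability on the true
    class even when the argmax label is wrong) still earns partial credit.
    This is an illustrative stand-in, not the paper's Relevance Score.
    """
    idx = {c: i for i, c in enumerate(classes)}
    return sum(row[idx[t]] for t, row in zip(y_true, proba)) / len(y_true)


if __name__ == "__main__":
    # Toy lighting-style labels (hypothetical, for illustration only).
    classes = ["dim", "bright"]
    y_true = ["dim", "bright", "dim"]
    y_pred = ["dim", "dim", "dim"]              # argmax predictions
    proba = [[0.9, 0.1],                        # confident, correct
             [0.55, 0.45],                      # wrong label, close call
             [0.6, 0.4]]                        # correct, less confident
    print(classification_accuracy(y_true, y_pred))          # 2 of 3 correct
    print(probability_weighted_score(y_true, proba, classes))
```

A metric of this shape distinguishes a confidently wrong model from a narrowly wrong one, which exact-match accuracy cannot do; that gap is the kind of user-facing concern the abstract argues a domain-appropriate metric should capture.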
