On Evaluating the Quality of Rule-Based Classification Systems

04/06/2020
by Nassim Dehouche, et al.

Two indicators are classically used to evaluate the quality of rule-based classification systems: predictive accuracy, i.e. the system's ability to successfully reproduce learning data, and coverage, i.e. the proportion of possible cases for which the logical rules constituting the system apply. In this work, we claim that these two indicators may be insufficient, and that additional measures of quality may need to be developed. We theoretically show that classification systems presenting "good" predictive accuracy and coverage can nonetheless be trivially improved, and illustrate this proposition with examples.
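To make the two indicators concrete, here is a minimal Python sketch (an illustration of the standard definitions, not the paper's own formalism or experiments): a toy rule-based classifier represented as an ordered list of (condition, label) rules, with coverage measured as the fraction of instances matched by at least one rule, and predictive accuracy as the fraction of instances classified correctly, counting uncovered instances as misclassified.

```python
def first_matching_label(rules, x):
    """Return the label of the first rule whose condition matches x, or None."""
    for condition, label in rules:
        if condition(x):
            return label
    return None

def coverage(rules, instances):
    """Proportion of instances to which at least one rule applies."""
    matched = sum(1 for x in instances if first_matching_label(rules, x) is not None)
    return matched / len(instances)

def predictive_accuracy(rules, instances, targets):
    """Proportion of instances the rule system classifies correctly.
    Instances covered by no rule count as misclassified."""
    correct = sum(
        1 for x, y in zip(instances, targets)
        if first_matching_label(rules, x) == y
    )
    return correct / len(instances)

# Toy example: two rules over dictionaries of binary features.
rules = [
    (lambda x: x["a"] == 1, "pos"),
    (lambda x: x["b"] == 1, "neg"),
]
data = [{"a": 1, "b": 0}, {"a": 0, "b": 1}, {"a": 0, "b": 0}]
labels = ["pos", "neg", "pos"]

print(coverage(rules, data))                      # 2 of 3 instances are covered
print(predictive_accuracy(rules, data, labels))   # 2 of 3 classified correctly
```

Note that both indicators can look "good" while the system remains trivially improvable: here, adding a default rule that labels uncovered instances with the majority class raises both coverage and accuracy at no cost, which is the kind of gap the paper argues these two measures fail to capture.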
