Learning Qualitatively Diverse and Interpretable Rules for Classification

06/22/2018
by Andrew Slavin Ross, et al.

There has been growing interest in developing accurate models that can also be explained to humans. Unfortunately, if there exist multiple distinct but accurate models for some dataset, current machine learning methods are unlikely to find them: standard techniques will likely recover a complex model that combines them. In this work, we introduce a way to identify a maximal set of distinct but accurate models for a dataset. We demonstrate empirically that, in situations where the data supports multiple accurate classifiers, we tend to recover simpler, more interpretable classifiers rather than more complex ones.
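
To make the idea concrete, here is a minimal sketch of one way such a method could be instantiated: train classifiers sequentially, and penalize each new model when its input gradients align with those of previously found models, so that each model bases its predictions on different features. The gradient-orthogonality penalty, the function names, and the hyperparameters below are illustrative assumptions, not the paper's exact formulation.

# Illustrative sketch (not the paper's exact algorithm): sequentially train
# classifiers, penalizing alignment between the new model's input gradients
# and those of earlier models, so each model relies on different features.
import torch
import torch.nn as nn
import torch.nn.functional as F

def input_gradients(model, x, create_graph=False):
    # Per-example gradient of the top logit with respect to the inputs.
    x = x.clone().requires_grad_(True)
    top_logit = model(x).max(dim=1).values.sum()
    (grad,) = torch.autograd.grad(top_logit, x, create_graph=create_graph)
    return grad

def diversity_penalty(model, prior_models, x):
    # Mean squared cosine similarity between the current model's input
    # gradients and each prior model's; zero when the gradients are orthogonal.
    g = input_gradients(model, x, create_graph=True).flatten(1)
    penalty = x.new_zeros(())
    for prior in prior_models:
        g_prior = input_gradients(prior, x).detach().flatten(1)
        penalty = penalty + (F.cosine_similarity(g, g_prior, dim=1) ** 2).mean()
    return penalty

def train_diverse_models(make_model, loader, num_models=3, lam=1.0, epochs=10):
    models = []
    for _ in range(num_models):
        model = make_model()
        opt = torch.optim.Adam(model.parameters())
        for _ in range(epochs):
            for xb, yb in loader:
                loss = F.cross_entropy(model(xb), yb)
                if models:  # push the new model away from earlier ones
                    loss = loss + lam * diversity_penalty(model, models, xb)
                opt.zero_grad()
                loss.backward()
                opt.step()
        models.append(model)
    return models

Under a scheme like this, the abstract's "maximal set" would correspond to adding models until no new model can remain accurate while staying near-orthogonal to the ones already found.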

Related research

07/31/2018 - Techniques for Interpretable Machine Learning
  Interpretable machine learning tackles the important problem that humans...

01/06/2021 - Statistical learning for accurate and interpretable battery lifetime prediction
  Data-driven methods for battery lifetime prediction are attracting incre...

01/08/2023 - Prognosis and Treatment Prediction of Type-2 Diabetes Using Deep Neural Network and Machine Learning Classifiers
  Type 2 Diabetes is a fast-growing, chronic metabolic disorder due to imb...

08/15/2023 - Back to Basics: A Sanity Check on Modern Time Series Classification Algorithms
  The state-of-the-art in time series classification has come a long way, ...

10/24/2019 - Accurate Layerwise Interpretable Competence Estimation
  Estimating machine learning performance 'in the wild' is both an importa...

08/05/2019 - A study in Rashomon curves and volumes: A new perspective on generalization and model simplicity in machine learning
  The Rashomon effect occurs when many different explanations exist for th...
