Deducing neighborhoods of classes from a fitted model

09/11/2020
by Alexander Gerharz et al.

In today's world, the demand for highly complex models on huge data sets is rising steadily. The problem with these models is that increasing their complexity makes them much harder to interpret. The growing field of interpretable machine learning tries to compensate for this lack of interpretability in complex (or even black-box) models with techniques that help to understand them better. This article presents a new kind of interpretable machine learning method that uses quantile shifts to help understand how a classification model partitions the feature space into predicted classes. To illustrate the situations in which this quantile shift method (QSM) can be beneficial, it is applied to a theoretical medical example and a real-data example. In essence, real data points (or specific points of interest) are taken, specific features are slightly raised or decreased, and the resulting changes in the predictions are observed. By comparing the predictions before and after these manipulations, the observed changes can, under certain conditions, be interpreted as neighborhoods of the classes with regard to the manipulated features. Chord graphs are used to visualize the observed changes.
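The core idea of shifting a feature along its own empirical quantiles and comparing predictions before and after can be sketched as follows. This is a minimal illustration with scikit-learn, not the authors' implementation; the function `quantile_shift` and the choice of classifier and shift size are assumptions for demonstration purposes.

```python
# Sketch of a quantile-shift analysis: move one feature a small step along
# its empirical distribution and record which predicted classes flip.
from collections import Counter

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def quantile_shift(X, feature, delta):
    """Shift one feature by `delta` quantile steps along its empirical
    distribution, leaving all other features unchanged."""
    col = X[:, feature]
    # Empirical CDF rank of each observation in (0, 1].
    ranks = np.searchsorted(np.sort(col), col, side="right") / len(col)
    new_q = np.clip(ranks + delta, 0.0, 1.0)
    X_new = X.copy()
    X_new[:, feature] = np.quantile(col, new_q)
    return X_new

# Compare predictions before and after a small upward shift of feature 2.
before = model.predict(X)
after = model.predict(quantile_shift(X, feature=2, delta=0.05))
changed = before != after

# Transition counts (from-class, to-class) -> number of points: the raw
# material for a chord graph of class neighborhoods w.r.t. this feature.
transitions = Counter(zip(before[changed], after[changed]))
print(transitions)
```

Points whose predicted class changes under a small shift lie near a decision boundary in the direction of the manipulated feature, so the transition counts indicate which classes border each other there.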
