MonoNet: Towards Interpretable Models by Learning Monotonic Features

09/30/2019
by   An-phi Nguyen, et al.

Being able to interpret, or explain, the predictions made by a machine learning model is of fundamental importance. This is especially true when data-driven models are deployed to make high-stakes decisions, e.g. in healthcare. While recent years have seen increasing interest in interpretable machine learning research, the field still lacks an agreed-upon definition of interpretability, and some researchers have called for a more active conversation towards a rigorous approach to interpretability. Joining this conversation, we claim that the difficulty of interpreting a complex model stems from the interactions among its features. We argue that by enforcing monotonicity between features and outputs, we can reason about the effect of a single feature on an output independently of the other features, and consequently better understand the model. We show how to structurally introduce this constraint in deep learning models by adding new simple layers. We validate our model on benchmark datasets and compare our results with previously proposed interpretable models.
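The core idea — constraining a layer so each output is monotonic in every input — can be sketched as follows. This is an illustrative example of a monotonicity-constrained dense layer, not the authors' exact architecture; the class and parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class MonotonicDense:
    """A dense layer whose effective weights are forced non-negative,
    making each output monotonically non-decreasing in every input.
    (Illustrative sketch of the monotonicity constraint described in
    the abstract, not the paper's exact layer.)"""

    def __init__(self, n_in, n_out):
        # Unconstrained parameters; the effective weight is |w|, hence >= 0.
        self.w = rng.normal(size=(n_in, n_out))
        self.b = np.zeros(n_out)

    def __call__(self, x):
        # tanh is itself monotonic, so the composition stays monotonic.
        return np.tanh(x @ np.abs(self.w) + self.b)

layer = MonotonicDense(3, 2)
x = np.array([0.1, -0.2, 0.3])
x_bigger = x.copy()
x_bigger[0] += 0.5  # increase a single feature
# Because weights are non-negative, no output can decrease:
assert np.all(layer(x_bigger) >= layer(x))
```

Stacking such layers (each composed with a monotonic activation) preserves the property end to end, which is what allows the effect of one feature on an output to be read off independently of the others.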

