SMACE: A New Method for the Interpretability of Composite Decision Systems

11/16/2021
by Gianluigi Lopardo, et al.

Interpretability is a pressing issue for decision systems. Many post hoc methods have been proposed to explain the predictions of any machine learning model. However, business processes and decision systems are rarely centered around a single, standalone model. These systems combine multiple models that produce key predictions and then apply decision rules to generate the final decision. To explain such decisions, we present SMACE (Semi-Model-Agnostic Contextual Explainer), a novel interpretability method that combines a geometric approach for decision rules with existing post hoc solutions for machine learning models to generate an intuitive feature ranking tailored to the end user. We show that established model-agnostic approaches produce poor results in this framework.
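As a rough illustration of the setting SMACE targets (not the authors' implementation), the sketch below builds a toy composite decision system: an ML model scores an instance, a decision rule combines that score with a raw business feature, and a simple aggregation mixes model-level attributions with a crude rule-level contribution to rank the input features. All names, thresholds, and the aggregation scheme are assumptions for illustration only.

```python
# Hypothetical sketch of a composite decision system of the kind SMACE explains:
# an ML model produces a prediction, a decision rule turns model output plus raw
# inputs into the final decision, and feature contributions are aggregated into
# a single ranking. The aggregation below is a stand-in, NOT the SMACE algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: two raw input features feeding an internal model.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=500) > 0).astype(int)

# Internal ML model: predicts a risk score from the raw features.
model = LogisticRegression().fit(X, y)

def decision(x, income):
    """Final decision rule: approve only if the model's score is high enough
    AND a raw business feature (income) exceeds a threshold."""
    score = model.predict_proba(x.reshape(1, -1))[0, 1]
    return score > 0.6 and income > 30_000

# --- Toy explanation of one decision ---------------------------------------
x_instance = np.array([0.8, -0.2])
income = 45_000

# Model-level attributions: coefficient * value for the linear model, standing
# in for any post hoc explainer (e.g. LIME or SHAP) applied to the internal model.
model_contrib = model.coef_[0] * x_instance

# Rule-level contribution for the raw feature: a normalized margin to the
# threshold, a crude stand-in for a geometric treatment of decision rules.
rule_contrib_income = (income - 30_000) / 30_000

# Aggregate everything into one ranking over the end user's input features.
contributions = {
    "feature_0": model_contrib[0],
    "feature_1": model_contrib[1],
    "income": rule_contrib_income,
}
ranking = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
print("Decision:", decision(x_instance, income))
print("Feature ranking:", ranking)
```

The point of the sketch is only to show why a purely model-agnostic explainer applied to the end-to-end system can be uninformative: the rule component is piecewise constant around the instance, so features entering only through rules need a dedicated treatment.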


