Altruist: Argumentative Explanations through Local Interpretations of Predictive Models

10/15/2020
by Ioannis Mollas, et al.

Interpretable machine learning is an emerging field that provides ways to gain insight into the rationale of machine learning models. It has been put on the map of machine learning by proposing ways to tackle key ethical and societal issues. However, existing interpretable machine learning techniques are far from comprehensible and explainable to the end user. Another key issue in this field is the lack of evaluation and selection criteria, which makes it difficult for end users to choose the most appropriate interpretation technique for their use case. In this study, we introduce a meta-explanation methodology that provides truthful, feature-importance-based interpretations to the end user through argumentation. At the same time, the methodology can serve as an evaluation or selection tool for multiple feature-importance-based interpretation techniques.
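
To make the notion of "truthfulness" concrete, the sketch below illustrates one simple way it can be operationalised: each feature-importance weight is checked against the model's actual behaviour by nudging that feature and verifying that the predicted probability moves in the direction the weight's sign implies, and the resulting agreement rate is used to compare competing explainers. This is only a minimal illustration under assumptions; the truthfulness function, the perturbation step, and the stand-in importance vectors are hypothetical and do not reflect the actual Altruist interface.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train any probabilistic classifier on a toy dataset for the illustration.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def truthfulness(model, instance, importances, X_train, step=0.5):
    """Fraction of feature-importance weights consistent with the model's behaviour.

    Hypothetical helper (not the Altruist API): for a positively weighted feature,
    increasing its value should not decrease the predicted probability; for a
    negatively weighted feature, the opposite.
    """
    base = model.predict_proba(instance.reshape(1, -1))[0, 1]
    sigma = X_train.std(axis=0)  # perturbation scale per feature
    consistent = 0
    for i, w in enumerate(importances):
        if w == 0:
            consistent += 1  # a zero weight makes no claim, so it cannot be untruthful
            continue
        perturbed = instance.copy()
        perturbed[i] += step * sigma[i]  # nudge the feature upwards
        delta = model.predict_proba(perturbed.reshape(1, -1))[0, 1] - base
        if delta == 0 or np.sign(delta) == np.sign(w):
            consistent += 1
    return consistent / len(importances)

# Compare two stand-in explainers on the same instance and keep the more truthful one.
instance = X[0]
importances_a = np.random.RandomState(1).randn(X.shape[1])  # stand-in for e.g. LIME weights
importances_b = np.random.RandomState(2).randn(X.shape[1])  # stand-in for e.g. SHAP values
scores = {"explainer_a": truthfulness(model, instance, importances_a, X),
          "explainer_b": truthfulness(model, instance, importances_b, X)}
print(scores)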


07/31/2018 · Techniques for Interpretable Machine Learning
Interpretable machine learning tackles the important problem that humans...

05/27/2021 · Intellige: A User-Facing Model Explainer for Narrative Explanations
Predictive machine learning models often lack interpretability, resultin...

07/03/2022 · Interpretable by Design: Learning Predictors by Composing Interpretable Queries
There is a growing concern about typically opaque decision-making with h...

09/10/2019 · NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks
The problem of explaining deep learning models, and model predictions ge...

10/22/2021 · Mechanistic Interpretation of Machine Learning Inference: A Fuzzy Feature Importance Fusion Approach
With the widespread use of machine learning to support decision-making, ...

04/13/2021 · LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information
Artificial Intelligence (AI) has a tremendous impact on the unexpected g...

10/06/2021 · Shapley variable importance clouds for interpretable machine learning
Interpretable machine learning has been focusing on explaining final mod...