Fighting the disagreement in Explainable Machine Learning with consensus

Machine learning (ML) models are often judged by the accuracy of their predictions. However, in some areas of science, the inner workings of a model are as relevant as its accuracy. To understand how ML models work internally, interpretability algorithms are the preferred option. Unfortunately, despite the diversity of available algorithms, they often disagree when explaining the same model, producing contradictory explanations. To cope with this issue, consensus functions can be applied once the models have been explained. Nevertheless, the problem is not fully solved, because the final result depends on the chosen consensus function and other factors. In this paper, six consensus functions are evaluated for the explanation of five ML models. The models were first trained on four synthetic datasets whose internal rules were known in advance, and then explained with model-agnostic local and global interpretability algorithms. Finally, consensus was computed with six different functions, including one developed by the authors. The results show that the proposed function is fairer than the others and provides more consistent and accurate explanations.
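To make the pipeline concrete, the sketch below shows one simple way a consensus could be formed over the feature-importance vectors produced by several explainers for the same model. This is an illustrative baseline (normalize each explanation, then average), not the consensus function proposed in the paper; the explainer names and values are hypothetical.

```python
import numpy as np

# Hypothetical feature-importance vectors for one model, each row from a
# different model-agnostic explainer (e.g., SHAP, LIME, permutation
# importance). Values are illustrative, not taken from the paper.
explanations = np.array([
    [0.50, 0.30, 0.15, 0.05],  # explainer A
    [0.40, 0.35, 0.20, 0.05],  # explainer B
    [0.60, 0.25, 0.10, 0.05],  # explainer C
])

def consensus_mean(expl):
    """Baseline consensus: normalize each explanation so its importances
    sum to 1, then average across explainers feature by feature."""
    norm = expl / expl.sum(axis=1, keepdims=True)
    return norm.mean(axis=0)

cons = consensus_mean(explanations)
print(cons)  # one averaged importance score per feature
```

A consensus function like this reduces the disagreement between explainers to a single ranking of features, which can then be compared against the known internal rules of a synthetic dataset, as done in the evaluation described above.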


