Mechanistic Interpretation of Machine Learning Inference: A Fuzzy Feature Importance Fusion Approach

10/22/2021
by Divish Rengasamy, et al.

With the widespread use of machine learning to support decision-making, it is increasingly important to verify and understand the reasons why a particular output is produced. Although post-training feature importance approaches assist this interpretation, there is an overall lack of consensus regarding how feature importance should be quantified, making explanations of model predictions unreliable. In addition, many of these explanations depend on the specific machine learning approach employed and on the subset of data used when calculating feature importance. A possible solution to improve the reliability of explanations is to combine results from multiple feature importance quantifiers from different machine learning approaches, coupled with resampling. Current state-of-the-art ensemble feature importance fusion uses crisp techniques to fuse results from different approaches. There is, however, significant loss of information, as these approaches are not context-aware and reduce several quantifiers to a single crisp output. More importantly, their representation of 'importance' as coefficients is misleading and incomprehensible to end-users and decision-makers. Here we show how the use of fuzzy data fusion methods can overcome some of the important limitations of crisp fusion methods.
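The idea of fusing several importance quantifiers and then expressing the result in linguistic rather than crisp terms can be sketched as follows. This is a minimal illustration, not the paper's exact method: the quantifier names, scores, and triangular membership functions below are all assumptions chosen for the example.

```python
# Sketch: fuse importance estimates from several quantifiers across resamples,
# then map the fused value to fuzzy linguistic labels instead of a single
# crisp coefficient. All numbers and membership shapes are illustrative.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_importance(score):
    """Map a normalised importance score in [0, 1] to membership degrees."""
    return {
        "low":      triangular(score, -0.5, 0.0, 0.5),
        "moderate": triangular(score,  0.0, 0.5, 1.0),
        "high":     triangular(score,  0.5, 1.0, 1.5),
    }

# Hypothetical importance estimates for one feature from three quantifiers
# (e.g. permutation importance, SHAP, split gain), each over three resamples.
scores = {
    "permutation": [0.62, 0.58, 0.66],
    "shap":        [0.71, 0.69, 0.74],
    "gain":        [0.55, 0.60, 0.57],
}

# Crisp fusion collapses everything to one coefficient...
fused = sum(sum(v) for v in scores.values()) / sum(len(v) for v in scores.values())

# ...whereas the fuzzy view retains graded membership in linguistic terms.
memberships = fuzzy_importance(fused)
label = max(memberships, key=memberships.get)
print(f"fused score: {fused:.3f} -> {label} ({memberships[label]:.2f})")
```

A decision-maker then reads "importance is mostly moderate, partly high" rather than an opaque coefficient, which is the interpretability gap the fuzzy fusion approach targets.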


