Towards a More Reliable Interpretation of Machine Learning Outputs for Safety-Critical Systems using Feature Importance Fusion

09/11/2020
by   Divish Rengasamy, et al.

When machine learning supports decision-making in safety-critical systems, it is important to verify and understand the reasons why a particular output is produced. Although feature importance calculation approaches assist in interpretation, there is a lack of consensus regarding how feature importance is quantified, which makes the explanations offered for the outcomes mostly unreliable. A possible solution to address this lack of agreement is to combine the results from multiple feature importance quantifiers to reduce the variance of the estimates. Our hypothesis is that this will lead to more robust and trustworthy interpretations of the contribution of each feature to machine learning predictions. To test this hypothesis, we propose an extensible framework divided into four main parts: (i) traditional data pre-processing and preparation for predictive machine learning models; (ii) predictive machine learning; (iii) feature importance quantification; and (iv) feature importance decision fusion using an ensemble strategy. We also introduce a novel fusion metric and compare it to the state-of-the-art. Our approach is tested on synthetic data, where the ground truth is known. We compare different fusion approaches and their results for both training and test sets. We also investigate how different characteristics within the datasets affect the feature importance ensembles studied. Results show that our feature importance ensemble framework overall produces 15% less feature importance error compared to existing methods. Additionally, results reveal that different levels of noise in the datasets do not affect the feature importance ensembles' ability to accurately quantify feature importance, whereas the feature importance quantification error increases with the number of features and the number of orthogonal informative features.
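To make the four-part pipeline concrete, the following is a minimal sketch, not the authors' implementation. It assumes scikit-learn is available, uses impurity-based and permutation importance as two example quantifiers, and substitutes a simple normalised mean for the ensemble fusion strategies compared in the paper; all variable and helper names are illustrative.

```python
# Minimal sketch of a feature importance fusion pipeline (illustrative only).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# (i) data preparation: synthetic data where informative features are known
X, y = make_regression(n_samples=500, n_features=10, n_informative=4,
                       noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# (ii) predictive machine learning
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# (iii) feature importance quantification with two different quantifiers
impurity_imp = model.feature_importances_
perm_imp = permutation_importance(model, X_test, y_test,
                                  n_repeats=10, random_state=0).importances_mean

def normalise(v):
    """Rescale importances to sum to one so quantifiers are comparable."""
    v = np.clip(v, 0, None)
    return v / v.sum() if v.sum() > 0 else v

# (iv) decision fusion: a plain mean across quantifiers stands in here
# for the ensemble fusion strategies (and the novel metric) in the paper
fused = np.mean([normalise(impurity_imp), normalise(perm_imp)], axis=0)
print(np.round(fused, 3))
```

Because the ground truth of the synthetic data is known, the fused importances can be compared directly against the true informative features, which is how the framework's quantification error is evaluated.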


