Q-FIT: The Quantifiable Feature Importance Technique for Explainable Machine Learning

10/26/2020
by Kamil Adamczewski, et al.

We introduce a novel framework to quantify the importance of each input feature for model explainability. A user of our framework can choose between two modes: (a) global explanation, providing feature importance across all data points; and (b) local explanation, providing feature importance for each individual data point. The core idea of our method is to use the Dirichlet distribution to define a distribution over the importance of the input features. This distribution is well suited to ranking feature importance because each sample from it is a probability vector (i.e., the vector components sum to 1); the ranking our framework uncovers therefore provides a quantifiable explanation of how significant each input feature is to a model's output. This quantifiable explainability differentiates our method from existing feature-selection methods, which simply determine whether a feature is relevant or not. Furthermore, placing a distribution over the explanation allows us to define a closed-form divergence that measures the similarity between the feature importance learned under different models. We use this divergence to study how feature importance trades off with essential notions in modern machine learning, such as privacy and fairness. We show the effectiveness of our method on a variety of synthetic and real datasets, covering both tabular and image data.
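The sketch below illustrates the two mechanics the abstract names, not the paper's implementation: sampling a probability vector of feature importances from a Dirichlet distribution, and comparing importance distributions learned under two models via a closed-form divergence. The concentration parameters `alpha_model_a` and `alpha_model_b` are hypothetical placeholders (the abstract does not say how Q-FIT fits them), and the KL divergence between Dirichlets is used here as one natural closed-form choice; the paper's exact divergence may differ.

```python
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_kl(alpha, beta):
    """Closed-form KL divergence KL(Dir(alpha) || Dir(beta))."""
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    a0, b0 = alpha.sum(), beta.sum()
    return (gammaln(a0) - gammaln(alpha).sum()
            - gammaln(b0) + gammaln(beta).sum()
            + ((alpha - beta) * (digamma(alpha) - digamma(a0))).sum())

# Hypothetical learned concentrations over 4 input features.
alpha_model_a = np.array([8.0, 2.0, 1.5, 0.5])  # e.g., a baseline model
alpha_model_b = np.array([5.0, 3.0, 2.5, 1.5])  # e.g., the same model trained with privacy

rng = np.random.default_rng(0)
importance = rng.dirichlet(alpha_model_a)  # a probability vector: components sum to 1
ranking = np.argsort(importance)[::-1]     # feature indices, most important first

print("sampled importance:", importance)
print("ranking:", ranking)
print("KL(A || B):", dirichlet_kl(alpha_model_a, alpha_model_b))
```

Note that the Dirichlet mean is itself available in closed form (`alpha / alpha.sum()`), so an expected importance and its induced ranking can be read off without sampling; sampling additionally exposes the uncertainty in that ranking.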

