Model Agnostic Multilevel Explanations

In recent years, post-hoc local instance-level and global dataset-level explainability of black-box models has received considerable attention. Much less attention has been given to obtaining insights at intermediate or group levels, a need highlighted by recent work on the challenges of realizing the guidelines of the General Data Protection Regulation (GDPR). In this paper, we propose a meta-method that, given a typical local explainability method, can build a multilevel explanation tree. The leaves of this tree correspond to the local explanations, the root corresponds to the global explanation, and intermediate levels correspond to explanations for groups of data points that the method clusters automatically. The method can also leverage side information: users can specify points whose explanations they want to be similar. We argue that such a multilevel structure can also be an effective form of communication, since one can obtain a small number of explanations that characterize the entire dataset by reading off an appropriate level of the explanation tree. Explanations for novel test points can be obtained cheaply by associating each test point with its closest training points. When the local explainability technique is generalized additive (e.g., LIME, GAMs), we develop a fast approximate algorithm for building the multilevel tree and study its convergence behavior. We validate the effectiveness of the proposed technique through two human studies, one with experts and the other with non-expert users, on real-world datasets, and show that we produce high-fidelity sparse explanations on several other public datasets.
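To make the tree construction concrete, here is a minimal sketch in Python, assuming LIME as the local explainer and ordinary Ward-linkage agglomerative clustering in place of the paper's fast approximate algorithm. The names `model`, `X`, and `feature_names` are hypothetical stand-ins for a fitted black-box binary classifier (with a `predict_proba` method) and its tabular training data; this is an illustration of the general idea, not the authors' implementation.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer  # pip install lime
from scipy.cluster.hierarchy import linkage, cut_tree


def local_explanations(model, X, feature_names):
    """Leaf level: one LIME attribution vector per training point."""
    explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                     mode="classification")
    n, d = X.shape
    E = np.zeros((n, d))
    for i in range(n):
        exp = explainer.explain_instance(X[i], model.predict_proba,
                                         num_features=d)
        # as_map()[1]: attributions for class 1 (assumes binary classification)
        for feat_idx, weight in exp.as_map()[1]:
            E[i, feat_idx] = weight
    return E


def explanation_tree(E, n_levels=3):
    """Cluster the local explanations hierarchically (Ward linkage) and
    summarize each group by the mean of its members' attribution vectors.
    Returns {k: {cluster_id: group_explanation}} for each level with k
    clusters; k = 1 is the global (root) explanation, k = len(E) the leaves."""
    Z = linkage(E, method="ward")
    ks = sorted(set(np.linspace(1, len(E), n_levels).astype(int)))
    cuts = cut_tree(Z, n_clusters=ks)  # shape: (n_points, len(ks))
    return {k: {c: E[cuts[:, j] == c].mean(axis=0)
                for c in np.unique(cuts[:, j])}
            for j, k in enumerate(ks)}


def explain_test_point(x, X, E):
    """Novel test point: reuse the explanation of its nearest training point."""
    return E[np.argmin(np.linalg.norm(X - x, axis=1))]
```

Averaging additive attributions within a cluster is one simple way to form a group-level explanation that remains generalized additive; the paper's fast approximate algorithm, its convergence analysis, and the side-information constraints are not reproduced in this sketch.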
