Human-interpretable model explainability on high-dimensional data

10/14/2020
by Damien de Mijolla, et al.

The importance of explainability in machine learning continues to grow, as both neural-network architectures and the data they model become increasingly complex. Unique challenges arise when a model's input features become high dimensional: on one hand, principled model-agnostic approaches to explainability become too computationally expensive; on the other, more efficient explainability algorithms lack natural interpretations for general users. In this work, we introduce a framework for human-interpretable explainability on high-dimensional data, consisting of two modules. First, we apply a semantically meaningful latent representation, both to reduce the raw dimensionality of the data, and to ensure its human interpretability. These latent features can be learnt, e.g. explicitly as disentangled representations or implicitly through image-to-image translation, or they can be based on any computable quantities the user chooses. Second, we adapt the Shapley paradigm for model-agnostic explainability to operate on these latent features. This leads to interpretable model explanations that are both theoretically controlled and computationally tractable. We benchmark our approach on synthetic data and demonstrate its effectiveness on several image-classification tasks.
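Only the abstract is given here, but the second module it describes, Shapley attributions computed over latent features rather than raw inputs, can be sketched roughly as follows. This is an illustrative sketch, not the authors' implementation: `encode`, `decode`, `model`, and `baseline_z` are hypothetical stand-ins for the learned latent representation, the classifier being explained, and a reference latent code, and the exact enumeration below is only tractable for a handful of latent dimensions (the paper's actual estimator and value function may differ).

```python
# Sketch: exact Shapley values over the dimensions of a semantic latent code.
# An input is encoded into a low-dimensional latent vector, coalitions of
# latent dimensions are masked by substituting a baseline value, the masked
# latents are decoded back to input space, and the model is queried there.
from itertools import combinations
from math import factorial

import numpy as np


def shapley_on_latents(x, encode, decode, model, baseline_z, target_class):
    """Attribute the prediction for `target_class` to each latent dimension."""
    z = np.asarray(encode(x))
    d = len(z)

    def value(subset):
        # Keep latent dims in `subset`, replace the rest with the baseline,
        # then decode and read off the model's score for the target class.
        keep = np.isin(np.arange(d), list(subset))
        z_masked = np.where(keep, z, baseline_z)
        return model(decode(z_masked))[target_class]

    phi = np.zeros(d)
    for i in range(d):
        rest = [j for j in range(d) if j != i]
        for k in range(len(rest) + 1):
            # Classic Shapley weight |S|! (d - |S| - 1)! / d! for coalitions of size k.
            weight = factorial(k) * factorial(d - k - 1) / factorial(d)
            for subset in combinations(rest, k):
                phi[i] += weight * (value(set(subset) | {i}) - value(set(subset)))
    return phi  # phi[i]: contribution of latent feature i to the prediction
```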

