Verifiable Feature Attributions: A Bridge between Post Hoc Explainability and Inherent Interpretability

07/27/2023
by Usha Bhalla, et al.

With the increased deployment of machine learning models in various real-world applications, researchers and practitioners alike have emphasized the need for explanations of model behaviour. To this end, two broad strategies have been outlined in prior literature to explain models. Post hoc explanation methods explain the behaviour of complex black-box models by highlighting features that are critical to model predictions; however, prior work has shown that these explanations may not be faithful, and even more concerning is our inability to verify them. Specifically, it is nontrivial to evaluate if a given attribution is correct with respect to the underlying model. Inherently interpretable models, on the other hand, circumvent these issues by explicitly encoding explanations into model architecture, meaning their explanations are naturally faithful and verifiable, but they often exhibit poor predictive performance due to their limited expressive power. In this work, we aim to bridge the gap between the aforementioned strategies by proposing Verifiability Tuning (VerT), a method that transforms black-box models into models that naturally yield faithful and verifiable feature attributions. We begin by introducing a formal theoretical framework to understand verifiability and show that attributions produced by standard models cannot be verified. We then leverage this framework to propose a method to build verifiable models and feature attributions out of fully trained black-box models. Finally, we perform extensive experiments on semi-synthetic and real-world datasets, and show that VerT produces models that (1) yield explanations that are correct and verifiable and (2) are faithful to the original black-box models they are meant to explain.
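To make the notion of a "verifiable" attribution concrete, here is a minimal illustrative sketch (not the paper's actual VerT algorithm; the function name, tolerance, and baseline-masking scheme are assumptions). One natural check of whether an attribution is correct with respect to a model is to keep only the attributed features, ablate the rest to a baseline value, and test that the model's output is approximately unchanged:

```python
import numpy as np

def attribution_is_faithful(model, x, attributed, baseline=0.0, tol=1e-3):
    """Check that masking non-attributed features leaves the output unchanged.

    model      : callable mapping a 1-D feature vector to a scalar output
    x          : input feature vector (np.ndarray)
    attributed : boolean mask, True for features the attribution retains
    baseline   : value used to ablate the non-attributed features (assumed)
    """
    x_masked = np.where(attributed, x, baseline)
    return bool(abs(model(x) - model(x_masked)) <= tol)

# Toy model that genuinely depends only on the first two features.
model = lambda x: 2.0 * x[0] - x[1]
x = np.array([1.0, 2.0, 5.0, -3.0])

# An attribution covering both relevant features passes the check;
# one that omits a relevant feature fails it.
print(attribution_is_faithful(model, x, np.array([True, True, False, False])))   # True
print(attribution_is_faithful(model, x, np.array([True, False, False, False])))  # False
```

For standard black-box models this test typically fails for any small attributed set, which is one intuition behind the paper's claim that attributions of such models cannot be verified without modifying the model.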


