Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations

05/15/2022
by Jessica Dai, et al.

As post hoc explanation methods are increasingly leveraged to explain complex models in high-stakes settings, it becomes critical to ensure that the quality of the resulting explanations is consistently high across all population subgroups, including minority groups. For instance, explanations associated with instances belonging to a particular gender subgroup (e.g., female) should not be less accurate than those associated with other genders. However, little to no research assesses whether such group-based disparities exist in the quality of the explanations output by state-of-the-art explanation methods. In this work, we address this gap by initiating the study of group-based disparities in explanation quality. To this end, we first outline the key properties that constitute explanation quality and the settings in which disparities in these properties are particularly problematic. We then leverage these properties to propose a novel evaluation framework that quantitatively measures disparities in the quality of explanations output by state-of-the-art methods. Using this framework, we carry out a rigorous empirical analysis to understand if and when group-based disparities in explanation quality arise. Our results indicate that such disparities are more likely to occur when the models being explained are complex and highly non-linear. We also observe that certain post hoc explanation methods (e.g., Integrated Gradients, SHAP) are more likely to exhibit such disparities. To the best of our knowledge, this work is the first to highlight and study the problem of group-based disparities in explanation quality. In doing so, it sheds light on previously unexplored ways in which explanation methods may introduce unfairness into real-world decision making.
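The paper's full framework is not reproduced here, but the core measurement it describes, comparing an explanation-quality metric across population subgroups, can be illustrated concretely. Below is a minimal sketch in Python, assuming one common faithfulness proxy: ablate each instance's top-k attributed features and measure how much the model's prediction changes. The `fidelity` function, the gradient-times-input attributions, and the subgroup indicator `group` are illustrative stand-ins, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a group-wise fidelity gap.
# Assumptions: `model` exposes predict_proba, `attributions` is an
# (n_samples, n_features) array from any post hoc explainer, and
# `group` is a hypothetical binary subgroup indicator.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def fidelity(model, X, attributions, top_k=3):
    """Prediction change when each instance's top-k attributed features
    are ablated (set to zero). Larger values suggest the explanation
    found features the model actually relies on."""
    base = model.predict_proba(X)[:, 1]
    X_ablate = X.copy()
    top = np.argsort(-np.abs(attributions), axis=1)[:, :top_k]
    rows = np.arange(X.shape[0])[:, None]
    X_ablate[rows, top] = 0.0          # ablate top-k features per instance
    return np.abs(base - model.predict_proba(X_ablate)[:, 1])

# Toy setup: a linear model whose gradient-times-input scores serve as
# stand-in attributions for a real explainer's output.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
group = (X[:, 0] > 0).astype(int)      # hypothetical subgroup indicator
model = LogisticRegression().fit(X, y)
attributions = X * model.coef_         # gradient-x-input style scores

scores = fidelity(model, X, attributions)
gap = abs(scores[group == 0].mean() - scores[group == 1].mean())
print(f"fidelity gap between subgroups: {gap:.4f}")
```

In practice, the attributions would come from an actual post hoc explainer (e.g., LIME, SHAP, or Integrated Gradients), the subgroup indicator from a protected attribute such as gender, and the gap would be assessed over multiple quality metrics and random seeds rather than a single raw difference.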

Related research

02/03/2022 · The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
As various post hoc explanation methods are increasingly being leveraged...

06/16/2021 · Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations
As Graph Neural Networks (GNNs) are increasingly employed in real-world ...

06/11/2023 · On Minimizing the Impact of Dataset Shifts on Actionable Explanations
The Right to Explanation is an important regulatory principle that allow...

06/02/2022 · Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations
Despite the plethora of post hoc model explanation methods, the basic pr...

06/19/2018 · Instance-Level Explanations for Fraud Detection: A Case Study
Fraud detection is a difficult problem that can benefit from predictive ...

05/05/2020 · Contextualizing Hate Speech Classifiers with Post-hoc Explanation
Hate speech classifiers trained on imbalanced datasets struggle to deter...

10/19/2021 · Coalitional Bayesian Autoencoders – Towards explainable unsupervised deep learning
This paper aims to improve the explainability of Autoencoder's (AE) pred...
