Fairness and Explainability: Bridging the Gap Towards Fair Model Explanations

12/07/2022
by Yuying Zhao, et al.

While machine learning models have achieved unprecedented success in real-world applications, they may make biased or unfair decisions for specific demographic groups and hence produce discriminatory outcomes. Although research efforts have been devoted to measuring and mitigating bias, they mainly study bias from a result-oriented perspective while neglecting the bias encoded in the decision-making procedure. As a result, they cannot capture procedure-oriented bias, which limits their ability to fully debias a model. Fortunately, with the rapid development of explainable machine learning, explanations for predictions are now available and offer insight into the decision procedure. In this work, we bridge the gap between fairness and explainability by presenting a novel perspective of procedure-oriented fairness based on explanations. We identify procedure-oriented bias by measuring the gap in explanation quality between different groups with Ratio-based and Value-based Explanation Fairness. These new metrics motivate us to design an optimization objective that mitigates procedure-oriented bias, and we observe that it also mitigates bias in the predictions. Based on this objective, we propose a Comprehensive Fairness Algorithm (CFA), which simultaneously fulfills multiple objectives: improving traditional fairness, satisfying explanation fairness, and maintaining utility performance. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed CFA and highlight the importance of considering fairness from the explainability perspective. Our code is publicly available at https://github.com/YuyingZhao/FairExplanations-CFA .
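The abstract describes measuring procedure-oriented bias as a gap in explanation quality between demographic groups. The paper's exact definitions of Ratio-based and Value-based Explanation Fairness are not given here, so the following Python sketch uses illustrative stand-in formulas: a value-based gap as the difference in average per-sample explanation quality between two groups, and a ratio-based gap as the difference in the fraction of samples whose explanation quality exceeds a threshold. The function name, the threshold, and both formulas are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def explanation_fairness_gaps(quality, group, threshold=0.5):
    """Toy illustration of quantifying procedure-oriented bias as a gap
    in explanation quality between two demographic groups.

    quality:   per-sample explanation-quality scores in [0, 1]
    group:     binary group membership per sample (0 or 1)
    threshold: cutoff for counting a sample as "well explained"

    NOTE: these formulas are illustrative stand-ins, not the paper's
    actual Ratio-based / Value-based Explanation Fairness definitions.
    """
    quality = np.asarray(quality, dtype=float)
    group = np.asarray(group)
    q0, q1 = quality[group == 0], quality[group == 1]
    # Value-based gap: difference in mean explanation quality.
    value_gap = abs(q0.mean() - q1.mean())
    # Ratio-based gap: difference in the fraction of well-explained samples.
    ratio_gap = abs((q0 >= threshold).mean() - (q1 >= threshold).mean())
    return value_gap, ratio_gap
```

A gap near zero under both measures would indicate that the model's explanations serve both groups comparably well; a debiasing objective along these lines could penalize such gaps during training.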


Related research

06/24/2022 · On Structural Explanation of Bias in Graph Neural Networks
Graph Neural Networks (GNNs) have shown satisfying performance in variou...

06/22/2020 · Improving LIME Robustness with Smarter Locality Sampling
Explainability algorithms such as LIME have enabled machine learning sys...

10/14/2020 · Explainability for fair machine learning
As the decisions made or influenced by machine learning models increasin...

03/25/2023 · Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics
Deep learning-based recognition systems are deployed at scale for severa...

11/17/2021 · CONFAIR: Configurable and Interpretable Algorithmic Fairness
The rapid growth of data in the recent years has led to the development ...

08/10/2021 · Harnessing value from data science in business: ensuring explainability and fairness of solutions
The paper introduces concepts of fairness and explainability (XAI) in ar...

04/12/2023 · GNNUERS: Fairness Explanation in GNNs for Recommendation via Counterfactual Reasoning
In recent years, personalization research has been delving into issues o...
