Fairness-aware Summarization for Justified Decision-Making

07/13/2021
by Moniba Keymanesh, et al.

In many applications, such as recidivism prediction, facility inspection, and benefit assignment, it is important for individuals to know the decision-relevant information behind the model's prediction. In addition, the model's predictions should be fairly justified: decision-relevant features should provide sufficient information for the predicted outcome and should be independent of an individual's membership in protected groups such as race and gender. In this work, we focus on the problem of (un)fairness in the justifications of text-based neural models. We tie the explanatory power of the model to fairness in the outcome and propose a fairness-aware summarization mechanism to detect and counteract bias in such models. Given a potentially biased natural-language explanation for a decision, we use a multi-task neural model and an attribution mechanism based on integrated gradients to extract high-utility, discrimination-free justifications in the form of a summary. The extracted summary is then used to train a model that makes decisions for individuals. Results on several real-world datasets suggest that our method (i) helps users understand what information is used for the model's decisions and (ii) enhances fairness in outcomes while significantly reducing demographic leakage.
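The attribution step the abstract mentions, integrated gradients, scores each input feature by integrating the model's gradient along a straight-line path from a baseline to the input. The paper applies it to a multi-task neural model over text; the sketch below is only a toy illustration on a differentiable scalar model `f(x) = sigmoid(w @ x)` with an analytic gradient (the function names, weights, and baseline are assumptions for the example, not the paper's setup). It also checks the completeness property: attributions sum to `f(x) - f(baseline)`.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def integrated_gradients(x, baseline, w, steps=100):
    """Approximate integrated gradients for the toy model f(x) = sigmoid(w @ x).

    Attribution_i = (x_i - baseline_i) * average gradient of f along the
    straight line from `baseline` to `x`, estimated with a midpoint Riemann sum.
    """
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints in (0, 1)
    total_grad = np.zeros_like(x)
    for a in alphas:
        point = baseline + a * (x - baseline)
        z = w @ point
        # Analytic gradient of sigmoid(w @ x) w.r.t. x: sigmoid'(z) * w.
        total_grad += sigmoid(z) * (1.0 - sigmoid(z)) * w
    avg_grad = total_grad / steps
    return (x - baseline) * avg_grad

# Toy input with 4 features; feature 2 has zero weight, so it should
# receive zero attribution (it carries no decision-relevant information).
w = np.array([0.8, -0.5, 0.0, 1.2])
x = np.ones(4)
baseline = np.zeros(4)

attr = integrated_gradients(x, baseline, w)
# Completeness axiom: attributions sum to f(x) - f(baseline).
assert np.isclose(attr.sum(), sigmoid(w @ x) - sigmoid(w @ baseline), atol=1e-4)
```

In the paper's setting, the analogous scores are computed over tokens of the explanation text, and low-utility or demographically predictive spans can then be filtered out before summarization.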

Related research:
- Information Theoretic Measures for Fairness-aware Feature Selection (06/01/2021)
- The Price of Diversity (07/03/2021)
- Principal Fairness for Human and Algorithmic Decision-Making (05/21/2020)
- Causal Fairness for Outcome Control (06/08/2023)
- Multi-dimensional discrimination in Law and Machine Learning – A comparative overview (02/12/2023)
- BiasRV: Uncovering Biased Sentiment Predictions at Runtime (05/31/2021)
- Fairness Under Unawareness: Assessing Disparity When Protected Class Is Unobserved (11/27/2018)
