Human-Guided Fair Classification for Natural Language Processing

12/20/2022
by Florian E. Dorner et al.

Text classifiers have promising applications in high-stakes tasks such as resume screening and content moderation. These classifiers must be fair and avoid discriminatory decisions by being invariant to perturbations of sensitive attributes such as gender or ethnicity. However, there is a gap between human intuition about these perturbations and the formal similarity specifications capturing them. While existing research has started to address this gap, current methods are based on hardcoded word replacements, resulting in specifications with limited expressivity or ones that fail to fully align with human intuition (e.g., in cases of asymmetric counterfactuals). This work proposes novel methods for bridging this gap by discovering expressive and intuitive individual fairness specifications. We show how to leverage unsupervised style transfer and GPT-3's zero-shot capabilities to automatically generate expressive candidate pairs of semantically similar sentences that differ along sensitive attributes. We then validate the generated pairs via an extensive crowdsourcing study, which confirms that many of these pairs align with human intuition about fairness in the context of toxicity classification. Finally, we show how limited amounts of human feedback can be leveraged to learn a similarity specification that can be used to train downstream fairness-aware models.
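To make the approach concrete, below is a minimal Python sketch of the two steps the abstract describes: zero-shot generation of candidate counterfactual pairs with GPT-3, and learning a pairwise similarity specification from human judgments. The prompt wording, the model names, the `pair_features` featurization, and the toy data are illustrative assumptions, not the authors' exact setup.

```python
# Step 1 (sketch): zero-shot counterfactual generation with GPT-3.
# Prompt wording and model choice are assumptions for illustration.
import openai  # pip install openai; set openai.api_key before calling

def generate_counterfactual(sentence: str, attribute: str = "gender") -> str:
    """Ask GPT-3 to rewrite `sentence` with the sensitive attribute changed,
    keeping meaning, tone, and toxicity otherwise the same."""
    prompt = (
        f"Rewrite the following sentence so that the {attribute} of the people "
        f"mentioned is changed, while keeping everything else (meaning, tone, "
        f"toxicity) the same.\n\nSentence: {sentence}\nRewritten sentence:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed GPT-3 variant
        prompt=prompt,
        max_tokens=100,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()

# Step 2 (sketch): fit a similarity specification on crowdsourced labels.
# Pairs annotators judged "should be classified the same" are positives.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def pair_features(s1: str, s2: str) -> np.ndarray:
    # Concatenating |e1 - e2| and e1 * e2 is a common featurization
    # for sentence-pair classification.
    e1, e2 = encoder.encode([s1, s2])
    return np.concatenate([np.abs(e1 - e2), e1 * e2])

# Toy placeholder data; in the paper these labels come from the crowdsourcing study.
pairs = [
    ("He is a great engineer.", "She is a great engineer."),    # intuitively similar
    ("He is a great engineer.", "He is a terrible engineer."),  # not similar
]
labels = [1, 0]

X = np.stack([pair_features(a, b) for a, b in pairs])
spec = LogisticRegression().fit(X, labels)
# spec.predict_proba can now score new candidate pairs; high-scoring pairs
# can serve as individual-fairness constraints for a downstream classifier.
```

A downstream toxicity classifier could then be regularized to produce near-identical outputs on pairs that the learned specification marks as similar, which is the fairness-aware training the abstract refers to.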


Related research

06/28/2019 · Learning fair predictors with Sensitive Subspace Robustness
We consider an approach to training machine learning systems that are fa...

06/08/2018 · Blind Justice: Fairness with Encrypted Sensitive Attributes
Recent work has explored how to train machine learning models which do n...

09/10/2021 · Style Pooling: Automatic Text Style Obfuscation for Improved Classification Fairness
Text style can reveal sensitive attributes of the author (e.g. race or a...

11/29/2022 · Learning Antidote Data to Individual Unfairness
Fairness is an essential factor for machine learning systems deployed in...

08/12/2021 · Syntax Matters! An Error Analysis of Syntax-Controlled Text Style Transfer
Existing text style transfer (TST) methods rely on style classifiers to ...

04/10/2020 · Style-transfer and Paraphrase: Looking for a Sensible Semantic Similarity Metric
The rapid development of such natural language processing tasks as style...
