Easy Adaptation to Mitigate Gender Bias in Multilingual Text Classification

04/12/2022
by Xiaolei Huang, et al.

Existing approaches to mitigating demographic biases are evaluated on monolingual data; multilingual data has not been examined. In this work, we treat gender as a domain (e.g., male vs. female) and present a standard domain adaptation model to reduce gender bias and improve the performance of text classifiers in multilingual settings. We evaluate our approach on two text classification tasks, hate speech detection and rating prediction, and demonstrate its effectiveness against three fairness-aware baselines.
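The abstract treats gender as domains and applies a standard domain adaptation model, but does not spell out the mechanism here. As a rough illustration only, the sketch below uses feature augmentation in the style of "frustratingly easy" domain adaptation, with gender as the domain: each example's features are copied into a shared block plus a block for its own gender, so the classifier can learn both general and gender-specific weights. The toy data, TF-IDF features, and logistic-regression classifier are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch: feature-augmentation domain adaptation with gender as the
# domain (male vs. female). Illustrative assumptions throughout; not the
# authors' exact model or data.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

DOMAINS = ["male", "female"]  # treat the gender attribute as the domain


def augment(x, domain, domains=DOMAINS):
    """Map features x to [shared | male-specific | female-specific].

    The shared block is always a copy of x; the block matching the example's
    domain gets another copy, and the other domain blocks stay zero.
    """
    blocks = [x] + [x if d == domain else np.zeros_like(x) for d in domains]
    return np.concatenate(blocks)


# Hypothetical toy data: (text, author gender, label).
train = [
    ("example offensive post", "male", 1),
    ("example friendly post", "female", 0),
    ("another example offensive post", "female", 1),
    ("another example friendly post", "male", 0),
]

vectorizer = TfidfVectorizer()
X_base = vectorizer.fit_transform([text for text, _, _ in train]).toarray()
X_aug = np.stack(
    [augment(x, gender) for x, (_, gender, _) in zip(X_base, train)]
)
y = np.array([label for _, _, label in train])

clf = LogisticRegression(max_iter=1000).fit(X_aug, y)

# At prediction time, the test example is augmented with its own gender.
x_new = vectorizer.transform(["example new post"]).toarray()[0]
print(clf.predict(augment(x_new, "female").reshape(1, -1)))
```

The shared block lets both gender groups contribute to one common classifier, while the gender-specific blocks absorb group-specific signal that would otherwise bias the shared weights; the same augmentation idea carries over when the bag-of-words features are replaced by neural encoder representations.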


Related research

- Mono vs Multilingual BERT for Hate Speech Detection and Text Classification: A Case Study in Marathi (04/19/2022)
- Multilingual Twitter Corpus and Baselines for Evaluating Demographic Bias in Hate Speech Recognition (02/24/2020)
- Model and Evaluation: Towards Fairness in Multilingual Text Classification (03/28/2023)
- tax2vec: Constructing Interpretable Features from Taxonomies for Short Text Classification (02/01/2019)
- Multilingual Cross-domain Perspectives on Online Hate Speech (09/11/2018)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild (03/16/2021)
- Controlling Bias Exposure for Fair Interpretable Predictions (10/14/2022)
