HERB: Measuring Hierarchical Regional Bias in Pre-trained Language Models

11/05/2022
by Yizhi Li, et al.

Fairness has become a trending topic in natural language processing (NLP), addressing biases that target certain social groups such as genders and religions. However, regional bias in language models (LMs), a long-standing form of global discrimination, remains largely unexplored. This paper bridges the gap by analysing the regional bias learned by pre-trained language models that are broadly used in NLP tasks. In addition to verifying the existence of regional bias in LMs, we find that the bias against regional groups can be strongly influenced by the geographical clustering of those groups. We accordingly propose a HiErarchical Regional Bias evaluation method (HERB) that utilises information from sub-region clusters to quantify the bias in pre-trained LMs. Experiments show that our hierarchical metric can effectively evaluate regional bias across a comprehensive set of topics and measure the potential regional bias that may propagate to downstream tasks. Our code is available at https://github.com/Bernard-Yang/HERB.
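The sketch below illustrates one way a hierarchical, cluster-aware bias score of this kind could be aggregated: per-region bias scores are combined within each geographic cluster and then averaged across clusters. The function, the variance-based weighting, and all numbers are assumptions for illustration only; they are not the paper's actual HERB metric, which is defined in the full text.

    # Illustrative sketch only: two-level aggregation of per-region bias
    # scores (within geographic clusters, then across clusters). Not the
    # paper's actual HERB formulation; all names and values are hypothetical.
    from statistics import mean, pstdev

    def hierarchical_bias(cluster_scores: dict[str, dict[str, float]]) -> float:
        """cluster_scores maps a geographic cluster (e.g. a continent) to
        per-region bias scores obtained from a language model."""
        per_cluster = []
        for cluster, scores in cluster_scores.items():
            values = list(scores.values())
            # Within-cluster level: mean bias plus its spread, so clusters
            # whose members are treated unevenly contribute more.
            per_cluster.append(mean(values) + pstdev(values))
        # Top level: average the cluster-level scores.
        return mean(per_cluster)

    # Hypothetical scores for illustration only.
    example = {
        "Europe": {"France": 0.12, "Poland": 0.31},
        "Africa": {"Kenya": 0.44, "Ghana": 0.38},
    }
    print(round(hierarchical_bias(example), 3))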


research 04/21/2022
Towards an Enhanced Understanding of Bias in Pre-trained Neural Language Models: A Survey with Special Emphasis on Affective Bias
The remarkable progress in Natural Language Processing (NLP) brought abo...

research 03/25/2021
Equality before the Law: Legal Judgment Consistency Analysis for Fairness
In a legal system, judgment consistency is regarded as one of the most i...

research 10/06/2020
On the Branching Bias of Syntax Extracted from Pre-trained Language Models
Many efforts have been devoted to extracting constituency trees from pre...

research 10/16/2021
An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-Trained Language Models
Recent work has shown that pre-trained language models capture social bi...

research 05/25/2022
Perturbation Augmentation for Fairer NLP
Unwanted and often harmful social biases are becoming ever more salient ...

research 05/24/2023
Uncovering and Quantifying Social Biases in Code Generation
With the popularity of automatic code generation tools, such as Copilot,...

research 04/08/2022
Fair and Argumentative Language Modeling for Computational Argumentation
Although much work in NLP has focused on measuring and mitigating stereo...
