Politeness Stereotypes and Attack Vectors: Gender Stereotypes in Japanese and Korean Language Models

06/16/2023
by Victor Steinborn, et al.

In efforts to keep pace with the rapid progress and adoption of large language models, gender bias research is becoming more prevalent in NLP. Most work, however, still focuses on English, leaving non-English bias research in its infancy. In our work, we study how grammatical gender bias relating to politeness levels manifests in Japanese and Korean language models. Linguistic studies in these languages have identified a connection between gender bias and politeness levels; however, it is not yet known whether language models reproduce these biases. Using templates, we analyze the relative prediction probabilities of the male and female grammatical genders and find that informal polite speech is most indicative of the female grammatical gender, while rude and formal speech is most indicative of the male grammatical gender. Further, we find politeness levels to be an attack vector for allocational gender bias in cyberbullying detection models: cyberbullies can evade detection through simple techniques that abuse politeness levels. We introduce an attack dataset to (i) identify representational gender bias across politeness levels, (ii) demonstrate how gender biases can be abused to bypass cyberbullying detection models, and (iii) show that allocational biases can be mitigated by training on our proposed dataset. Through our findings, we highlight the importance of bias research moving beyond its current English-centrism.
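The template-based analysis described above can be sketched as comparing a masked language model's probabilities for gendered tokens across politeness-level contexts. The sketch below is illustrative only: the function name, the log-ratio score, and the probability values are assumptions, not the paper's exact method or numbers; in practice the probabilities would come from a Japanese or Korean masked language model scoring gendered fill-ins for each politeness template.

```python
import math

def gender_bias_score(p_female: float, p_male: float) -> float:
    """Log-ratio of female vs. male token probability under one context.

    Positive values mean the context is more indicative of the female
    grammatical gender; negative values favor the male grammatical gender.
    """
    return math.log(p_female / p_male)

# Hypothetical masked-LM probabilities (p_female, p_male) for one gendered
# token pair under templates at three politeness levels. These numbers are
# invented for illustration, not taken from the paper.
templates = {
    "informal_polite": (0.62, 0.21),
    "formal": (0.18, 0.55),
    "rude": (0.09, 0.48),
}

scores = {
    level: gender_bias_score(pf, pm)
    for level, (pf, pm) in templates.items()
}
```

Under this toy data, the informal-polite context yields a positive score (female-indicative) while the formal and rude contexts yield negative scores (male-indicative), mirroring the pattern the abstract reports.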

Related research

04/12/2023 · Measuring Gender Bias in West Slavic Language Models
Pre-trained language models have been known to perpetuate biases from th...

09/13/2023 · In-Contextual Bias Suppression for Large Language Models
Despite their impressive performance in a wide range of NLP tasks, Large...

07/21/2022 · The Birth of Bias: A case study on the evolution of gender bias in an English language model
Detecting and mitigating harmful biases in modern language models are wi...

05/25/2023 · Emergence of a phonological bias in ChatGPT
Current large language models, such as OpenAI's ChatGPT, have captured t...

05/13/2022 · Analyzing Hate Speech Data along Racial, Gender and Intersectional Axes
To tackle the rising phenomenon of hate speech, efforts have been made t...

12/22/2021 · Quantifying Gender Biases Towards Politicians on Reddit
Despite attempts to increase gender parity in politics, global efforts h...

10/16/2021 · ASR4REAL: An extended benchmark for speech models
Popular ASR benchmarks such as Librispeech and Switchboard are limited i...
