Speaking Multiple Languages Affects the Moral Bias of Language Models

11/14/2022
by Katharina Hämmerl, et al.

Pre-trained multilingual language models (PMLMs) are commonly used when dealing with data from multiple languages and cross-lingual transfer. However, PMLMs are trained on varying amounts of data for each language. In practice, this means their performance is often much better on English than on many other languages. We explore to what extent this also applies to moral norms. Do the models capture moral norms from English and impose them on other languages? Do the models exhibit random and thus potentially harmful beliefs in certain languages? Both these issues could negatively impact cross-lingual transfer and potentially lead to harmful outcomes. In this paper, we (1) apply the MoralDirection framework to multilingual models, comparing results in German, Czech, Arabic, Mandarin Chinese, and English, (2) analyse model behaviour on filtered parallel subtitle corpora, and (3) apply the models to a Moral Foundations Questionnaire, comparing with human responses from different countries. Our experiments demonstrate that, indeed, PMLMs encode differing moral biases, but these do not necessarily correspond to cultural differences or commonalities in human opinions.
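For context, the MoralDirection framework scores phrases by projecting their sentence embeddings onto a "moral direction" obtained from a principal component analysis over embeddings of morally polarised prompts. The sketch below is a minimal illustration of that idea, assuming a sentence-transformers encoder and scikit-learn PCA; the model checkpoint, prompt templates, and normalisation are illustrative assumptions, not the paper's exact setup.

```python
# Minimal MoralDirection-style sketch (assumptions: encoder checkpoint,
# prompt templates, and scoring details differ from the paper's setup).
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

# A multilingual sentence encoder; the specific checkpoint is an assumption.
model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

# Illustrative prompts with a clear positive / negative moral polarity.
positive = ["help people", "smile", "be honest", "protect the environment"]
negative = ["kill people", "steal money", "lie to others", "harm animals"]

def embed(phrases):
    # Wrap each phrase in a question template before encoding.
    return model.encode([f"Should I {p}?" for p in phrases])

# Fit PCA on the polarised prompts; the first principal component
# serves as the "moral direction".
X = np.vstack([embed(positive), embed(negative)])
pca = PCA(n_components=1).fit(X)

# Orient the axis so that positive prompts receive positive scores.
sign = np.sign(np.mean(pca.transform(embed(positive))))

def moral_score(phrase: str) -> float:
    """Project a phrase onto the moral direction; the sign indicates polarity."""
    return float(sign * pca.transform(embed([phrase]))[0, 0])

print(moral_score("help elderly people"))  # expected > 0
print(moral_score("harm elderly people"))  # expected < 0
```

Applied across languages, such a score can be compared for translation-equivalent sentences (e.g., from parallel subtitle corpora) to check whether a multilingual model assigns them consistent moral polarity.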
