DLAMA: A Framework for Curating Culturally Diverse Facts for Probing the Knowledge of Pretrained Language Models

06/08/2023
by Amr Keleg, et al.

A few benchmarking datasets have been released to evaluate the factual knowledge of pretrained language models. These benchmarks (e.g., LAMA and ParaRel) were mainly developed in English and later translated to form new multilingual versions (e.g., mLAMA and mParaRel). Results on these multilingual benchmarks suggest that using English prompts to recall facts from multilingual models usually yields significantly better and more consistent performance than using non-English prompts. Our analysis shows that mLAMA is biased toward facts from Western countries, which might affect the fairness of probing models. We propose a new framework for curating culturally diverse factual triples from Wikidata. A new benchmark, DLAMA-v1, is built of factual triples from three pairs of contrasting cultures, with a total of 78,259 triples covering 20 relation predicates. The three pairs comprise facts representing (Arab and Western), (Asian and Western), and (South American and Western) countries, respectively. Probing on the more balanced DLAMA-v1 shows that mBERT performs better on Western facts than on non-Western ones, while monolingual Arabic, English, and Korean models tend to perform better on facts that are culturally proximate to them. Moreover, both monolingual and multilingual models tend to make predictions that are culturally or geographically relevant to the correct label, even when the prediction is wrong.
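As a rough illustration of the pipeline the abstract describes, the sketch below queries the public Wikidata SPARQL endpoint for one relation predicate (P36, capital) over two small contrasting country sets, then probes multilingual BERT with an English cloze prompt. This is a minimal sketch under stated assumptions: the query, prompt template, and country lists are illustrative, not the authors' implementation, and DLAMA-v1 covers 20 relation predicates rather than just P36. Comparing accuracy between the two sets mirrors the paper's Western vs. non-Western contrast.

```python
# Minimal sketch of a DLAMA-style pipeline: curate (country, capital) triples
# from Wikidata for two contrasting country sets, then probe a multilingual
# masked LM with a cloze prompt. Illustrative only; not the paper's code.
import requests
from transformers import pipeline

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

def capital_triples(country_qids):
    """Fetch (country label, capital label) pairs for the given Wikidata QIDs."""
    values = " ".join(f"wd:{qid}" for qid in country_qids)
    query = f"""
    SELECT ?countryLabel ?capitalLabel WHERE {{
      VALUES ?country {{ {values} }}
      ?country wdt:P36 ?capital .          # P36 = capital
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}
    """
    resp = requests.get(
        WIKIDATA_SPARQL,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "dlama-sketch/0.1"},  # polite UA for Wikimedia
        timeout=30,
    )
    resp.raise_for_status()
    return [
        (b["countryLabel"]["value"], b["capitalLabel"]["value"])
        for b in resp.json()["results"]["bindings"]
    ]

# Illustrative contrasting sets: two Arab and two Western countries.
arab = capital_triples(["Q79", "Q1016"])      # Egypt, Libya
western = capital_triples(["Q142", "Q183"])   # France, Germany

# Probe multilingual BERT with an English cloze prompt for each triple;
# the top fill-mask prediction is compared against the Wikidata gold label.
fill = pipeline("fill-mask", model="bert-base-multilingual-cased")
for country, capital in arab + western:
    prediction = fill(f"The capital of {country} is [MASK].")[0]["token_str"]
    print(f"{country}: gold={capital!r} predicted={prediction!r}")
```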

Related Research

03/22/2022
Factual Consistency of Multilingual Pretrained Language Models
Pretrained language models can be queried for factual knowledge, with po...

12/31/2020
How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models
In this work we provide a systematic empirical comparison of pretrained ...

02/01/2021
Multilingual LAMA: Investigating Knowledge in Multilingual Pretrained Language Models
Recently, it has been found that monolingual English language models can...

09/14/2023
Multilingual Audio Captioning using machine translated data
Automated Audio Captioning (AAC) systems attempt to generate a natural l...

06/02/2023
Knowledge of cultural moral norms in large language models
Moral norms vary across cultures. A recent line of work suggests that En...

09/11/2021
The Impact of Positional Encodings on Multilingual Compression
In order to preserve word-order information in a non-autoregressive sett...

11/28/2019
How Can We Know What Language Models Know?
Recent work has presented intriguing results examining the knowledge con...
