BabelBERT: Massively Multilingual Transformers Meet a Massively Multilingual Lexical Resource

08/01/2022
by Tommaso Green, et al.

While pretrained language models (PLMs) primarily serve as general-purpose text encoders that can be fine-tuned for a wide variety of downstream tasks, recent work has shown that they can also be rewired to produce high-quality word representations (i.e., static word embeddings) and yield good performance in type-level lexical tasks. While existing work has primarily focused on lexical specialization of PLMs in monolingual and bilingual settings, in this work we expose massively multilingual transformers (MMTs, e.g., mBERT or XLM-R) to multilingual lexical knowledge at scale, leveraging BabelNet as a readily available, rich source of multilingual and cross-lingual type-level lexical knowledge. Concretely, we use BabelNet's multilingual synsets to create synonym pairs across 50 languages and then subject the MMTs (mBERT and XLM-R) to a lexical specialization procedure guided by a contrastive objective. We show that such massively multilingual lexical specialization yields substantial gains in two standard cross-lingual lexical tasks, bilingual lexicon induction and cross-lingual word similarity, as well as in cross-lingual sentence retrieval. Crucially, we observe gains for languages unseen in specialization, indicating that the multilingual lexical specialization enables generalization to languages for which no lexical constraints are available. In a series of subsequent controlled experiments, we demonstrate that the pretraining quality of word representations in the MMT for languages involved in specialization has a much larger effect on performance than the linguistic diversity of the set of constraints. Encouragingly, this suggests that lexical tasks involving low-resource languages benefit the most from lexical knowledge from resource-rich languages, which is generally far more abundant.
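The specialization procedure described above pairs cross-lingual synonyms drawn from BabelNet synsets and fine-tunes the MMT with a contrastive objective. The snippet below is a minimal sketch of that idea, assuming an InfoNCE-style loss with in-batch negatives, mean-pooled subword embeddings, and a Hugging Face mBERT checkpoint; the model name, pooling choice, temperature, learning rate, and toy synonym pairs are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of contrastive lexical specialization over cross-lingual synonym pairs.
# Assumptions (not from the paper): mean-pooling, InfoNCE with in-batch
# negatives, temperature 0.05, toy word pairs instead of BabelNet synsets.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # could also be "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

def encode(words):
    """Embed a batch of words by mean-pooling their subword representations."""
    batch = tokenizer(words, padding=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state             # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    return F.normalize(pooled, dim=-1)

def info_nce_loss(anchors, positives, temperature=0.05):
    """In-batch InfoNCE: each anchor is pulled toward its own synonym and
    pushed away from every other synonym in the batch."""
    logits = anchors @ positives.t() / temperature  # (B, B) similarity matrix
    labels = torch.arange(logits.size(0))           # diagonal entries are the positives
    return F.cross_entropy(logits, labels)

# Toy cross-lingual synonym pairs; real training pairs come from BabelNet synsets.
pairs = [("dog", "Hund"), ("house", "casa"), ("water", "acqua")]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()

anchors = encode([src for src, _ in pairs])
positives = encode([tgt for _, tgt in pairs])
loss = info_nce_loss(anchors, positives)
loss.backward()
optimizer.step()
print(f"contrastive loss: {loss.item():.4f}")
```

In a full run, one would stream large numbers of BabelNet-derived pairs in mini-batches and then evaluate the specialized encoder on the cross-lingual lexical tasks mentioned above, e.g., bilingual lexicon induction via nearest-neighbor search over the resulting word embeddings.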

Related research

05/02/2021
Larger-Scale Transformers for Multilingual Masked Language Modeling
Recent work has demonstrated the effectiveness of cross-lingual language...

05/11/2023
A General-Purpose Multilingual Document Encoder
Massively multilingual pretrained transformers (MMTs) have tremendously ...

10/12/2020
Probing Pretrained Language Models for Lexical Semantics
The success of large pretrained language models (LMs) such as BERT and R...

04/17/2021
AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples
Capturing word meaning in context and distinguishing between corresponde...

03/09/2022
Language Diversity: Visible to Humans, Exploitable by Machines
The Universal Knowledge Core (UKC) is a large multilingual lexical datab...

11/21/2018
The Best of Both Worlds: Lexical Resources To Improve Low-Resource Part-of-Speech Tagging
In natural language processing, the deep learning revolution has shifted...

09/07/2021
Mixed Attention Transformer for Leveraging Word-Level Knowledge to Neural Cross-Lingual Information Retrieval
Pretrained contextualized representations offer great success for many d...
