A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages

06/11/2020
by Pedro Ortiz Suárez, et al.

We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitute the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.
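Neither the training pipeline nor the evaluation code appears on this page, but the recipe the abstract describes can be sketched. The snippet below is a minimal, hypothetical illustration of two steps: pulling a monolingual slice of OSCAR from the Hugging Face Hub and dumping it to plain text suitable for ELMo-style language-model training, then loading a trained ELMo model with AllenNLP to obtain contextual token representations for a downstream tagger or parser. The dataset config name, the choice of Finnish as the example language, and the options/weights file paths are assumptions made for illustration, not artifacts released with the paper.

# Sketch only: assumes the `datasets` and `allennlp` packages are installed.
# Step 1: extract a monolingual OSCAR slice as plain text.
from datasets import load_dataset

# "unshuffled_deduplicated_fi" follows OSCAR's config naming on the
# Hugging Face Hub; Finnish is used here purely as an example language.
oscar_fi = load_dataset(
    "oscar", "unshuffled_deduplicated_fi", split="train", streaming=True
)

with open("oscar_fi.txt", "w", encoding="utf-8") as out:
    for doc in oscar_fi:
        # One document per line; the language-model trainer tokenizes later.
        out.write(doc["text"].replace("\n", " ").strip() + "\n")

# Step 2: after training a bidirectional LM on that dump (e.g. with the
# original bilm-tf ELMo code), load it and embed tokenized sentences.
from allennlp.modules.elmo import Elmo, batch_to_ids

# Hypothetical output files of such a training run.
options_file = "oscar_fi_elmo_options.json"
weight_file = "oscar_fi_elmo_weights.hdf5"

elmo = Elmo(options_file, weight_file, num_output_representations=1, dropout=0.0)

sentences = [["Tämä", "on", "esimerkki", "."]]
char_ids = batch_to_ids(sentences)                    # character ids per token
reps = elmo(char_ids)["elmo_representations"][0]      # one contextual vector per token
print(reps.shape)

In the setup the abstract describes, these contextual vectors would feed a part-of-speech tagger and dependency parser, which is where the OSCAR-trained embeddings are compared against Wikipedia-trained monolingual ELMo and multilingual BERT.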


