Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation

06/04/2019
by Benjamin Heinzerling, et al.

Pretrained contextual and non-contextual subword embeddings have become available in over 250 languages, allowing massively multilingual NLP. However, while there is no dearth of pretrained embeddings, the distinct lack of systematic evaluations makes it difficult for practitioners to choose between them. In this work, we conduct an extensive evaluation comparing non-contextual subword embeddings, namely FastText and BPEmb, and a contextual representation method, namely BERT, on multilingual named entity recognition and part-of-speech tagging. We find that overall, a combination of BERT, BPEmb, and character representations works best across languages and tasks. A more detailed analysis reveals different strengths and weaknesses: Multilingual BERT performs well in medium- to high-resource languages, but is outperformed by non-contextual subword embeddings in a low-resource setting.
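A minimal sketch of how such a combination of representations might look in code, using the bpemb and Hugging Face transformers packages. This is not the authors' implementation; the model names, dimensions, and mean-pooling are illustrative assumptions only.

```python
# Sketch: combining non-contextual (BPEmb) and contextual (multilingual BERT)
# subword representations for one sentence. A sequence tagger (e.g. a
# BiLSTM-CRF) would consume per-token versions of this combined vector.
import torch
from bpemb import BPEmb
from transformers import AutoModel, AutoTokenizer

sentence = "Berlin is the capital of Germany"

# Non-contextual subword embeddings (English BPEmb, 100k merge ops, dim 100).
bpemb = BPEmb(lang="en", vs=100000, dim=100)
bpemb_vecs = bpemb.embed(sentence)                  # (num_subwords, 100)
bpemb_repr = torch.tensor(bpemb_vecs).mean(dim=0)   # (100,) mean-pooled here for brevity

# Contextual subword embeddings from multilingual BERT.
tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased")
with torch.no_grad():
    out = bert(**tok(sentence, return_tensors="pt"))
bert_repr = out.last_hidden_state[0].mean(dim=0)     # (768,)

# Concatenate the two views into a single representation.
combined = torch.cat([bpemb_repr, bert_repr])        # (868,)
print(combined.shape)
```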

