More Embeddings, Better Sequence Labelers?

09/17/2020
by Xinyu Wang et al.

Recent work has proposed a family of contextual embeddings that significantly improve the accuracy of sequence labelers over non-contextual embeddings. However, there is no definitive conclusion on whether combining different kinds of embeddings yields better sequence labelers across settings. In this paper, we conduct extensive experiments on 3 tasks over 18 datasets and 8 languages to study the accuracy of sequence labeling with various embedding concatenations, and make three observations: (1) concatenating more embedding variants leads to better accuracy in rich-resource and cross-domain settings, and under some low-resource conditions; (2) concatenating additional contextual sub-word embeddings with contextual character embeddings hurts accuracy in extremely low-resource settings; (3) following from observation (1), concatenating additional similar contextual embeddings does not lead to further improvements. We hope these conclusions help practitioners build stronger sequence labelers in various settings.
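The embedding concatenation studied in the paper can be sketched as follows. This is a minimal NumPy illustration with invented dimensions and array names; in a real labeler, the concatenated per-token vectors would be produced by pretrained embedders (e.g. character-level and sub-word-level models) and fed into a downstream tagger such as a BiLSTM-CRF.

```python
import numpy as np

def concatenate_embeddings(*embedding_sets):
    """Concatenate per-token embeddings from several embedders.

    Each argument is an array of shape (num_tokens, dim_i); the result
    has shape (num_tokens, sum of all dim_i), so each token's vector is
    the concatenation of its vectors from every embedder.
    """
    return np.concatenate(embedding_sets, axis=-1)

# Toy sentence of 5 tokens: a 4-dim "character" embedding and a
# 6-dim "sub-word" embedding (both dimensions are made up here).
rng = np.random.default_rng(0)
char_emb = rng.random((5, 4))
subword_emb = rng.random((5, 6))

combined = concatenate_embeddings(char_emb, subword_emb)
print(combined.shape)  # (5, 10)
```

The concatenated representation simply grows the input dimension of the tagger; whether the extra dimensions help or hurt is exactly what the paper's rich-resource versus low-resource comparisons measure.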


