On the Effectiveness of Dataset Embeddings in Mono-lingual, Multi-lingual and Zero-shot Conditions

03/01/2021
by Rob van der Goot, et al.

Recent complementary strands of research have shown that leveraging information on the data source, by encoding its properties into embeddings, can lead to performance increases when training a single model on heterogeneous data sources. However, it remains unclear in which situations these dataset embeddings are most effective, because they are used in a large variety of settings, languages, and tasks. Furthermore, it is usually assumed that gold information on the data source is available, and that the test data is drawn from a distribution seen during training. In this work, we compare the effect of dataset embeddings in mono-lingual settings, multi-lingual settings, and with predicted data source labels in a zero-shot setting. We evaluate on three morphosyntactic tasks: morphological tagging, lemmatization, and dependency parsing, using 104 datasets, 66 languages, and two different dataset grouping strategies. Performance increases are highest when the datasets are of the same language and we know from which distribution the test instance is drawn. In contrast, in setups where the data comes from an unseen distribution, the performance increase vanishes.
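To make the underlying idea concrete, the sketch below shows one common way dataset embeddings are injected into a single shared model: each training dataset (e.g., a treebank) gets its own learned embedding vector, which is concatenated to every token representation before encoding. This is a minimal PyTorch illustration, not the authors' implementation; the class and parameter names (DatasetEmbeddingTagger, dataset_dim, and so on) are assumptions chosen for readability.

```python
import torch
import torch.nn as nn

class DatasetEmbeddingTagger(nn.Module):
    """Toy sequence tagger conditioned on the data source.

    Each training dataset gets its own learned embedding, which is
    concatenated to every token embedding so that a single shared
    model can adapt to heterogeneous data sources.
    """

    def __init__(self, vocab_size, num_datasets, num_tags,
                 word_dim=128, dataset_dim=16, hidden_dim=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.dataset_emb = nn.Embedding(num_datasets, dataset_dim)
        self.encoder = nn.LSTM(word_dim + dataset_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids, dataset_ids):
        # token_ids: (batch, seq_len); dataset_ids: (batch,)
        words = self.word_emb(token_ids)                    # (B, T, word_dim)
        ds = self.dataset_emb(dataset_ids)                  # (B, dataset_dim)
        ds = ds.unsqueeze(1).expand(-1, words.size(1), -1)  # repeat per token
        hidden, _ = self.encoder(torch.cat([words, ds], dim=-1))
        return self.out(hidden)                             # per-token tag scores
```

In the standard setting, the gold dataset ID of each instance is known at both training and test time. In the zero-shot setting studied here, the test instance comes from an unseen distribution, so the dataset ID fed to such a model must instead be predicted, for example by a classifier over the input text.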

