Data-driven Model Generalizability in Crosslinguistic Low-resource Morphological Segmentation

01/05/2022
by Zoey Liu, et al.

Common designs of model evaluation typically focus on monolingual settings, where different models are compared according to their performance on a single data set that is assumed to be representative of all possible data for the task at hand. While this may be reasonable for a large data set, this assumption is difficult to maintain in low-resource scenarios, where artifacts of the data collection can yield data sets that are outliers, potentially making conclusions about model performance coincidental. To address these concerns, we investigate model generalizability in crosslinguistic low-resource scenarios. Using morphological segmentation as the test case, we compare three broad classes of models with different parameterizations, taking data from 11 languages across 6 language families. In each experimental setting, we evaluate all models on a first data set, then examine their performance consistency when introducing new randomly sampled data sets of the same size and when applying the trained models to unseen test sets of varying sizes. The results demonstrate that the extent of model generalization depends on the characteristics of the data set, and does not necessarily rely heavily on the data set size. Among the characteristics that we studied, the two most prominent factors are the morpheme-overlap ratio between the training and test sets and the ratio of their average numbers of morphemes per word. Our findings suggest that future work should adopt random sampling to construct data sets of different sizes in order to make more responsible claims about model evaluation.
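The two data-set characteristics highlighted above, together with the random-sampling protocol the abstract recommends, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy corpus, function names, and split sizes are all hypothetical, and a segmented word is represented simply as a list of morpheme strings.

```python
import random

# Hypothetical segmented corpus: each word is a list of morphemes.
# These toy items stand in for a real morphological segmentation data set.
corpus = [
    ["un", "break", "able"], ["break", "ing"], ["walk", "ed"],
    ["un", "do"], ["do", "ing"], ["walk", "s"], ["re", "do"],
    ["read", "er"], ["read", "ing"], ["teach", "er"],
]

def split_corpus(corpus, train_size, seed):
    """Randomly sample a train/test split with a fixed training-set size."""
    rng = random.Random(seed)
    shuffled = corpus[:]
    rng.shuffle(shuffled)
    return shuffled[:train_size], shuffled[train_size:]

def morpheme_overlap(train, test):
    """Fraction of the test set's morpheme types also seen in training."""
    train_types = {m for word in train for m in word}
    test_types = {m for word in test for m in word}
    return len(train_types & test_types) / len(test_types)

def mean_morphemes_per_word(data):
    """Average number of morphemes per word in a data set."""
    return sum(len(w) for w in data) / len(data)

# Resampling with different seeds shows how much these characteristics
# can vary across equally sized random splits of the same corpus.
for seed in range(3):
    train, test = split_corpus(corpus, train_size=7, seed=seed)
    overlap = morpheme_overlap(train, test)
    ratio = mean_morphemes_per_word(train) / mean_morphemes_per_word(test)
    print(f"seed={seed}  overlap={overlap:.2f}  morphemes/word ratio={ratio:.2f}")
```

Because both statistics fluctuate from one random split to the next, reporting results on several such splits, rather than a single fixed data set, gives a more honest picture of model generalization.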


