Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text

04/02/2019
by Toms Bergmanis, et al.

Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in low-resource languages. In addition (as shown here), in a low-resource setting, a lemmatizer can learn more from n labeled examples of distinct words (types) than from n (contiguous) labeled tokens, since the latter contain far fewer distinct types. To combine the efficiency of type-based learning with the benefits of context, we propose a way to train a context-sensitive lemmatizer with little or no labeled corpus data, using inflection tables from the UniMorph project and raw text examples from Wikipedia that provide sentence contexts for the unambiguous UniMorph examples. Despite these being unambiguous examples, the model successfully generalizes from them, leading to improved results (both overall, and especially on unseen words) in comparison to a baseline that does not use context.
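The augmentation idea in the abstract can be sketched as two steps: collect (form, lemma) pairs from inflection tables, keep only forms whose lemma is unambiguous, then harvest raw-text sentences to give those forms a context. The following is a minimal illustrative sketch, not the paper's implementation; the function names, the context-window scheme, and the toy data are all assumptions for demonstration.

```python
# Sketch of the augmentation described above: inflection tables supply
# (form, lemma) pairs; raw text supplies sentence contexts for forms
# whose lemma is unambiguous. Toy data; not the paper's actual pipeline.
from collections import defaultdict

def unambiguous_lemmas(inflection_tables):
    """Keep only forms that map to exactly one lemma across all tables."""
    form_to_lemmas = defaultdict(set)
    for lemma, forms in inflection_tables.items():
        for form in forms:
            form_to_lemmas[form].add(lemma)
    return {f: next(iter(ls)) for f, ls in form_to_lemmas.items()
            if len(ls) == 1}

def contextual_examples(sentences, form_lemma, window=2):
    """Yield (left context, form, right context, lemma) training examples."""
    for sent in sentences:
        tokens = sent.split()
        for i, tok in enumerate(tokens):
            lemma = form_lemma.get(tok.lower())
            if lemma is not None:
                left = tokens[max(0, i - window):i]
                right = tokens[i + 1:i + 1 + window]
                yield (left, tok, right, lemma)

# Hypothetical UniMorph-style tables and raw sentences:
tables = {"run": ["run", "runs", "running", "ran"],
          "walk": ["walk", "walks", "walking", "walked"]}
sents = ["she ran home quickly", "they are walking to school"]
examples = list(contextual_examples(sents, unambiguous_lemmas(tables)))
# Each example pairs an unambiguous form, its sentence context, and its lemma.
```

Because every harvested form has exactly one possible lemma, the examples need no manual annotation, yet each one still carries the surrounding context that a context-sensitive lemmatizer can learn from.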

