Better Character Language Modeling Through Morphology

by Terra Blevins, et al.
University of Washington

We incorporate morphological supervision into character language models (CLMs) via multitasking and show that this addition improves bits-per-character (BPC) performance across 24 languages, even when the morphology data and language modeling data are disjoint. Analyzing the CLMs shows that inflected words benefit more from explicitly modeling morphology than uninflected words, and that morphological supervision improves performance even as the amount of language modeling data grows. We then transfer morphological supervision across languages to improve language modeling performance in the low-resource setting.
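The abstract describes two pieces that can be illustrated concretely: a multitask objective that mixes a character language modeling loss with a morphological prediction loss over a shared representation, and the bits-per-character (BPC) metric used to evaluate the CLMs. The sketch below is not the paper's implementation; it is a minimal illustration of the general idea, and all dimensions, weight names (`W_lm`, `W_morph`), and the mixing weight `lam` are hypothetical.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def matvec(W, h):
    """Matrix-vector product for plain Python lists."""
    return [sum(w * x for w, x in zip(row, h)) for row in W]

random.seed(0)
# Hypothetical sizes: shared hidden state, character vocabulary, tag set.
HIDDEN, N_CHARS, N_TAGS = 8, 4, 3

# Two task-specific output heads over one shared hidden representation,
# as in a generic multitask setup: one predicts the next character,
# the other predicts a morphological tag.
W_lm = [[random.gauss(0, 1) for _ in range(HIDDEN)] for _ in range(N_CHARS)]
W_morph = [[random.gauss(0, 1) for _ in range(HIDDEN)] for _ in range(N_TAGS)]

def joint_loss(hidden, next_char, morph_tag, lam=0.5):
    """Cross-entropy (in nats) for each head, mixed with weight lam.

    Returns (total multitask loss, language-modeling loss alone)."""
    p_char = softmax(matvec(W_lm, hidden))
    p_tag = softmax(matvec(W_morph, hidden))
    lm_loss = -math.log(p_char[next_char])
    morph_loss = -math.log(p_tag[morph_tag])
    return lm_loss + lam * morph_loss, lm_loss

def bpc(lm_losses_nats):
    """Bits-per-character: mean per-character loss, converted nats -> bits."""
    return sum(lm_losses_nats) / len(lm_losses_nats) / math.log(2)
```

As a sanity check on the metric: a model that assigns uniform probability over a 4-character vocabulary has a per-character loss of ln 4 nats, which converts to exactly 2.0 BPC. Only the language modeling head's loss enters the BPC computation; the morphology loss acts purely as auxiliary supervision during training.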



