Neural Polysynthetic Language Modelling

05/11/2020
by Lane Schwartz, et al.

Research in natural language processing commonly assumes that approaches that work well for English and other widely-used languages are "language agnostic". In high-resource languages, especially analytic ones, a common approach is to treat morphologically distinct variants of a common root as completely independent word types. This assumes that there are limited morphological inflections per root and that most of them will appear in a large enough corpus, so that the model can adequately learn statistics about each form. Approaches like stemming, lemmatization, or subword segmentation are often used when either of those assumptions does not hold, particularly in the case of synthetic languages like Spanish or Russian that have more inflection than English. In the literature, languages like Finnish or Turkish are held up as extreme examples of complexity that challenge common modelling assumptions. Yet, when considering all of the world's languages, Finnish and Turkish are closer to the average case. When we consider polysynthetic languages (those at the extreme of morphological complexity), approaches like stemming, lemmatization, or subword modelling may not suffice. These languages have very high numbers of hapax legomena, showing the need for appropriate morphological handling of words; without it, a model cannot capture enough word statistics. We examine the current state of the art in language modelling, machine translation, and text prediction for four polysynthetic languages: Guaraní, St. Lawrence Island Yupik, Central Alaskan Yup'ik, and Inuktitut. We then propose a novel framework for language modelling that combines knowledge representations from finite-state morphological analyzers with Tensor Product Representations in order to enable neural language models capable of handling the full range of typologically variant languages.
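To give a concrete sense of the kind of representation the proposed framework relies on, the sketch below shows the generic Tensor Product Representation binding operation in Python: each morpheme in a word's analysis (the filler) is bound to its positional slot (the role) with an outer product, and the bindings are summed into one fixed-size tensor per word. The morpheme inventory, embedding sizes, and the example analysis here are hypothetical stand-ins for illustration, not the paper's actual analyzer output or model.

```python
# Minimal sketch of a Tensor Product Representation (TPR) for one analysed word.
# Assumptions (not from the paper): random stand-in embeddings, a 3-morpheme
# analysis, and arbitrary filler/role dimensionalities.
import numpy as np

rng = np.random.default_rng(0)

FILLER_DIM = 8   # dimensionality of morpheme (filler) embeddings
ROLE_DIM = 4     # dimensionality of positional slot (role) embeddings

# Hypothetical output of a finite-state morphological analyzer:
# an ordered sequence of morphemes making up a single surface word.
analysis = ["root", "derivational_suffix", "inflectional_suffix"]

# Filler vectors: one embedding per morpheme in the analysis.
fillers = {m: rng.normal(size=FILLER_DIM) for m in analysis}

# Role vectors: one embedding per morpheme slot (first, second, third, ...).
roles = [rng.normal(size=ROLE_DIM) for _ in analysis]

# Bind each filler to its role with an outer product, then superimpose (sum)
# the bindings into a single fixed-size tensor representing the whole word.
word_tpr = sum(np.outer(fillers[m], r) for m, r in zip(analysis, roles))

# The representation has the same shape no matter how many morphemes the
# word contains, which is what makes it attractive for polysynthetic words.
print(word_tpr.shape)  # (FILLER_DIM, ROLE_DIM)
```

In a neural language model, such a per-word tensor (or a flattened version of it) could serve as the input embedding in place of a single whole-word vector, so that words sharing a root or suffix share structure even if the full surface form is a hapax legomenon.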
