Searching for Search Errors in Neural Morphological Inflection

02/16/2021
by Martina Forster, et al. (ETH Zurich)

Neural sequence-to-sequence models are currently the predominant choice for language generation tasks. Yet, on word-level tasks, exact inference of these models reveals that the empty string is often the global optimum. Prior works have speculated that this phenomenon results from the inadequacy of neural models for language generation. However, in the case of morphological inflection, we find that the empty string is almost never the most probable solution under the model. Further, greedy search often finds the global optimum. These observations suggest that the poor calibration of many neural models may stem from the characteristics of a specific subset of tasks rather than from a general ill-suitedness of such models to language generation.
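The comparison the abstract relies on is simple to sketch: under a locally normalized sequence-to-sequence model, the score of the empty string is just the probability of emitting end-of-sequence at the first decoding step, which can be compared directly against the score of the greedy hypothesis. Below is a minimal, self-contained sketch of that check; `toy_step`, `log_prob`, and `greedy_decode` are hypothetical stand-ins for a trained model's interface, not the paper's code.

```python
import math

EOS = "</s>"
VOCAB = ["a", "b", "c", EOS]

def toy_step(source, prefix):
    """Next-symbol distribution given the source and decoded prefix.
    A hypothetical stand-in for a trained model's decode step."""
    if len(prefix) < len(source):
        # Toy behaviour: strongly prefer copying the next source character.
        probs = {ch: 0.05 for ch in VOCAB}
        probs[source[len(prefix)]] = 0.85
    else:
        # Once the source is consumed, strongly prefer stopping.
        probs = {ch: 0.04 for ch in VOCAB}
        probs[EOS] = 0.88
    z = sum(probs.values())
    return {ch: p / z for ch, p in probs.items()}

def log_prob(source, target):
    """Log-probability of `target` (terminated by EOS) under the toy model."""
    lp, prefix = 0.0, ()
    for ch in list(target) + [EOS]:
        lp += math.log(toy_step(source, prefix)[ch])
        prefix += (ch,)
    return lp

def greedy_decode(source, max_len=20):
    """Greedy search: emit the argmax symbol at each step until EOS."""
    prefix = ()
    while len(prefix) < max_len:
        dist = toy_step(source, prefix)
        best = max(dist, key=dist.get)
        if best == EOS:
            break
        prefix += (best,)
    return "".join(prefix)

src = "abc"
hyp = greedy_decode(src)
print(f"greedy hypothesis {hyp!r}: log p = {log_prob(src, hyp):.3f}")
print(f"empty string      '':   log p = {log_prob(src, ''):.3f}")
```

On this inflection-like copy task, the greedy hypothesis scores far above the empty string, mirroring the paper's finding that for morphological inflection, unlike some other word-level tasks, the empty string is almost never the modal output.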


Related research

05/15/2019 · Exact Hard Monotonic Attention for Character-Level Transduction
Many common character-level, string-to-string transduction tasks, e.g., ...

09/19/2018 · String Transduction with Target Language Models and Insertion Handling
Many character-level tasks can be framed as sequence-to-sequence transdu...

04/01/2021 · Do RNN States Encode Abstract Phonological Processes?
Sequence-to-sequence models have delivered impressive results in word fo...

02/22/2021 · Evaluating Contextualized Language Models for Hungarian
We present an extended comparison of contextualized language models for ...

10/11/2019 · Neural Generation for Czech: Data and Baselines
We present the first dataset targeted at end-to-end NLG in Czech in the ...

05/21/2020 · Evaluating Neural Morphological Taggers for Sanskrit
Neural sequence labelling approaches have achieved state of the art resu...

05/30/2023 · Back to Patterns: Efficient Japanese Morphological Analysis with Feature-Sequence Trie
Accurate neural models are much less efficient than non-neural models an...