Morphological Inflection Generation with Hard Monotonic Attention

11/04/2016
by   Roee Aharoni, et al.

We present a neural model for morphological inflection generation which employs a hard attention mechanism, inspired by the nearly-monotonic alignment commonly found between the characters in a word and the characters in its inflection. We evaluate the model on three previously studied morphological inflection generation datasets and show that it provides state-of-the-art results in various setups compared to previous neural and non-neural approaches. Finally, we present an analysis of the continuous representations learned for the task by both the hard attention model and the soft attention (Bahdanau et al., 2014) model, shedding some light on the features such models extract.
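The control flow behind such a hard monotonic attention decoder can be sketched as follows. This is an illustrative toy, not the authors' implementation: the `copy_policy` stub stands in for the trained network, which at each step predicts either an output character or a special "step" action that advances the attention pointer one position to the right, so attention over the input moves strictly monotonically.

```python
# Sketch of hard monotonic attention decoding (illustrative only).
# The decoder attends to exactly one input character at a time; at each
# step it either emits an output symbol or advances the pointer.

STEP = "<step>"  # special action: move hard attention one position right

def decode(input_chars, policy):
    """Run the action loop. `policy` is any callable (here a stub standing
    in for the trained network) mapping (attended_char, output_so_far) to
    an action: either an output character or STEP."""
    output = []
    ptr = 0  # hard attention pointer; only ever moves right (monotonic)
    while ptr < len(input_chars):
        action = policy(input_chars[ptr], output)
        if action == STEP:
            ptr += 1               # advance to the next input character
        else:
            output.append(action)  # emit an output character
    return "".join(output)

# Toy policy: copy the attended character once, then step. This mimics the
# mostly-copy behaviour that motivates the monotone alignment assumption.
def copy_policy(attended, output):
    if output and output[-1] == attended:
        return STEP
    return attended

print(decode("walk", copy_policy))  # -> "walk"
```

In the actual model, the action distribution is produced by a recurrent network conditioned on the attended character, the morphological features, and the decoding history; the sketch only shows why a single moving pointer suffices when alignments are nearly monotonic.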

Related research

05/15/2019 · Exact Hard Monotonic Attention for Character-Level Transduction
Many common character-level, string-to-string transduction tasks, e.g., ...

08/29/2018 · Hard Non-Monotonic Attention for Character-Level Transduction
Character-level string-to-string transduction is an important component ...

11/16/2022 · Neural Unsupervised Reconstruction of Protolanguage Word Forms
We present a state-of-the-art neural approach to the unsupervised recons...

07/05/2017 · Align and Copy: UZH at SIGMORPHON 2017 Shared Task for Morphological Reinflection
This paper presents the submissions by the University of Zurich to the S...

10/16/2018 · Neural Morphological Tagging for Estonian
We develop neural morphological tagging and disambiguation models for Es...

09/26/2019 · Monotonic Multihead Attention
Simultaneous machine translation models start generating a target sequen...

03/30/2021 · A study of latent monotonic attention variants
End-to-end models reach state-of-the-art performance for speech recognit...
