Unlabeled Data for Morphological Generation With Character-Based Sequence-to-Sequence Models

05/17/2017
by Katharina Kann, et al.

We present a semi-supervised way of training a character-based encoder-decoder recurrent neural network for morphological reinflection, the task of generating one inflected word form from another. This is achieved by using unlabeled tokens or random strings as training data for an autoencoding task, adapting a network for morphological reinflection, and performing multi-task training. We thus use limited labeled data more effectively, obtaining improvements of up to 9.9% over state-of-the-art baselines across different languages.
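The core idea can be illustrated with a short data-preparation sketch. The snippet below is not the authors' code; names such as AUTOENC_TAG, make_training_data, and the morphological tags are hypothetical. It shows, under these assumptions, how unlabeled tokens or random strings could be turned into autoencoding (copy) examples and mixed with labeled reinflection pairs so that a single character-level encoder-decoder is trained on both tasks.

```python
import random
import string

# Hypothetical task marker telling the shared encoder-decoder that an
# example is an autoencoding (copy) task rather than reinflection.
AUTOENC_TAG = "<autoenc>"

def reinflection_example(source_form, target_tags, target_form):
    # Labeled example: input is the source form plus the morphological
    # tags of the desired target form; output is the inflected form.
    return (list(source_form) + target_tags, list(target_form))

def autoencoding_example(token):
    # Unlabeled example: the network simply learns to copy the token,
    # which exposes it to character-level structure of the language.
    return ([AUTOENC_TAG] + list(token), list(token))

def random_string(max_len=10, alphabet=string.ascii_lowercase):
    # Random strings can stand in when no unlabeled corpus is available.
    return "".join(random.choices(alphabet, k=random.randint(1, max_len)))

def make_training_data(labeled, unlabeled_tokens, n_random=0):
    # Mix both tasks and shuffle, so multi-task training interleaves
    # reinflection and autoencoding updates to the same network.
    data = [reinflection_example(s, t, f) for s, t, f in labeled]
    data += [autoencoding_example(tok) for tok in unlabeled_tokens]
    data += [autoencoding_example(random_string()) for _ in range(n_random)]
    random.shuffle(data)
    return data

# Toy usage: one illustrative German reinflection pair plus unlabeled tokens.
labeled = [("kalt", ["<ADJ>", "<CMPR>"], "kälter")]
unlabeled = ["haus", "laufen"]
for src, tgt in make_training_data(labeled, unlabeled, n_random=2):
    print(" ".join(src), "->", " ".join(tgt))
```

Because both tasks share the same character vocabulary, encoder, and decoder, the autoencoding examples act as cheap auxiliary supervision; only the task marker and the presence or absence of morphological tags distinguish the two kinds of training pairs.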
