Multi-space Variational Encoder-Decoders for Semi-supervised Labeled Sequence Transduction

04/06/2017
by   Chunting Zhou, et al.

Labeled sequence transduction is the task of transforming one sequence into another that satisfies desiderata specified by a set of labels. In this paper we propose multi-space variational encoder-decoders, a new model for labeled sequence transduction with semi-supervised learning. The generative model uses neural networks to handle both discrete and continuous latent variables, allowing it to exploit diverse features of the data. Experiments show that our model not only provides a powerful supervised framework but can also effectively take advantage of unlabeled data. On the SIGMORPHON morphological inflection benchmark, our model outperforms single-model state-of-the-art results by a large margin for the majority of languages.
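The distinguishing feature described above is a generative model with two kinds of latent variables: a continuous latent code (trained with the usual Gaussian reparameterization) and a discrete label variable. The following is a minimal NumPy sketch of those two regularization terms only, not the authors' implementation; the linear "encoder" and all names (`W_mu`, `W_logvar`, `W_label`) are hypothetical placeholders for a real neural encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar, W_label):
    # Hypothetical linear encoder standing in for a neural network:
    # produces Gaussian parameters for the continuous latent variable
    # and logits for the discrete label variable.
    return x @ W_mu, x @ W_logvar, x @ W_label

def reparameterize(mu, logvar, rng):
    # z = mu + sigma * eps: a sample of the continuous latent variable
    # expressed as a deterministic function of (mu, logvar) plus noise,
    # so gradients can flow back into the encoder.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def gaussian_kl(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def categorical_kl(logits):
    # KL( q(y|x) || uniform prior over K labels ) for the discrete
    # label variable; used on unlabeled examples, where the label is
    # inferred rather than observed.
    p = np.exp(logits - logits.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    K = logits.shape[-1]
    return np.sum(p * (np.log(p + 1e-12) - np.log(1.0 / K)), axis=-1)
```

In a full semi-supervised objective, labeled examples would use the observed label directly, while unlabeled examples would add `categorical_kl` alongside `gaussian_kl` as the two KL penalties of the evidence lower bound.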


