Online Segment to Segment Neural Transduction

09/26/2016
by Lei Yu, et al.

We introduce an online neural sequence-to-sequence model that learns to alternate between encoding and decoding segments of the input as it is read. By independently tracking the encoding and decoding representations, our algorithm permits exact polynomial marginalization of the latent segmentation during training, and during decoding beam search is employed to find the best alignment path together with the predicted output sequence. Our model tackles the bottleneck of vanilla encoder-decoders, which have to read and memorize the entire input sequence in their fixed-length hidden states before producing any output. It differs from previous attentive models in that, instead of treating the attention weights as the output of a deterministic function, our model assigns attention weights to a sequential latent variable which can be marginalized out and permits online generation. Experiments on abstractive sentence summarization and morphological inflection show significant performance gains over the baseline encoder-decoders.
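The "exact polynomial marginalization of the latent segmentation" amounts to a forward-style dynamic program over the lattice of monotonic read/write decisions: at each step the model either reads (encodes) the next input token or emits (decodes) the next output token, and the probability of the output sequence is the sum over all such paths. The sketch below illustrates that recursion in NumPy. It is a minimal illustration, not the paper's implementation: the names alignment_log_marginal, emit_logp and shift_logp are assumptions made here, and in the actual model the per-cell log-probabilities would be computed from the encoder and decoder hidden states rather than passed in as fixed arrays.

```python
import numpy as np

def alignment_log_marginal(emit_logp, shift_logp):
    """Sum over all monotonic read/write alignments in O(I * J) time.

    emit_logp  : (I + 1, J) array; emit_logp[i, j] is the log-probability of
                 emitting output token j + 1 after reading i input tokens and
                 emitting j output tokens.
    shift_logp : (I, J + 1) array; shift_logp[i, j] is the log-probability of
                 reading input token i + 1 after reading i input tokens and
                 emitting j output tokens.
    Returns the log marginal probability of the full output sequence.
    """
    I, J = shift_logp.shape[0], emit_logp.shape[1]
    alpha = np.full((I + 1, J + 1), -np.inf)   # alpha[i, j]: log-prob of having
    alpha[0, 0] = 0.0                          # read i tokens and emitted j tokens
    for i in range(I + 1):
        for j in range(J + 1):
            if i > 0:   # last action was reading (encoding) input token i
                alpha[i, j] = np.logaddexp(alpha[i, j],
                                           alpha[i - 1, j] + shift_logp[i - 1, j])
            if j > 0:   # last action was emitting (decoding) output token j
                alpha[i, j] = np.logaddexp(alpha[i, j],
                                           alpha[i, j - 1] + emit_logp[i, j - 1])
    return alpha[I, J]

# Toy check with uniform dummy scores for 3 input and 2 output tokens.
I, J = 3, 2
emit_logp = np.log(np.full((I + 1, J), 0.5))
shift_logp = np.log(np.full((I, J + 1), 0.5))
print(alignment_log_marginal(emit_logp, shift_logp))
```

The lattice has (I + 1)(J + 1) cells with constant work per cell, which is the polynomial cost referred to above; at decoding time the same lattice is explored with beam search rather than summed exactly, since the output tokens are then unknown.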


