Transition-Based Deep Input Linearization

11/07/2019
by Ratish Puduppully, et al.

Traditional methods for deep NLG adopt pipeline approaches comprising stages such as constructing syntactic input, predicting function words, linearizing the syntactic input, and generating the surface forms. Though easier to design and inspect, pipeline approaches suffer from error propagation. In addition, information produced in one module cannot be leveraged by the others. We construct a transition-based model that jointly performs linearization, function word prediction, and morphological generation, which considerably improves accuracy over a pipelined baseline system. On a standard deep input linearization shared task, our system achieves the best results reported so far.
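The core idea of transition-based linearization can be illustrated with a minimal sketch. This is not the paper's exact system: the scoring function, state representation, and action set here are hypothetical stand-ins. The sketch shows only the basic loop — a state of emitted words plus a set of unordered input lemmas, where each transition appends the highest-scoring remaining token; a joint system would also insert function words and inflect surface forms inside the same loop.

```python
# Hypothetical sketch of a greedy transition-based linearizer.
# The state is (words emitted so far, unordered remaining lemmas);
# each step applies one "shift"-style transition chosen by a scorer.

def linearize(lemmas, score):
    """Order an unordered set of (lemma, pos) tokens.

    lemmas: iterable of (lemma, pos) pairs, unordered deep input.
    score:  callable(state, candidate) -> float; in a real system this
            would be a trained classifier over rich state features.
    """
    state = []                    # surface words emitted so far
    remaining = list(lemmas)      # unordered deep input
    while remaining:
        # pick the highest-scoring next token (the shift transition)
        best = max(remaining, key=lambda tok: score(state, tok))
        remaining.remove(best)
        # a joint model would predict function words and inflection here
        state.append(best[0])
    return " ".join(state)

# Toy scorer (illustrative only): prefer a fixed subject-verb-object order.
ORDER = {"john": 0, "see": 1, "mary": 2}
def toy_score(state, tok):
    return -ORDER.get(tok[0], 99)

print(linearize({("mary", "NOUN"), ("see", "VERB"), ("john", "NOUN")},
                toy_score))
# -> john see mary
```

In practice the greedy argmax would be replaced by beam search, and the scorer conditioned on the full transition history, so that linearization, function word, and morphology decisions can inform one another rather than being fixed by earlier pipeline stages.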


