Neural Transition-based Syntactic Linearization

10/23/2018
by Linfeng Song, et al.

The task of linearization is to find a grammatical order for a given set of words. Traditional systems use statistical methods. Syntactic linearization systems, which generate a sentence together with its syntactic tree, have shown state-of-the-art performance. Recent work shows that a multi-layer LSTM language model outperforms competitive statistical syntactic linearization systems without using any syntax. In this paper, we study neural syntactic linearization, building a transition-based syntactic linearizer that leverages a feed-forward neural network, and we observe significantly better results than LSTM language models on this task.
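To make the setup concrete, the sketch below shows one way a transition-based linearizer with a feed-forward scorer can work: the state is a stack plus the set of not-yet-ordered words, SHIFT emits the next word of the sentence, and LEFTARC/RIGHTARC build dependency arcs between stack items, with every legal transition scored by a small feed-forward network over stack features. Everything here (the feature set, dimensions, activation, greedy decoding, and the untrained random weights) is an illustrative assumption, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary; "<none>" pads empty stack/candidate slots.
VOCAB = {"<none>": 0, "John": 1, "saw": 2, "Mary": 3}
EMB, HID = 8, 16

# Untrained random stand-ins for learned parameters.
E = rng.normal(scale=0.1, size=(len(VOCAB), EMB))
W1 = rng.normal(scale=0.1, size=(3 * EMB, HID))
b1 = np.zeros(HID)
w2 = {a: rng.normal(scale=0.1, size=HID)
      for a in ("SHIFT", "LEFTARC", "RIGHTARC")}

def score(action, stack, cand):
    """Feed-forward score of one transition: embed the top two stack
    words plus the candidate shift word, apply one hidden layer."""
    s1 = stack[-1] if stack else "<none>"
    s2 = stack[-2] if len(stack) > 1 else "<none>"
    x = np.concatenate([E[VOCAB[s1]], E[VOCAB[s2]], E[VOCAB[cand]]])
    h = np.tanh(x @ W1 + b1)
    return float(h @ w2[action])

def linearize(words):
    """Greedily apply the best-scoring legal transition until every
    word is ordered and the stack is reduced to a single tree."""
    stack, remaining, order, arcs = [], set(words), [], []
    while remaining or len(stack) > 1:
        moves = [("SHIFT", w) for w in remaining]
        if len(stack) > 1:
            moves += [("LEFTARC", "<none>"), ("RIGHTARC", "<none>")]
        act, w = max(moves, key=lambda m: score(m[0], stack, m[1]))
        if act == "SHIFT":        # choose and emit the next output word
            remaining.discard(w)
            stack.append(w)
            order.append(w)
        elif act == "LEFTARC":    # second-from-top becomes dependent of top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))
        else:                     # RIGHTARC: top becomes dependent of new top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return order, arcs

print(linearize({"John", "saw", "Mary"}))
```

With trained weights, decoding would typically search with a beam over transition sequences rather than the greedy loop shown here.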

