Latent Tree Learning with Differentiable Parsers: Shift-Reduce Parsing and Chart Parsing

06/03/2018
by Jean Maillard et al.

Latent tree learning models represent sentences by composing their words according to an induced parse tree, based solely on a downstream task with no syntactic supervision. These models often outperform baselines which use (externally provided) syntax trees to drive the composition order. This work contributes (a) a new latent tree learning model based on shift-reduce parsing, with competitive downstream performance and non-trivial induced trees, and (b) an analysis of the trees learned by our shift-reduce model and by a chart-based model.
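To make the shift-reduce mechanism concrete, here is a minimal Python sketch of how a sequence of SHIFT/REDUCE actions composes word vectors into a single sentence vector while implicitly inducing a binary tree. Everything in it is illustrative rather than the paper's model: `compose` and `score_reduce` are hypothetical stand-ins (the paper learns a composition function and scorer end-to-end), and this greedy, hard-decision loop omits the differentiable relaxation of the discrete action choice that makes such a parser trainable on the downstream task.

```python
# Minimal sketch of hard (non-differentiable) shift-reduce composition.
# `compose` and `score_reduce` are hypothetical stand-ins, not the
# paper's learned components.

def compose(left, right):
    """Stand-in composition: average the two child vectors.
    (A learned model would use something like a Tree-LSTM cell here.)"""
    return [(l + r) / 2.0 for l, r in zip(left, right)]

def score_reduce(stack, buffer):
    """Stand-in action scorer: positive means prefer REDUCE.
    (A learned model would compute this from stack/buffer states.)"""
    return len(stack) - len(buffer)

def shift_reduce_encode(word_vectors):
    """Compose a sentence into one vector via greedy shift-reduce.

    SHIFT pushes the next word vector onto the stack; REDUCE pops the
    top two stack items and pushes their composition. After 2n - 1
    actions the stack holds a single vector for the whole sentence,
    and the action sequence determines the induced binary parse tree.
    """
    buffer = list(word_vectors)
    stack = []
    while buffer or len(stack) > 1:
        can_reduce = len(stack) >= 2
        if buffer and (not can_reduce or score_reduce(stack, buffer) <= 0):
            stack.append(buffer.pop(0))          # SHIFT
        else:
            right, left = stack.pop(), stack.pop()
            stack.append(compose(left, right))   # REDUCE
    return stack[0]

if __name__ == "__main__":
    # Toy one-hot "embeddings" for a three-word sentence.
    sentence = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    print(shift_reduce_encode(sentence))  # -> [0.25, 0.25, 0.5]
```

In this hard version no gradient flows through the SHIFT/REDUCE decision; differentiable parsers of the kind studied here replace the discrete choice with a soft weighting over candidate actions or trees so the whole encoder can be trained with backpropagation.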


Related research

05/25/2017
Jointly Learning Sentence Embeddings and Syntax with Unsupervised Tree-LSTMs
We introduce a neural network that represents sentences by composing the...

08/29/2018
On Tree-Based Neural Sentence Modeling
Neural networks with tree-based sentence encoders have shown better resu...

09/04/2017
Learning to parse from a semantic objective: It works. Is it syntax?
Recent work on reinforcement learning and other gradient estimators for ...

11/28/2016
Learning to Compose Words into Sentences with Reinforcement Learning
We use reinforcement learning to learn tree-structured neural networks f...

04/11/2021
Does syntax matter? A strong baseline for Aspect-based Sentiment Analysis with RoBERTa
Aspect-based Sentiment Analysis (ABSA), aiming at predicting the polarit...

06/24/2019
Learning Latent Trees with Stochastic Perturbations and Differentiable Dynamic Programming
We treat projective dependency trees as latent variables in our probabil...

10/10/2020
Latent Tree Learning with Ordered Neurons: What Parses Does It Produce?
Recent latent tree learning models can learn constituency parsing withou...
