Improving Neural Parsing by Disentangling Model Combination and Reranking Effects

07/10/2017
by Daniel Fried, et al.

Recent work has proposed several generative neural models for constituency parsing that achieve state-of-the-art results. Since direct search in these generative models is difficult, they have primarily been used to rescore candidate outputs from base parsers in which decoding is more straightforward. We first present an algorithm for direct search in these generative models. We then demonstrate that the gains from rescoring are at least partly due to implicit model combination rather than reranking effects. Finally, we show that explicit model combination can improve performance even further, resulting in a new state of the art on the Penn Treebank (PTB): 94.25 F1 when training only on gold data and 94.66 F1 when using external data.
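To make the rescoring setup concrete, the sketch below shows score-interpolation reranking: a base parser proposes candidate trees with their log-probabilities, a generative model rescores each tree, and an interpolation weight trades off the two models. This is a minimal illustration, not the authors' implementation; the names `rerank`, `generative_logprob`, and `alpha` are assumptions introduced here.

```python
from typing import Callable, List, Tuple

def rerank(
    candidates: List[Tuple[str, float]],        # (tree, base-model log-prob)
    generative_logprob: Callable[[str], float],  # tree -> log p_gen(tree); assumed callable
    alpha: float = 0.5,                          # interpolation weight (illustrative)
) -> str:
    """Return the candidate tree maximizing an interpolated score."""
    best_tree, best_score = None, float("-inf")
    for tree, base_score in candidates:
        # With alpha = 1.0 this reduces to pure reranking by the generative
        # model; intermediate alpha makes the model combination explicit.
        score = alpha * generative_logprob(tree) + (1.0 - alpha) * base_score
        if score > best_score:
            best_tree, best_score = tree, score
    return best_tree
```

In this framing, pure reranking and explicit model combination sit on the same spectrum, which is why gains attributed to reranking can instead stem from implicitly combining the two models' scores.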

Related research

07/27/2017 · Effective Inference for Generative Neural Parsing
Generative neural models have recently achieved state-of-the-art results...

09/24/2019 · Neural Generative Rhetorical Structure Parsing
Rhetorical structure trees have been shown to be useful for several docu...

06/24/2020 · Efficient Constituency Parsing by Pointing
We propose a novel constituency parsing model that casts the parsing pro...

05/02/2018 · Constituency Parsing with a Self-Attentive Encoder
We demonstrate that replacing an LSTM encoder with a self-attentive arch...

07/20/2021 · Paraphrasing via Ranking Many Candidates
We present a simple and effective way to generate a variety of paraphras...

11/01/2022 · Order-sensitive Neural Constituency Parsing
We propose a novel algorithm that improves on the previous neural span-b...

09/07/2015 · An end-to-end generative framework for video segmentation and recognition
We describe an end-to-end generative approach for the segmentation and r...
