Better, Faster, Stronger Sequence Tagging Constituent Parsers

02/28/2019
by David Vilares, et al.

Sequence tagging models for constituent parsing are faster, but less accurate than other types of parsers. In this work, we address the following weaknesses of such constituent parsers: (a) high error rates around closing brackets of long constituents, (b) large label sets, leading to sparsity, and (c) error propagation arising from greedy decoding. To effectively close brackets, we train a model that learns to switch between tagging schemes. To reduce sparsity, we decompose the label set and use multi-task learning to jointly learn to predict sublabels. Finally, we mitigate issues from greedy decoding through auxiliary losses and sentence-level fine-tuning with policy gradient. Combining these techniques, we clearly surpass the performance of sequence tagging constituent parsers on the English and Chinese Penn Treebanks, and reduce their parsing time even further. On the SPMRL datasets, we observe even greater improvements across the board, including a new state of the art on Basque, Hebrew, Polish and Swedish.
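
To make the label decomposition concrete, here is a minimal sketch (in PyTorch, not the authors' code) of the multi-task idea: a shared encoder with one small classification head per sublabel component, trained jointly. The decomposition shown (a structural component such as the relative number of shared tree levels, plus a nonterminal component), and all names and dimensions (MultiTaskTagger, n_levels, n_nonterminals), are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskTagger(nn.Module):
    """Shared BiLSTM encoder with one softmax head per sublabel
    component, instead of one head over the full (sparse) label set."""

    def __init__(self, vocab_size, n_levels, n_nonterminals,
                 emb_dim=100, hidden_dim=400):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # Hypothetical decomposition: one head for the structural part
        # of the label (e.g. relative number of shared tree levels with
        # the next token) and one for the nonterminal part.
        self.level_head = nn.Linear(hidden_dim, n_levels)
        self.nonterminal_head = nn.Linear(hidden_dim, n_nonterminals)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        return self.level_head(states), self.nonterminal_head(states)

def joint_loss(model, tokens, level_gold, nt_gold, aux_weight=1.0):
    """Sum of per-component cross-entropies over the shared encoder;
    each head acts as an auxiliary objective for the others."""
    level_logits, nt_logits = model(tokens)
    ce = nn.CrossEntropyLoss()
    return (ce(level_logits.flatten(0, 1), level_gold.flatten())
            + aux_weight * ce(nt_logits.flatten(0, 1), nt_gold.flatten()))
```

The point of the decomposition is that each head predicts over a small, dense set of sublabels, rather than over the sparse cross-product of all components, which is where the sparsity problem the abstract mentions comes from.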
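The sentence-level fine-tuning with policy gradient can likewise be sketched as a REINFORCE update: sample a label sequence from the tagger rather than taking the greedy argmax, decode it, and scale the sequence log-likelihood by a sentence-level reward. This is a hedged sketch under assumptions; reward_fn is a hypothetical stand-in for decoding the labels into a tree and scoring it (e.g. bracketing F1), and the exact setup in the paper may differ.

```python
import torch

def policy_gradient_step(model, tokens, gold_tree, optimizer, reward_fn):
    """One REINFORCE update for the tagger sketched above.
    reward_fn (hypothetical) decodes the sampled label sequence into a
    tree and scores it against gold, e.g. with bracketing F1."""
    level_logits, nt_logits = model(tokens)
    # Sample from the model's own distribution instead of the greedy
    # argmax, exposing it to its own sentence-level decisions.
    level_dist = torch.distributions.Categorical(logits=level_logits)
    nt_dist = torch.distributions.Categorical(logits=nt_logits)
    levels, nts = level_dist.sample(), nt_dist.sample()
    reward = reward_fn(levels, nts, gold_tree)  # sentence-level score
    log_prob = (level_dist.log_prob(levels).sum()
                + nt_dist.log_prob(nts).sum())
    loss = -reward * log_prob  # raise likelihood of high-reward samples
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(reward)
```

In practice, REINFORCE-style updates are usually stabilized by subtracting a baseline from the reward to reduce gradient variance; that detail is omitted here for brevity.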

Related research

On Parsing as Tagging (11/14/2022)
There have been many proposals to reduce constituency parsing to tagging...

Multi-Task Semantic Dependency Parsing with Policy Gradient for Learning Easy-First Strategies (06/04/2019)
In Semantic Dependency Parsing (SDP), semantic relations form directed a...

Is POS Tagging Necessary or Even Helpful for Neural Dependency Parsing? (03/06/2020)
In the pre-deep-learning era, part-of-speech tags have been considered a...

On the Choice of Auxiliary Languages for Improved Sequence Tagging (05/19/2020)
Recent work showed that embeddings from related languages can improve th...

Neural End-to-End Learning for Computational Argumentation Mining (04/20/2017)
We investigate neural techniques for end-to-end computational argumentat...

Chinese Spelling Correction as Rephrasing Language Model (08/17/2023)
This paper studies Chinese Spelling Correction (CSC), which aims to dete...

Dependency Parsing with Backtracking using Deep Reinforcement Learning (06/28/2022)
Greedy algorithms for NLP such as transition-based parsing are prone to ...
