Sequence Labeling Parsing by Learning Across Representations

07/02/2019
by Michalina Strzyz, et al.

We use parsing as sequence labeling as a common framework to learn across constituency and dependency syntactic abstractions. To do so, we cast the problem as multitask learning (MTL). First, we show that adding a parsing paradigm as an auxiliary loss consistently improves performance on the other paradigm. Second, we explore an MTL sequence labeling model that parses both representations at almost no cost in performance or speed. Across the board, MTL models with auxiliary losses outperform single-task ones on average by 1.05 F1 points for constituency parsing and by 0.62 UAS points for dependency parsing.
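The core idea, a shared encoder with one sequence-labeling head per parsing paradigm and a down-weighted auxiliary loss, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the BiLSTM encoder, label-set sizes, and the `aux_weight` hyperparameter are all assumptions.

```python
# Minimal sketch of MTL sequence-labeling parsing with an auxiliary loss.
# Module and hyperparameter names are illustrative, not the paper's.
import torch
import torch.nn as nn

class MTLSequenceLabeler(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim,
                 n_const_labels, n_dep_labels):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared BiLSTM encoder used by both parsing paradigms.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # One per-token label classifier per paradigm.
        self.const_head = nn.Linear(2 * hidden_dim, n_const_labels)
        self.dep_head = nn.Linear(2 * hidden_dim, n_dep_labels)

    def forward(self, tokens):
        hidden, _ = self.encoder(self.embed(tokens))
        return self.const_head(hidden), self.dep_head(hidden)

model = MTLSequenceLabeler(vocab_size=10_000, emb_dim=100,
                           hidden_dim=256, n_const_labels=500,
                           n_dep_labels=400)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch: 2 sentences of 8 tokens with per-token gold labels.
tokens = torch.randint(0, 10_000, (2, 8))
const_gold = torch.randint(0, 500, (2, 8))
dep_gold = torch.randint(0, 400, (2, 8))

const_logits, dep_logits = model(tokens)
# Here constituency is the main task and dependency the auxiliary one;
# the auxiliary loss is down-weighted before being added.
aux_weight = 0.1  # assumed value, for illustration only
loss = (loss_fn(const_logits.reshape(-1, 500), const_gold.reshape(-1))
        + aux_weight * loss_fn(dep_logits.reshape(-1, 400),
                               dep_gold.reshape(-1)))
loss.backward()
```

Swapping which head carries the full loss and which is down-weighted gives the complementary setting, where constituency parsing serves as the auxiliary task for dependency parsing.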


Related research

02/27/2019 · Viable Dependency Parsing as Sequence Labeling
We recast dependency parsing as a sequence labeling problem, exploring s...

01/28/2022 · Schema-Free Dependency Parsing via Sequence Generation
Dependency parsing aims to extract syntactic dependency structure or sem...

10/23/2020 · NLNDE at CANTEMIST: Neural Sequence Labeling and Parsing Approaches for Clinical Concept Extraction
The recognition and normalization of clinical information, such as tumor...

09/13/2023 · Résumé Parsing as Hierarchical Sequence Labeling: An Empirical Study
Extracting information from résumés is typically formulated as a two-sta...

04/19/2022 · ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs
As Abstract Meaning Representation (AMR) implicitly involves compound se...

10/01/2020 · Discontinuous Constituent Parsing as Sequence Labeling
This paper reduces discontinuous parsing to sequence labeling. It first ...
