How to best use Syntax in Semantic Role Labelling

06/01/2019
by Yufei Wang et al.

There are many different ways in which external information might be used in an NLP task. This paper investigates how external syntactic information can be used most effectively in the Semantic Role Labeling (SRL) task. We evaluate three different ways of encoding syntactic parses and three different ways of injecting them into a state-of-the-art neural ELMo-based SRL sequence labelling model. We show that using a constituency representation as input features improves performance the most, achieving a new state-of-the-art for non-ensemble SRL models on the in-domain CoNLL'05 and CoNLL'12 benchmarks.
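As a rough illustration of the "syntax as input features" injection strategy the abstract describes, the sketch below (not the authors' implementation; the label inventory and one-hot encoding are assumptions for illustration) converts each token's innermost constituency label into a one-hot vector and concatenates it with the token's word embedding before the vectors would be fed to a sequence labelling model:

```python
# Minimal sketch (not the paper's implementation): encode per-token
# constituency labels as one-hot vectors and concatenate them with
# word embeddings -- the "syntax as input features" injection strategy.

LABELS = ["NP", "VP", "PP", "O"]  # hypothetical constituent label inventory


def syntax_feature(label):
    """One-hot vector for a token's innermost constituent label."""
    vec = [0.0] * len(LABELS)
    vec[LABELS.index(label)] = 1.0
    return vec


def augment(embeddings, labels):
    """Concatenate each word embedding with its syntax feature."""
    return [emb + syntax_feature(lab) for emb, lab in zip(embeddings, labels)]


# Toy 2-dimensional "embeddings" for a three-token sentence.
embs = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
labs = ["NP", "VP", "NP"]
augmented = augment(embs, labs)
# Each token vector now has 2 (embedding) + 4 (syntax) dimensions.
```

In the paper's setting the base embeddings come from ELMo and the syntax encoding is learned rather than one-hot, but the concatenation-at-input pattern is the same.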


