Deep Semantic Role Labeling with Self-Attention

12/05/2017
by Zhixing Tan, et al.

Semantic Role Labeling (SRL) is believed to be a crucial step towards natural language understanding and has been widely studied. In recent years, end-to-end SRL with recurrent neural networks (RNNs) has gained increasing attention. However, it remains a major challenge for RNNs to handle structural information and long-range dependencies. In this paper, we present a simple and effective architecture for SRL that aims to address these problems. Our model is based on self-attention, which can directly capture the relationship between two tokens regardless of their distance. Our single model achieves F_1 = 83.4 on the CoNLL-2005 shared task dataset and F_1 = 82.7 on the CoNLL-2012 shared task dataset, outperforming the previous state-of-the-art results by 1.8 and 1.0 F_1 points respectively. In addition, our model is computationally efficient, parsing 50K tokens per second on a single Titan X GPU.
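For readers unfamiliar with the mechanism the abstract refers to, below is a minimal sketch of scaled dot-product self-attention. It is an illustrative assumption based on the standard formulation, not the paper's exact implementation; all names, shapes, and parameters here are hypothetical. The key property it demonstrates is that every token attends to every other token in a single step, so the interaction between two positions does not degrade with their distance.

```python
# Minimal scaled dot-product self-attention sketch (illustrative, not the
# paper's exact model). All shapes and weight matrices are assumptions.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_k) projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Every token scores every other token directly, in one matrix product,
    # so the path length between any two positions is constant.
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)       # attention distribution per token
    return weights @ V                       # (seq_len, d_k)

# Usage: 6 tokens, model width 8, attention width 4 (all hypothetical sizes).
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (6, 4)
```

This constant path length between positions is what the abstract contrasts with RNNs, where information between distant tokens must pass through every intermediate state.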


Related research

research  09/23/2021
CSAGN: Conversational Structure Aware Graph Network for Conversational Semantic Role Labeling
Conversational semantic role labeling (CSRL) is believed to be a crucial...

research  01/11/2021
ORDNet: Capturing Omni-Range Dependencies for Scene Parsing
Learning to capture dependencies between spatial positions is essential ...

research  04/23/2018
Linguistically-Informed Self-Attention for Semantic Role Labeling
The current state-of-the-art end-to-end semantic role labeling (SRL) mod...

research  08/24/2019
Enhancing Neural Sequence Labeling with Position-Aware Self-Attention
Sequence labeling is a fundamental task in natural language processing a...

research  09/08/2018
Attentive Semantic Role Labeling with Boundary Indicator
The goal of semantic role labeling (SRL) is to discover the predicate-ar...

research  12/18/2022
A Robust Semantic Frame Parsing Pipeline on a New Complex Twitter Dataset
Most recent semantic frame parsing systems for spoken language understan...

research  01/01/2019
Text Infilling
Recent years have seen remarkable progress of text generation in differe...
