Generalizing Natural Language Analysis through Span-relation Representations

A large number of natural language processing tasks exist to analyze syntax, semantics, and information content of human language. These seemingly very different tasks are usually solved by specially designed architectures. In this paper, we provide the simple insight that a great variety of tasks can be represented in a single unified format consisting of labeling spans and relations between spans, and thus a single task-independent model can be used across different tasks. We perform extensive experiments to test this insight on 10 disparate tasks spanning dependency parsing (syntax), semantic role labeling (semantics), relation extraction (information content), aspect-based sentiment analysis (sentiment), and many others, achieving performance comparable to state-of-the-art specialized models. We further demonstrate benefits in multi-task learning. We convert these datasets into a unified format to build a benchmark, which provides a holistic testbed for evaluating future models for generalized natural language analysis.

1 Introduction

A large number of natural language processing (NLP) tasks exist to analyze various aspects of human language, including syntax (e.g., constituency and dependency parsing), semantics (e.g., semantic role labeling), information content (e.g., named entity recognition and relation extraction), and sentiment (e.g., sentiment analysis). At first glance, these tasks appear very different in both the structure of their output and the variety of information that they try to capture. To handle these different characteristics, researchers usually use specially designed neural network architectures. In this paper we ask two simple questions: are task-specific architectures really necessary? Or, with the appropriate representational methodology, can we devise a single model that can perform, and achieve state-of-the-art performance on, a large number of natural language analysis tasks?

Figure 1: An example from BRAT, consisting of POS, NER, and RE.

Interestingly, in the domain of efficient human annotation interfaces, it is already standard to use unified representations for a wide variety of NLP tasks. Figure 1 shows one example from the annotation interface BRAT (stenetorp-etal-2012-brat), which has been used to annotate data for tasks as broad as part-of-speech tagging, named entity recognition, relation extraction, and many others. Notably, this interface has a single unified format that consists of spans (e.g., the span of an entity), labels on the spans (e.g., the type of entity, such as “person” or “location”), and labeled relations between the spans (e.g., “born-in”). These labeled relations can form a tree or graph structure (e.g., a dependency tree), expressing the linguistic structure of sentences. We detail this BRAT format, and how it can be used to represent a wide variety of natural language analysis tasks, later in the paper.
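To make the format concrete, the following is a minimal sketch of how such span-relation annotations could be encoded in code. The class names (Span, Relation, SpanRelationAnnotation) and the example sentence are our own illustrative assumptions, not part of the paper's released code or the BRAT toolkit.

```python
# Illustrative only: a minimal encoding of the span-relation format described
# above. Class and field names are assumptions made for exposition.
from dataclasses import dataclass
from typing import List

@dataclass
class Span:
    start: int   # index of the first token in the span
    end: int     # index of the last token in the span (inclusive)
    label: str   # e.g. an entity type, POS tag, or constituent label

@dataclass
class Relation:
    head: Span   # first argument of the relation
    tail: Span   # second argument of the relation
    label: str   # e.g. "born-in" or a dependency relation

@dataclass
class SpanRelationAnnotation:
    tokens: List[str]
    spans: List[Span]
    relations: List[Relation]

# A hypothetical sentence annotated with two entity spans and one relation.
tokens = ["Alice", "was", "born", "in", "Paris", "."]
person = Span(0, 0, "person")
location = Span(4, 4, "location")
annotation = SpanRelationAnnotation(
    tokens=tokens,
    spans=[person, location],
    relations=[Relation(person, location, "born-in")],
)
```

The same container can hold a tree- or graph-structured analysis (such as a dependency tree) simply by listing more relations between the spans.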

The simple hypothesis behind our paper is: if humans can perform natural language analysis in a single unified format, then perhaps machines can as well. Fortunately, there already exist NLP models that perform span prediction and prediction of relations between pairs of spans, such as the end-to-end neural coreference model of lee:17:e2ecoref. We extend this model with minor architectural modifications (which are not our core contributions) and pre-trained contextualized representations (e.g., BERT; devlin-etal-2019-bert). Note that, in contrast to work on pre-trained contextualized representations such as ELMo (peters-etal-2018-deep) and BERT (devlin-etal-2019-bert), which learns unified features to represent the input of different tasks, we propose a unified representational methodology that represents the output of different tasks; analysis models built on BERT still use special-purpose output predictors for specific tasks or task classes. We then demonstrate the applicability and versatility of this single model on 10 tasks, including named entity recognition (NER), relation extraction (RE), coreference resolution (Coref.), open information extraction (OpenIE), part-of-speech tagging (POS), dependency parsing (Dep.), constituency parsing (Consti.), semantic role labeling (SRL), aspect-based sentiment analysis (ABSA), and opinion role labeling (ORL); a minimal illustrative sketch of this span-and-relation scoring approach is given after the list of contributions below. While previous work has used similar formalisms to understand the representations learned by pre-trained embeddings (tenney:19:bertpipeline; tenney:19:bertprobe), to the best of our knowledge this is the first work that uses such a unified model to actually perform analysis. Moreover, despite its simplicity, we demonstrate that such a model can achieve performance comparable to special-purpose state-of-the-art models on the tasks above (Table 1). We also demonstrate that this framework allows us to easily perform multi-task learning among different tasks, leading to improvements when there are related tasks to learn from or when data is sparse. In summary, our contributions are:


  • We provide the simple insight that a great variety of natural language analysis tasks can be represented and solved in a single unified format, i.e., span-relation representations. This insight may seem obvious in hindsight, but it has not been examined, particularly at this scale, by previous work on model-building for NLP.

  • We perform extensive experiments to test this insight on 10 disparate tasks, achieving empirical results comparable to the state of the art using a single task-independent modeling framework.

  • We further use this framework to perform an analysis of the benefits from multi-task learning across all of the tasks above, gleaning various insights about task relatedness and how multi-task learning performs with different token representations.

  • In addition, we will release our code and the General Language Analysis Datasets (GLAD) benchmark, with 8 datasets covering 10 tasks in the BRAT format, at https://github.com/jzbjyb/SpanRel, and provide a leaderboard to facilitate future work on generalized models for NLP. Compared to the sentence-level tasks in the GLUE leaderboard (wang:19:superglue; wang:19:glue), we cover a wide variety of natural language analysis tasks that require analysis of finer-grained text units (e.g., words, phrases, clauses).
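As a rough illustration of the span-and-relation scoring approach referenced above, the following PyTorch sketch scores candidate spans and span pairs on top of contextualized token vectors. The module, its dimensions, and the endpoint-concatenation span features are our own simplifying assumptions for exposition, not the paper's exact architecture (which builds on the coreference model of lee:17:e2ecoref).

```python
# Illustrative sketch only: a generic span-and-relation scorer over
# contextualized token embeddings. Names and design choices here are
# assumptions for exposition, not the paper's exact architecture.
import torch
import torch.nn as nn

class SpanRelationScorer(nn.Module):
    def __init__(self, hidden_dim: int, num_span_labels: int, num_rel_labels: int):
        super().__init__()
        # A span is represented by concatenating its start and end token vectors.
        self.span_ffnn = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_span_labels),
        )
        # A span pair is represented by concatenating the two span representations.
        self.rel_ffnn = nn.Sequential(
            nn.Linear(4 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_rel_labels),
        )

    def span_repr(self, token_vecs, spans):
        # token_vecs: (seq_len, hidden_dim); spans: list of (start, end) indices.
        starts = token_vecs[[s for s, _ in spans]]
        ends = token_vecs[[e for _, e in spans]]
        return torch.cat([starts, ends], dim=-1)   # (num_spans, 2 * hidden_dim)

    def forward(self, token_vecs, spans, span_pairs):
        reprs = self.span_repr(token_vecs, spans)
        span_scores = self.span_ffnn(reprs)        # label scores per span
        pair_reprs = torch.cat(
            [reprs[[i for i, _ in span_pairs]], reprs[[j for _, j in span_pairs]]],
            dim=-1,
        )
        rel_scores = self.rel_ffnn(pair_reprs)     # label scores per span pair
        return span_scores, rel_scores

# Toy usage with random "contextualized" vectors standing in for BERT output.
token_vecs = torch.randn(6, 768)
spans = [(0, 0), (4, 4)]
pairs = [(0, 1)]
model = SpanRelationScorer(hidden_dim=768, num_span_labels=5, num_rel_labels=3)
span_scores, rel_scores = model(token_vecs, spans, pairs)
print(span_scores.shape, rel_scores.shape)  # torch.Size([2, 5]) torch.Size([1, 3])
```

In practice, candidate spans would typically be enumerated and pruned, and the span and relation label scores trained with per-task cross-entropy losses; this sketch only shows the shared scoring structure that makes a single model applicable across tasks.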

Information Extraction (NER, RE, Coref., OpenIE) | POS | Parsing (Dep., Consti.) | SRL | Sentiment (ABSA, ORL)

Different models for different tasks: ELMo (peters-etal-2018-deep); BERT (devlin-etal-2019-bert); BERT baseline (shi:19:bertrcsrl); SpanBERT (joshi:19:spanbert).
Single model for different tasks: guo-etal-2016-unified; swayamdipta-etal-2018-syntactic; strubell-etal-2018-linguistically; clark-etal-2018-semi; luan:18:mtlie; luan-etal-2019-general; dixit-al-onaizan-2019-span; marasovic-frank-2018-srl4orl; hashimoto:17:mtlsocher.
This work: a single span-relation model covering all ten tasks above.

Table 1: The unified span-relation model can work on multiple NLP tasks, in contrast to previous works, which are usually designed for a subset of tasks.
Span-oriented tasks (NER, Consti., POS, ABSA): each row pairs a task with an example sentence in which spans are annotated by underlines and their labels.
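As a further illustration of how individual tasks map into the unified format, the following made-up examples (not drawn from the paper's datasets) show a span-only task (POS tagging) and a relation task (dependency parsing), in which arcs are labeled relations between single-word spans:

```python
# Illustrative mapping of two tasks into the span-relation format.
# Sentences, labels, and structure here are made up for exposition.

# Span-oriented: POS tagging = one labeled span per token, no relations.
pos_example = {
    "tokens": ["Great", "laptop", "!"],
    "spans": [(0, 0, "ADJ"), (1, 1, "NOUN"), (2, 2, "PUNCT")],
    "relations": [],
}

# Relation-oriented: dependency parsing = single-token spans connected by
# labeled arcs (head -> dependent), which together form a tree.
dep_example = {
    "tokens": ["Great", "laptop", "!"],
    "spans": [(0, 0, "word"), (1, 1, "word"), (2, 2, "word")],
    "relations": [
        {"head": (1, 1), "dependent": (0, 0), "label": "amod"},
        {"head": (1, 1), "dependent": (2, 2), "label": "punct"},
    ],
}
```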