Structured Prediction as Translation between Augmented Natural Languages
We propose a new framework, Translation between Augmented Natural Languages (TANL), to solve many structured prediction language tasks including joint entity and relation extraction, nested named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, and dialogue state tracking. Instead of tackling the problem by training task-specific discriminative classifiers, we frame it as a translation task between augmented natural languages, from which the task-relevant information can be easily extracted. Our approach can match or outperform task-specific models on all tasks, and in particular, achieves new state-of-the-art results on joint entity and relation extraction (CoNLL04, ADE, NYT, and ACE2005 datasets), relation classification (FewRel and TACRED), and semantic role labeling (CoNLL-2005 and CoNLL-2012). We accomplish this while using the same architecture and hyperparameters for all tasks and even when training a single model to solve all tasks at the same time (multi-task learning). Finally, we show that our framework can also significantly improve the performance in a low-resource regime, thanks to better use of label semantics.
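To make the translation framing concrete, below is a minimal sketch of how task-relevant information can be pulled out of a TANL-style augmented output. The bracketed "[ span | type | relation = tail ]" annotation format mirrors the joint entity and relation extraction examples in the paper, but everything else here (the `decode` helper, the regex, and the use of ";" to separate multiple relations) is an illustrative assumption, not the authors' released implementation.

```python
import re

# Illustrative sketch: decode a TANL-style augmented output string back into
# (entity, type) pairs and (head, relation, tail) triples. The bracket format
# follows the paper's running examples; the regex and the ";" separator for
# multiple relations are assumptions made for this sketch.
TAG = re.compile(r"\[\s*([^|\]]+?)\s*(?:\|\s*([^|\]]+?)\s*)?(?:\|\s*([^\]]+?)\s*)?\]")

def decode(augmented: str):
    """Return (entity, type) pairs and (head, relation, tail) triples."""
    entities, relations = [], []
    for m in TAG.finditer(augmented):
        span, etype, rels = m.group(1), m.group(2), m.group(3)
        if etype:
            entities.append((span, etype))
        if rels:
            for rel in rels.split(";"):  # each segment looks like "author = Tolkien"
                name, _, tail = rel.partition("=")
                relations.append((span, name.strip(), tail.strip()))
    return entities, relations

# Example augmented output for joint entity and relation extraction.
output = ("[ Tolkien | person ] 's epic novel "
          "[ The Lord of the Rings | book | author = Tolkien ] "
          "was published in 1954.")
print(decode(output))
# ([('Tolkien', 'person'), ('The Lord of the Rings', 'book')],
#  [('The Lord of the Rings', 'author', 'Tolkien')])
```

A real decoder would additionally need to align generated spans back to the input sentence to tolerate generation noise; this sketch assumes the model emits well-formed output.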