TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task

by Christoph Alt, et al.

TACRED (Zhang et al., 2017) is one of the largest and most widely used crowdsourced datasets in Relation Extraction (RE). But even with recent advances in unsupervised pre-training and knowledge-enhanced neural RE, models still show a high error rate. In this paper, we investigate two questions: Have we reached a performance ceiling, or is there still room for improvement? And how do crowd annotations, the dataset, and the models each contribute to this error rate? To answer these questions, we first validate the most challenging 5K examples in the development and test sets using trained annotators. We find that label errors account for 8% absolute F1 test error, and that more than 50% of the examples need to be relabeled. On the relabeled test set, the average F1 score of a large baseline model set improves from 62.1 to 70.1. After validation, we analyze misclassifications on the challenging instances, categorize them into linguistically motivated error groups, and verify the resulting error hypotheses on three state-of-the-art RE models. We show that two groups of ambiguous relations are responsible for most of the remaining errors, and that models may adopt shallow heuristics on the dataset when entities are not masked.
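The entity masking the abstract refers to is a common TACRED-style preprocessing step: the subject and object mentions are replaced by their entity-type placeholders so a model cannot memorize specific entity names. A minimal sketch, assuming token spans as `(start, end)` index pairs with an exclusive end; the function and example sentence are illustrative, not taken from the paper:

```python
def mask_entities(tokens, subj_span, obj_span, subj_type, obj_type):
    """Replace the subject and object mentions with type placeholders.

    Spans are (start, end) token indices, end exclusive. This mirrors the
    TACRED-style convention of SUBJ-<TYPE> / OBJ-<TYPE> tokens.
    """
    out = []
    i = 0
    while i < len(tokens):
        if i == subj_span[0]:
            out.append(f"SUBJ-{subj_type}")
            i = subj_span[1]  # skip the rest of the subject mention
        elif i == obj_span[0]:
            out.append(f"OBJ-{obj_type}")
            i = obj_span[1]  # skip the rest of the object mention
        else:
            out.append(tokens[i])
            i += 1
    return out


# Hypothetical example: subject tokens 0-1, object token 4.
tokens = "Christoph Alt works at DFKI in Berlin".split()
masked = mask_entities(tokens, (0, 2), (4, 5), "PERSON", "ORGANIZATION")
# masked == ['SUBJ-PERSON', 'works', 'at', 'OBJ-ORGANIZATION', 'in', 'Berlin']
```

With masking applied, a classifier must rely on the sentence context rather than lexical cues from the entity names, which is exactly the shortcut the paper's error analysis probes.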


