
Exposing Shallow Heuristics of Relation Extraction Models with Challenge Data

by Shachar Rosenman, et al.

The process of collecting and annotating training data may introduce distribution artifacts that limit the ability of models to learn correct generalization behavior. We identify failure modes of state-of-the-art (SOTA) relation extraction (RE) models trained on TACRED, which we attribute to limitations in the data annotation process. To benchmark this behavior, we collect and annotate a challenge set we call Challenging RE (CRE), based on naturally occurring corpus examples. Our experiments with four SOTA RE models show that they have indeed adopted shallow heuristics that do not generalize to the challenge-set data. Further, we find that an alternative question-answering-based model performs significantly better than the SOTA models on the challenge set, despite worse overall TACRED performance. Adding some of the challenge data as training examples improves the model's performance. Finally, we provide concrete suggestions on how to improve RE data collection to alleviate this behavior.
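The question-answering formulation mentioned above casts each candidate relation as a reading-comprehension question about the subject entity, so that a QA model extracts the object span (or abstains) from the sentence. A minimal sketch of that framing is below; the question templates and function names are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: framing relation extraction as span-based QA.
# For each TACRED-style relation, a question template is instantiated with
# the subject entity; a reading-comprehension model would then be asked to
# find the object span in the sentence, or abstain if no answer exists.

# Template wording here is an illustrative assumption.
RELATION_QUESTIONS = {
    "per:city_of_birth": "In what city was {subj} born?",
    "org:founded_by": "Who founded {subj}?",
    "per:employee_of": "Who is {subj} employed by?",
}

def build_qa_instances(sentence, subject):
    """Turn one (sentence, subject) pair into QA instances, one per relation."""
    return [
        {"relation": rel,
         "question": template.format(subj=subject),
         "context": sentence}
        for rel, template in RELATION_QUESTIONS.items()
    ]

instances = build_qa_instances(
    "Steve Jobs and Steve Wozniak founded Apple in 1976.", "Apple"
)
for inst in instances:
    print(inst["relation"], "->", inst["question"])
```

Because each question names only the subject and the relation, a QA model must locate the object from the sentence itself, which discourages the entity-type shortcuts that classification-style RE models can exploit.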




Question Answering on Freebase via Relation Extraction and Textual Evidence

Existing knowledge-based question answering systems often rely on small ...

Separating Retention from Extraction in the Evaluation of End-to-end Relation Extraction

State-of-the-art NLP models can adopt shallow heuristics that limit thei...

A Question-answering Based Framework for Relation Extraction Validation

Relation extraction is an important task in knowledge acquisition and te...

TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task

TACRED (Zhang et al., 2017) is one of the largest, most widely used crow...

Event Guided Denoising for Multilingual Relation Learning

General purpose relation extraction has recently seen considerable gains...

On the Importance of Adaptive Data Collection for Extremely Imbalanced Pairwise Tasks

Many pairwise classification tasks, such as paraphrase detection and ope...

Identifying and Benchmarking Natural Out-of-Context Prediction Problems

Deep learning systems frequently fail at out-of-context (OOC) prediction...