A Two-step Approach for Handling Zero-Cardinality in Relation Extraction

02/20/2023
by Pratik Saini, et al.

Relation tuple extraction from text is an important task for building knowledge bases. Recently, joint entity and relation extraction models have achieved very high F1 scores on this task. However, the experimental settings used by these models are restrictive and the datasets used in the experiments are not realistic: they do not include sentences with zero tuples (zero cardinality). In this paper, we evaluate state-of-the-art joint entity and relation extraction models in a more realistic setting that includes sentences containing no tuples. Our experiments show a significant drop in their F1 scores in this setting (∼10-15% on one dataset and ∼6-14% on another). We also propose a two-step modeling approach using a simple BERT-based classifier that improves the overall performance of these models in this realistic experimental setup.
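The two-step setup described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `has_tuple` stands in for a fine-tuned BERT sentence classifier that predicts whether a sentence contains any relation tuple, and `extract` stands in for any joint entity and relation extraction model; the toy functions at the bottom are hypothetical placeholders.

```python
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def two_step_extract(
    sentences: List[str],
    has_tuple: Callable[[str], bool],        # step 1: zero-cardinality classifier
    extract: Callable[[str], List[Triple]],  # step 2: joint extraction model
) -> List[List[Triple]]:
    """Run the extractor only on sentences the classifier accepts.

    Sentences predicted to have zero cardinality yield an empty tuple list,
    so the extraction model never sees them at inference time.
    """
    results: List[List[Triple]] = []
    for sent in sentences:
        results.append(extract(sent) if has_tuple(sent) else [])
    return results

# Hypothetical toy stand-ins, for illustration only.
def toy_classifier(sent: str) -> bool:
    return "was born in" in sent

def toy_extractor(sent: str) -> List[Triple]:
    subj, _, obj = sent.partition(" was born in ")
    return [(subj, "born_in", obj.rstrip("."))]

print(two_step_extract(
    ["Ada Lovelace was born in London.", "The sky is blue."],
    toy_classifier, toy_extractor))
```

The design point is that the classifier acts as a cheap filter: the joint extraction model is trained and evaluated only on tuple-bearing sentences, while zero-tuple sentences are handled before extraction ever runs.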


Related research

09/10/2021 — D-REX: Dialogue Relation Extraction with Explanations
Existing research studies on cross-sentence relation extraction in long-...

05/16/2023 — About Evaluation of F1 Score for RECENT Relation Extraction System
This document contains a discussion of the F1 score evaluation used in t...

04/16/2021 — Re-TACRED: Addressing Shortcomings of the TACRED Dataset
TACRED is one of the largest and most widely used sentence-level relatio...

01/22/2018 — Unsupervised Open Relation Extraction
We explore methods to extract relations between named entities from free...

04/10/2021 — ZS-BERT: Towards Zero-Shot Relation Extraction with Attribute Representation Learning
While relation extraction is an essential task in knowledge acquisition ...

10/05/2021 — FoodChem: A food-chemical relation extraction model
In this paper, we present FoodChem, a new Relation Extraction (RE) model...

05/23/2023 — Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction
The robustness to distribution changes ensures that NLP models can be su...
