Improving Distantly Supervised Relation Extraction by Natural Language Inference

07/31/2022
by Kang Zhou, et al.

To reduce the human annotation effort for relation extraction (RE) tasks, distantly supervised approaches have been proposed, but they often struggle with low performance. In this work, we propose DSRE-NLI, a novel framework that considers both distant supervision from existing knowledge bases and indirect supervision from pretrained language models trained for other tasks. DSRE-NLI energizes an off-the-shelf natural language inference (NLI) engine with a semi-automatic relation verbalization (SARV) mechanism to provide indirect supervision, and further consolidates the distant annotations to benefit multi-classification RE models. The NLI-based indirect supervision requires only one human-written relation verbalization template per relation type, serving as a semantically general template; the template set is then enriched with high-quality textual patterns automatically mined from the distantly annotated corpus. With two simple and effective data consolidation strategies, the quality of the training data is substantially improved. Extensive experiments demonstrate that the proposed framework significantly improves the state-of-the-art (SOTA) performance (by up to 7.73% in F1) on distantly supervised RE benchmark datasets.
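
As a rough illustration (not the authors' code), the sketch below shows how an off-the-shelf NLI engine can check a distantly annotated sentence against a relation verbalization template: the sentence is the premise, the filled-in template is the hypothesis, and the entailment probability decides whether the annotation is trusted. The MNLI model choice, the "founded_by" template, the example sentence, and the entailment_score helper are assumptions for illustration, not details from the paper.

```python
from transformers import pipeline

# Off-the-shelf NLI engine; assumption: any model exposing an ENTAILMENT label works.
nli = pipeline("text-classification", model="roberta-large-mnli")

def entailment_score(sentence: str, head: str, tail: str, template: str) -> float:
    """Return the probability that `sentence` entails the verbalized relation."""
    hypothesis = template.format(head=head, tail=tail)
    # Premise/hypothesis pair scored over all three NLI labels.
    scores = nli({"text": sentence, "text_pair": hypothesis}, top_k=None)
    return next(s["score"] for s in scores if s["label"] == "ENTAILMENT")

# Hypothetical verbalization template for a "founded_by" relation.
sentence = "Steve Jobs and Steve Wozniak started Apple in a garage in 1976."
score = entailment_score(sentence, head="Apple", tail="Steve Jobs",
                         template="{head} was founded by {tail}.")
print(f"entailment probability: {score:.3f}")
# A distant annotation would be kept only if the score clears some threshold.
```

In this reading, each human-written template (later enriched with mined textual patterns) is just another hypothesis to test, so verifying an annotation reduces to a handful of NLI calls per sentence.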

Related research

08/22/2021
Improving Distantly Supervised Relation Extraction with Self-Ensemble Noise Filtering
Distantly supervised models are very popular for relation extraction sin...

10/30/2017
Indirect Supervision for Relation Extraction using Question-Answer Pairs
Automatic relation extraction (RE) for types of interest is of great imp...

12/21/2022
Can NLI Provide Proper Indirect Supervision for Low-resource Biomedical Relation Extraction?
Two key obstacles in biomedical relation extraction (RE) are the scarcit...

05/19/2022
Summarization as Indirect Supervision for Relation Extraction
Relation extraction (RE) models have been challenged by their reliance o...

06/28/2023
LLM Calibration and Automatic Hallucination Detection via Pareto Optimal Self-supervision
Large language models (LLMs) have demonstrated remarkable capabilities o...

10/29/2020
AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
The remarkable success of pretrained language models has motivated the s...

05/11/2022
Pre-trained Language Models as Re-Annotators
Annotation noise is widespread in datasets, but manually revising a flaw...
