Budget Sensitive Reannotation of Noisy Relation Classification Data Using Label Hierarchy

12/26/2021
by   Akshay Parekh, et al.

Large crowd-sourced datasets are often noisy, and relation classification (RC) datasets are no exception. Reannotating the entire dataset is one possible solution; however, it is not always viable due to time and budget constraints. This paper addresses the problem of efficiently reannotating a large noisy RC dataset. Our goal is to catch more annotation errors while reannotating fewer instances. Existing work on RC dataset reannotation lacks flexibility in how much data to reannotate. We introduce the concept of a reannotation budget to overcome this limitation. The immediate follow-up problem is: given a specific reannotation budget, which subset of the data should we reannotate? To address this problem, we present two strategies for selectively reannotating RC datasets. Our strategies utilize the taxonomic hierarchy of relation labels. The intuition of our work is to rely on the graph distance between the actual and predicted relation labels in the label hierarchy graph. We evaluate our reannotation strategies on the well-known TACRED dataset. We design our experiments to answer three specific research questions. First, does our strategy select novel candidates for reannotation? Second, for a given reannotation budget, is our reannotation strategy more efficient at catching annotation errors? Third, what is the impact of data reannotation on RC model performance measurement? Experimental results show that both of our reannotation strategies are novel and efficient. Our analysis indicates that the currently reported performance of RC models on noisy TACRED data is inflated.
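The core idea, selecting reannotation candidates by the graph distance between the annotated and model-predicted labels in the label hierarchy, can be sketched in a few lines. The following is a minimal illustration, not the paper's exact method: the toy hierarchy, the instance format, and the function names (build_adjacency, graph_distance, select_for_reannotation) are all hypothetical, and the ranking simply takes the `budget` instances with the largest label distance.

```python
from collections import deque

# Hypothetical toy label hierarchy in the spirit of TACRED relations,
# given as parent -> child edges (illustrative only).
HIERARCHY_EDGES = [
    ("root", "per"), ("root", "org"),
    ("per", "per:employee_of"), ("per", "per:schools_attended"),
    ("org", "org:founded_by"), ("org", "org:top_members"),
]

def build_adjacency(edges):
    """Build an undirected adjacency map from the hierarchy edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def graph_distance(adj, src, dst):
    """Shortest-path (hop) distance between two labels in the hierarchy graph."""
    if src == dst:
        return 0
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt == dst:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")  # labels lie in disconnected components

def select_for_reannotation(instances, adj, budget):
    """Rank instances by the hierarchy distance between the annotated (gold)
    and model-predicted labels, and return the top `budget` candidates."""
    scored = [(graph_distance(adj, x["gold"], x["pred"]), x) for x in instances]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [x for _, x in scored[:budget]]

if __name__ == "__main__":
    adj = build_adjacency(HIERARCHY_EDGES)
    data = [
        {"id": 1, "gold": "per:employee_of", "pred": "org:founded_by"},
        {"id": 2, "gold": "per:employee_of", "pred": "per:schools_attended"},
        {"id": 3, "gold": "org:top_members", "pred": "org:top_members"},
    ]
    # With budget=2, the instances whose predicted label is farthest from the
    # gold label in the hierarchy are selected for reannotation.
    for inst in select_for_reannotation(data, adj, budget=2):
        print(inst["id"], inst["gold"], "->", inst["pred"])
```

In this sketch, a larger distance between the gold and predicted labels is taken as a stronger signal of a possible annotation error, so those instances are prioritized under the fixed budget.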


