BenchIE: Open Information Extraction Evaluation Based on Facts, Not Tokens

09/14/2021
by   Kiril Gashteovski, et al.

Intrinsic evaluations of OIE systems are carried out either manually – with human evaluators judging the correctness of extractions – or automatically, on standardized benchmarks. The latter, while much more cost-effective, is less reliable, primarily because of the incompleteness of the existing OIE benchmarks: the ground truth extractions do not include all acceptable variants of the same fact, leading to unreliable assessment of models' performance. Moreover, the existing OIE benchmarks are available for English only. In this work, we introduce BenchIE: a benchmark and evaluation framework for comprehensive evaluation of OIE systems for English, Chinese and German. In contrast to existing OIE benchmarks, BenchIE takes into account informational equivalence of extractions: our gold standard consists of fact synsets, clusters in which we exhaustively list all surface forms of the same fact. We benchmark several state-of-the-art OIE systems using BenchIE and demonstrate that these systems are significantly less effective than indicated by existing OIE benchmarks. We make BenchIE (data and evaluation code) publicly available.
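The fact-synset scoring principle described above can be sketched as follows. This is an illustrative sketch, not the released BenchIE evaluation code: the function name `score` and the data layout (triples as tuples, synsets as sets of triples) are assumptions made for clarity. An extraction counts as correct if it exactly matches any surface form in any synset, and a fact synset counts as recalled if at least one of its surface forms was extracted.

```python
def score(extractions: list, fact_synsets: list) -> dict:
    """Precision/recall over fact synsets (illustrative, not official code).

    extractions:  list of (subject, relation, object) triples from an OIE system
    fact_synsets: list of sets, each exhaustively listing all acceptable
                  surface-form triples of one underlying fact
    """
    # An extraction is correct iff it matches SOME form in SOME synset.
    correct = sum(1 for t in extractions
                  if any(t in synset for synset in fact_synsets))
    # A fact is recalled iff SOME of its forms was extracted.
    recalled = sum(1 for synset in fact_synsets
                   if any(t in synset for t in extractions))
    precision = correct / len(extractions) if extractions else 0.0
    recall = recalled / len(fact_synsets) if fact_synsets else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}
```

For example, with one fact synset {("Michael", "was born in", "Brazil"), ("Michael", "born in", "Brazil")} and the extractions [("Michael", "born in", "Brazil"), ("Michael", "is", "happy")], the second extraction matches no synset, so precision is 0.5 while recall is 1.0. Under token-level matching, near-miss variants of a fact would be partially credited or penalized; the synset formulation instead makes correctness a binary, fact-level decision.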


