Analysing Errors of Open Information Extraction Systems

07/24/2017
by Rudolf Schneider, et al.

We report results from benchmarking Open Information Extraction (OIE) systems using RelVis, a toolkit for benchmarking OIE systems. Our comprehensive benchmark comprises three data sets from the news domain and one data set from Wikipedia, with 4522 labeled sentences and 11243 binary or n-ary OIE relations in total. In our analysis of these data sets we compared the performance of four popular OIE systems: ClausIE, OpenIE 4.2, Stanford OpenIE and PredPatt. In addition, we evaluated the impact of five common error classes on a subset of 749 n-ary tuples. From this in-depth analysis we identify important research directions for a next generation of OIE systems.
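To make the distinction between binary and n-ary OIE relations concrete, here is a minimal sketch of how such extractions are commonly represented as predicate-argument tuples. The `OIETuple` class and the example sentences are illustrative assumptions, not part of the RelVis toolkit or any of the benchmarked systems.

```python
from typing import NamedTuple, Tuple

class OIETuple(NamedTuple):
    """An Open IE extraction: a predicate with one or more arguments."""
    predicate: str
    arguments: Tuple[str, ...]

    @property
    def is_binary(self) -> bool:
        # A binary tuple has exactly two arguments (subject, object);
        # extractions with additional arguments are n-ary.
        return len(self.arguments) == 2

# Binary extraction from "Barack Obama was born in Hawaii."
binary = OIETuple("was born in", ("Barack Obama", "Hawaii"))

# N-ary extraction from "Obama gave a speech in Berlin in 2013."
nary = OIETuple("gave", ("Obama", "a speech", "in Berlin", "in 2013"))

print(binary.is_binary)  # True
print(nary.is_binary)    # False
```

Benchmarks such as the one described above must match system output against gold tuples of both shapes, which is one reason error analysis on n-ary tuples is treated separately.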

