When to Use What: An In-Depth Comparative Empirical Analysis of OpenIE Systems for Downstream Applications

11/15/2022
by Kevin Pei, et al.

Open Information Extraction (OpenIE) has been used in the pipelines of various NLP tasks. Unfortunately, there is no clear consensus on which models to use for which tasks. Muddying things further is the lack of comparisons that take differing training sets into account. In this paper, we present an application-focused empirical survey of neural OpenIE models, training sets, and benchmarks in an effort to help users choose the most suitable OpenIE systems for their applications. We find that the different assumptions made by different models and datasets have a statistically significant effect on performance, making it important to choose the most appropriate model for one's application. We demonstrate the applicability of our recommendations on a downstream Complex QA application.
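For readers unfamiliar with the task, OpenIE systems map a sentence to one or more (subject, relation, object) triples. The sketch below is purely illustrative — a naive toy splitter, not any of the neural models surveyed in the paper — and exists only to show the output format those systems produce:

```python
# Toy illustration of the OpenIE output format: (subject, relation, object)
# triples. Real neural OpenIE systems learn this mapping from annotated data;
# this naive word-position split is NOT a real extractor.
def toy_openie(sentence: str) -> list[tuple[str, str, str]]:
    """Extract one triple from a simple 'subject verb object...' sentence."""
    words = sentence.rstrip(".").split()
    if len(words) < 3:
        return []
    # Naive assumption: first word is the subject, second is the relation,
    # and everything remaining forms the object span.
    return [(words[0], words[1], " ".join(words[2:]))]

print(toy_openie("Einstein developed the theory of relativity."))
# → [('Einstein', 'developed', 'the theory of relativity')]
```

Downstream applications such as Complex QA then consume these triples, which is why the structural assumptions each model makes about subject, relation, and object boundaries matter for end-task performance.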

