Zero-Shot Information Extraction as a Unified Text-to-Triple Translation

09/23/2021
by Chenguang Wang et al.

We cast a suite of information extraction tasks into a text-to-triple translation framework. Instead of solving each task with task-specific datasets and models, we formalize every task as a translation from task-specific input text to output triples. By conditioning on the task-specific input, we enable task-agnostic translation that leverages the latent knowledge a pre-trained language model holds about the task. We further demonstrate that a simple pre-training task of predicting which relational information corresponds to which input text is an effective way to produce task-specific outputs, enabling zero-shot transfer of our framework to downstream tasks. We study the zero-shot performance of this framework on open information extraction (OIE2016, NYT, WEB, PENN), relation classification (FewRel and TACRED), and factual probing (Google-RE and T-REx). The model transfers non-trivially to most tasks and is often competitive with fully supervised methods without any task-specific training. For instance, we significantly outperform the F1 score of a supervised open information extraction method without using its training set.
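The unifying idea of the abstract is that different extraction tasks share one output space: (subject, relation, object) triples. The following minimal sketch illustrates that shared format; the sentences, triples, and helper names are illustrative assumptions, not code or data from the paper.

```python
from typing import List, NamedTuple

class Triple(NamedTuple):
    """Uniform output unit shared by all tasks in the framework."""
    subject: str
    relation: str
    obj: str

def format_output(triples: List[Triple]) -> str:
    """Serialize triples into one text form, so every task uses the same output space."""
    return "; ".join(f"({t.subject}, {t.relation}, {t.obj})" for t in triples)

# Open information extraction: input text -> all relational triples in it.
oie_input = "Barack Obama was born in Hawaii."
oie_output = [Triple("Barack Obama", "was born in", "Hawaii")]

# Relation classification: text plus an entity pair -> the triple naming their relation.
rc_input = ("Barack Obama was born in Hawaii.", ("Barack Obama", "Hawaii"))
rc_output = [Triple("Barack Obama", "place of birth", "Hawaii")]

# Factual probing: a cloze-style statement -> the completed triple.
probe_input = "Barack Obama was born in [MASK]."
probe_output = [Triple("Barack Obama", "born in", "Hawaii")]

print(format_output(oie_output))  # (Barack Obama, was born in, Hawaii)
```

Because all three tasks emit the same serialized triples, a single text-to-triple translation model can serve them zero-shot, with the input text alone signaling which task is being performed.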


