UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models

05/02/2023
by Deming Ye, et al.

Recent research demonstrates that external knowledge injection can advance pre-trained language models (PLMs) in a variety of downstream NLP tasks. However, existing knowledge injection methods apply only to structured knowledge or only to unstructured knowledge, lacking a unified treatment of both. In this paper, we propose a UNified knowledge inTERface, UNTER, which provides a unified perspective for exploiting both structured and unstructured knowledge. In UNTER, we adopt the decoder as a unified knowledge interface, aligning span representations obtained from the encoder with their corresponding knowledge. This approach enables the encoder to uniformly invoke span-related knowledge from its parameters for downstream applications. Experimental results show that, with both forms of knowledge injected, UNTER achieves consistent improvements on a series of knowledge-driven NLP tasks, including entity typing, named entity recognition, and relation extraction, especially in low-resource scenarios.
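To make the abstract's core idea concrete, here is a minimal sketch of using a decoder as a knowledge interface: span representations pooled from an encoder serve as the decoder's memory, and the decoder is trained to produce span-related knowledge tokens. All module names, sizes, and the mean-pooling choice are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an encoder-decoder knowledge interface
# (sizes and pooling strategy are assumptions for illustration only).
import torch
import torch.nn as nn

class SpanKnowledgeInterface(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def span_repr(self, hidden, start, end):
        # Mean-pool encoder states over the span [start, end) -> (batch, 1, d_model)
        return hidden[:, start:end, :].mean(dim=1, keepdim=True)

    def forward(self, input_ids, span, knowledge_ids):
        hidden = self.encoder(self.embed(input_ids))   # contextual token states
        memory = self.span_repr(hidden, *span)         # span rep. as decoder memory
        dec_out = self.decoder(self.embed(knowledge_ids), memory)
        return self.lm_head(dec_out)                   # logits over knowledge tokens

model = SpanKnowledgeInterface()
input_ids = torch.randint(0, 1000, (2, 16))      # toy batch of input token ids
knowledge_ids = torch.randint(0, 1000, (2, 8))   # toy knowledge token sequence
logits = model(input_ids, (3, 6), knowledge_ids)
print(logits.shape)  # torch.Size([2, 8, 1000])
```

Training such a model on (span, knowledge) pairs with a standard cross-entropy loss would, in spirit, push span-related knowledge into the encoder's parameters, which is the property the abstract highlights for downstream use.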

