A Survey of Knowledge-Enhanced Pre-trained Language Models

11/11/2022
by Linmei Hu, et al.

Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning, have yielded promising performance on various tasks in Natural Language Processing (NLP). However, although PLMs with huge numbers of parameters can effectively capture rich knowledge from massive training text and benefit downstream tasks at the fine-tuning stage, they still have limitations, such as poor reasoning ability, due to the lack of external knowledge. A growing body of research has been dedicated to incorporating knowledge into PLMs to address these issues. In this paper, we present a comprehensive review of Knowledge-Enhanced Pre-trained Language Models (KE-PLMs) to provide clear insight into this thriving field. We introduce appropriate taxonomies for Natural Language Understanding (NLU) and Natural Language Generation (NLG), respectively, to highlight these two main tasks of NLP. For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG), and rule knowledge. KE-PLMs for NLG are categorized into KG-based and retrieval-based methods. Finally, we point out some promising future directions for KE-PLMs.
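To make the retrieval-based NLG category concrete, here is a minimal sketch (not taken from the survey itself): facts retrieved from an external knowledge source are prepended to the model input before generation. The toy KNOWLEDGE_BASE dictionary, the retrieve helper, and the choice of t5-small are illustrative assumptions, not a method proposed by the authors.

```python
# Minimal sketch of retrieval-based knowledge enhancement for NLG.
# The knowledge base, retriever, and model choice are illustrative only;
# real systems retrieve from large corpora or knowledge graphs.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical external knowledge source keyed by entity mention.
KNOWLEDGE_BASE = {
    "marie curie": "Marie Curie won Nobel Prizes in Physics and Chemistry.",
    "alan turing": "Alan Turing proposed the Turing test in 1950.",
}

def retrieve(query: str) -> str:
    """Naive keyword retrieval: return facts whose key appears in the query."""
    query_lower = query.lower()
    facts = [fact for key, fact in KNOWLEDGE_BASE.items() if key in query_lower]
    return " ".join(facts)

def knowledge_enhanced_generate(question: str) -> str:
    """Prepend retrieved knowledge to the input, then generate an answer."""
    tokenizer = AutoTokenizer.from_pretrained("t5-small")
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
    context = retrieve(question)
    prompt = f"question: {question} context: {context}"
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(knowledge_enhanced_generate("What prizes did Marie Curie win?"))
```

By contrast, KG-based methods typically inject structured knowledge (e.g., entity embeddings or triples) into the model itself, often during pre-training rather than at inference time.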

Related research

12/27/2022
A Survey on Knowledge-Enhanced Pre-trained Language Models
Natural Language Processing (NLP) has been revolutionized by the use of ...

04/14/2021
K-PLUG: Knowledge-injected Pre-trained Language Model for Natural Language Understanding and Generation in E-Commerce
Existing pre-trained language models (PLMs) have demonstrated the effect...

03/10/2023
An Overview on Language Models: Recent Developments and Outlook
Language modeling studies the probability distributions over strings of ...

10/16/2021
Knowledge Enhanced Pretrained Language Models: A Comprehensive Survey
Pretrained Language Models (PLM) have established a new paradigm through...

07/24/2023
A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models
Prompt engineering is a technique that involves augmenting a large pre-t...

03/18/2023
An Empirical Study of Pre-trained Language Models in Simple Knowledge Graph Question Answering
Large-scale pre-trained language models (PLMs) such as BERT have recentl...

04/25/2022
Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking
Passage re-ranking is to obtain a permutation over the candidate passage...
