Understanding Knowledge Integration in Language Models with Graph Convolutions

02/02/2022
by Yifan Hou, et al.

Pretrained language models (LMs) do not capture factual knowledge very well. This has led to the development of a number of knowledge integration (KI) methods which aim to incorporate external knowledge into pretrained LMs. Even though KI methods show some performance gains over vanilla LMs, the inner workings of these methods are not well understood. For instance, it is unclear how and what kind of knowledge is effectively integrated into these models, and whether such integration may lead to catastrophic forgetting of already learned knowledge. This paper revisits the KI process in these models from an information-theoretic view and shows that KI can be interpreted using a graph convolution operation. We propose a probe model called Graph Convolution Simulator (GCS) for interpreting knowledge-enhanced LMs and exposing what kind of knowledge is integrated into them. We conduct experiments to verify that GCS can indeed be used to correctly interpret the KI process, and we use it to analyze two well-known knowledge-enhanced LMs, ERNIE and K-Adapter, finding that only a small amount of factual knowledge is integrated into them. We stratify knowledge by relation type and find that ERNIE and K-Adapter integrate different kinds of knowledge to different extents. Our analysis also shows that simply increasing the size of the KI corpus may not lead to better KI; fundamental advances may be needed.
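The abstract's key observation is that knowledge integration can be read as a graph convolution over the knowledge graph. As a rough, self-contained sketch of what a single standard graph-convolution step looks like (this is the generic GCN propagation rule, not the paper's GCS probe; the function name, toy graph, and random weights below are illustrative assumptions):

```python
import numpy as np

def graph_convolution(A, H, W):
    """One generic graph-convolution step over node features.

    A: (n, n) adjacency matrix, H: (n, d_in) node features,
    W: (d_in, d_out) weight matrix.
    """
    # Add self-loops so each node keeps its own features during aggregation.
    A_hat = A + np.eye(A.shape[0])
    # Symmetric degree normalization, as in a standard GCN layer.
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    # Aggregate neighbor features, apply a linear transform, then ReLU.
    return np.maximum(A_norm @ H @ W, 0.0)

# Toy knowledge graph with 3 entities and edges 0-1 and 1-2 (hypothetical example).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.random.randn(3, 4)   # entity embeddings (3 nodes, 4 dims)
W = np.random.randn(4, 4)   # weight matrix (random here, learned in practice)
print(graph_convolution(A, H, W).shape)  # (3, 4)
```

In this view, an entity representation after KI is a mixture of its own embedding and those of its graph neighbors, which is what a probe such as GCS is designed to detect.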

Related research

10/16/2021  Knowledge Enhanced Pretrained Language Models: A Comprehensive Survey
Pretrained Language Models (PLM) have established a new paradigm through...

03/16/2021  Structural Adapters in Pretrained Language Models for AMR-to-text Generation
Previous work on text generation from graph-structured data relies on pr...

09/01/2021  Does Knowledge Help General NLU? An Empirical Study
It is often observed in knowledge-centric tasks (e.g., common sense ques...

10/29/2020  AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
The remarkable success of pretrained language models has motivated the s...

04/12/2022  A Review on Language Models as Knowledge Bases
Recently, there has been a surge of interest in the NLP community on the...

11/18/2022  Context Variance Evaluation of Pretrained Language Models for Prompt-based Biomedical Knowledge Probing
Pretrained language models (PLMs) have motivated research on what kinds ...

12/22/2020  Seeing past words: Testing the cross-modal capabilities of pretrained V&L models
We investigate the ability of general-purpose pretrained vision and lang...
