The Effectiveness of Masked Language Modeling and Adapters for Factual Knowledge Injection

10/03/2022
by Sondre Wold, et al.

This paper studies the problem of injecting factual knowledge into large pre-trained language models. We train adapter modules on parts of the ConceptNet knowledge graph using the masked language modeling objective and evaluate the success of the method by a series of probing experiments on the LAMA probe. Mean P@K curves for different configurations indicate that the technique is effective, increasing the performance on subsets of the LAMA probe for large values of k while adding as little as 2.1% additional parameters to the original models.

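To make the approach concrete, the snippet below is a minimal sketch of the injection step, not the paper's exact setup: it verbalizes a few (head, relation, tail) ConceptNet triples with hand-written templates, adds a bottleneck adapter to a frozen BERT via AdapterHub's `adapters` package, and trains only the adapter with the standard masked language modeling objective. The templates, the choice of `bert-base-uncased`, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of knowledge injection with an MLM-trained adapter,
# assuming Hugging Face transformers and AdapterHub's `adapters`
# package. Templates and hyperparameters are illustrative only.
import torch
from torch.utils.data import DataLoader
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling)
import adapters

# Hand-written templates for verbalizing ConceptNet triples
# (hypothetical; the paper's exact verbalizations may differ).
TEMPLATES = {
    "UsedFor": "A {head} is used for {tail}.",
    "IsA": "A {head} is a {tail}.",
    "AtLocation": "You are likely to find a {head} at a {tail}.",
}

def verbalize(triples):
    """Turn (head, relation, tail) triples into plain sentences."""
    return [TEMPLATES[r].format(head=h, tail=t)
            for h, r, t in triples if r in TEMPLATES]

triples = [("car", "UsedFor", "driving"), ("apple", "IsA", "fruit")]
sentences = verbalize(triples)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Add a bottleneck adapter and train only its weights: the base
# model stays frozen, so only a small fraction of the parameters
# receives gradients.
adapters.init(model)
model.add_adapter("conceptnet")
model.train_adapter("conceptnet")

# Standard MLM collator: randomly replaces 15% of tokens with [MASK].
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
loader = DataLoader([tokenizer(s) for s in sentences],
                    batch_size=2, collate_fn=collator)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for batch in loader:
    loss = model(**batch).loss  # cross-entropy over masked positions
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```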

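Evaluation is then a ranking exercise: for each LAMA-style cloze statement, rank the vocabulary at the [MASK] position and count the query as a hit if the gold token appears among the top k predictions. The sketch below uses made-up queries as stand-ins for the actual LAMA data and assumes single-wordpiece answers, as in the original LAMA setup; in practice the adapter-augmented model from the sketch above would be evaluated.

```python
# Minimal mean-P@k sketch for LAMA-style cloze probing. Queries are
# made-up stand-ins; gold answers are assumed to be single wordpieces.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Loaded here only to keep the sketch self-contained; in practice this
# would be the adapter-augmented model trained above.
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def mean_precision_at_k(queries, k=10):
    """queries: list of (cloze sentence containing [MASK], gold token)."""
    model.eval()
    hits = 0
    for sentence, gold in queries:
        inputs = tokenizer(sentence, return_tensors="pt")
        mask_pos = (inputs["input_ids"][0]
                    == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
        with torch.no_grad():
            logits = model(**inputs).logits
        # Top-k vocabulary ids at the [MASK] position.
        top_k = logits[0, mask_pos].topk(k).indices.flatten().tolist()
        hits += int(tokenizer.convert_tokens_to_ids(gold) in top_k)
    return hits / len(queries)

queries = [("A car is used for [MASK].", "driving"),
           ("An apple is a [MASK].", "fruit")]
for k in (1, 10, 100):
    print(f"mean P@{k} = {mean_precision_at_k(queries, k=k):.2f}")
```

Plotting these means over a range of k values gives the P@K curves used to compare adapter configurations against the unmodified baseline.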
Related research

10/12/2020
On the Complementary Nature of Knowledge Graph Embedding, Fine Grain Entity Types, and Language Modeling
We demonstrate the complementary natures of neural knowledge graph embed...

06/20/2023
ChatGPT is not Enough: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling
Recently, ChatGPT, a representative large language model (LLM), has gain...

08/21/2019
Latent Relation Language Models
In this paper, we propose Latent Relation Language Models (LRLMs), a cla...

05/24/2023
Text-Augmented Open Knowledge Graph Completion via Pre-Trained Language Models
The mission of open knowledge graph (KG) completion is to draw new findi...

06/01/2023
Exposing Attention Glitches with Flip-Flop Language Modeling
Why do large language models sometimes output factual inaccuracies and e...

12/14/2021
Towards Interactive Language Modeling
Interaction between caregivers and children plays a critical role in hum...

02/25/2022
On the data requirements of probing
As large and powerful neural language models are developed, researchers ...
