Understanding Finetuning for Factual Knowledge Extraction from Language Models

01/26/2023
by Mehran Kazemi et al.

Language models (LMs) pretrained on large corpora of web text have been observed to contain large amounts of various types of knowledge about the world. This observation has led to a new and exciting paradigm in knowledge graph construction where, instead of manual curation or text mining, one extracts knowledge from the parameters of an LM. Recently, it has been shown that finetuning LMs on one set of factual knowledge makes them produce better answers to queries from a different set, making finetuned LMs a good candidate for knowledge extraction and, consequently, knowledge graph construction. In this paper, we analyze finetuned LMs for factual knowledge extraction. We show that, along with its previously known positive effects, finetuning also leads to a (potentially harmful) phenomenon we call Frequency Shock: at test time, the model over-predicts rare entities that appear in the training set and under-predicts common entities that do not appear in the training set often enough. We show that Frequency Shock degrades the model's predictions and that, beyond a point, its harm can even outweigh the positive effects of finetuning, making finetuning harmful overall. We then consider two remedies for the identified negative effect: (1) model mixing and (2) mixture finetuning with the LM's pretraining task. The two solutions combined lead to significant improvements over vanilla finetuning.
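As a rough illustration of the first remedy, model mixing can be read as interpolating the answer distributions of the finetuned and pretrained models, so the pretrained model's priors temper the finetuned model's over-prediction of rare training-set entities. The sketch below is a minimal, hypothetical implementation: the function name, the mixing weight `alpha`, and the example distributions are illustrative assumptions, not details taken from the paper.

```python
def mix_predictions(p_finetuned, p_pretrained, alpha=0.5):
    """Linearly interpolate two probability distributions over candidate answers.

    alpha is an assumed mixing weight: 1.0 recovers the finetuned model,
    0.0 recovers the pretrained model.
    """
    assert len(p_finetuned) == len(p_pretrained)
    mixed = [alpha * pf + (1.0 - alpha) * pp
             for pf, pp in zip(p_finetuned, p_pretrained)]
    # Renormalize to guard against numerical drift.
    total = sum(mixed)
    return [m / total for m in mixed]

# Toy example: the finetuned model over-predicts a rare entity (index 0)
# that the pretrained model considers unlikely; mixing pulls the
# distribution back toward the pretrained model's prior.
p_ft = [0.7, 0.2, 0.1]
p_pt = [0.1, 0.6, 0.3]
print(mix_predictions(p_ft, p_pt, alpha=0.5))
```

Whether the mixing happens at the level of answer distributions, token logits, or model parameters is a design choice this sketch does not settle; the paper's own formulation should be consulted for the exact scheme.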


