How Much Knowledge Can You Pack Into the Parameters of a Language Model?

02/10/2020
by Adam Roberts, et al.

It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales surprisingly well with model size and outperforms models that explicitly look up knowledge on the open-domain variants of Natural Questions and WebQuestions.
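The setting described above is closed-book question answering: the model receives only the question and must produce the answer from what is stored in its parameters, with no retrieved passage or external knowledge source. The sketch below illustrates that interface with an off-the-shelf T5-style checkpoint through the Hugging Face transformers API. The checkpoint name, prompt, and generation settings are illustrative assumptions rather than the paper's exact configuration; the paper fine-tunes pre-trained models on question-answer pairs before evaluation.

```python
# Minimal closed-book QA sketch: the model sees only the question, with no
# external context or retrieved documents. The checkpoint name, prompt, and
# generation settings are illustrative assumptions, not the paper's exact
# setup; competitive accuracy requires fine-tuning on QA pairs first.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "t5-base"  # hypothetical choice; larger models store and recall more facts
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

def answer_closed_book(question: str) -> str:
    """Generate an answer from the model's parameters alone."""
    inputs = tokenizer(question, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=16)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(answer_closed_book("Who wrote the novel Moby-Dick?"))
```

Fine-tuning would swap the raw checkpoint for one trained on (question, answer) pairs from the target dataset; the question-in, answer-out interface stays the same, which is what makes the comparison with retrieval-based open-domain systems direct.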
