Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge

07/02/2020
by   Pat Verga, et al.

Massive language models are the core of modern NLP modeling and have been shown to encode impressive amounts of commonsense and factual information. However, that knowledge exists only within the latent parameters of the model, inaccessible to inspection and interpretation, and, even worse, factual information memorized from the training corpora is likely to become stale as the world changes. Knowledge stored as parameters will also inevitably exhibit all of the biases inherent in the source materials. To address these problems, we develop a neural language model that includes an explicit interface between symbolically interpretable factual information and subsymbolic neural knowledge. We show that this model dramatically improves performance on two knowledge-intensive question-answering tasks. More interestingly, the model can be updated without re-training by manipulating its symbolic representations. In particular, this model allows us to add new facts and overwrite existing ones in ways that are not possible with earlier models.
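The core idea — an explicit, editable interface between symbolic facts and neural representations — can be illustrated with a minimal key/value fact memory. The sketch below is not the authors' implementation; entity names, the random embeddings, and the `FactMemory` class are all illustrative assumptions. It only shows the mechanism the abstract describes: facts are stored as interpretable (subject, relation, object) rows, queries attend over the keys, and overwriting a row changes the model's answer with no retraining.

```python
# Hypothetical sketch of a fact-memory layer in the spirit of this paper.
# Facts are (subject, relation) -> object triples; keys and values are
# embedding rows, so editing a row updates answers without retraining.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # embedding size (arbitrary for the sketch)

# Symbolic side: names map to embedding rows (random here; trained in a real model).
entities = {name: rng.normal(size=DIM) for name in
            ["Paris", "France", "Tokyo", "Japan", "capital_of"]}

class FactMemory:
    def __init__(self):
        self.symbols = []  # interpretable (subject, relation, object) triples
        self.keys = []     # key = subject embedding + relation embedding
        self.values = []   # value = object embedding

    def add(self, subj, rel, obj):
        self.symbols.append((subj, rel, obj))
        self.keys.append(entities[subj] + entities[rel])
        self.values.append(entities[obj])

    def overwrite(self, subj, rel, new_obj):
        # Edit the symbolic triple and its value row in place: no retraining.
        for i, (s, r, _) in enumerate(self.symbols):
            if (s, r) == (subj, rel):
                self.symbols[i] = (subj, rel, new_obj)
                self.values[i] = entities[new_obj]

    def query(self, subj, rel):
        # Dot-product attention over fact keys; softmax-weighted mix of values.
        q = entities[subj] + entities[rel]
        scores = np.array(self.keys) @ q
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ np.array(self.values)

def nearest_entity(vec):
    # Decode a returned value vector to the closest stored entity name.
    names = list(entities)
    return names[int(np.argmax([vec @ entities[n] for n in names]))]

mem = FactMemory()
mem.add("Paris", "capital_of", "France")
mem.add("Tokyo", "capital_of", "Japan")
print(nearest_entity(mem.query("Paris", "capital_of")))

# The abstract's key claim, in miniature: overwrite the stored fact and
# the same query now decodes to the new object.
mem.overwrite("Paris", "capital_of", "Japan")
print(nearest_entity(mem.query("Paris", "capital_of")))
```

Because the fact rows stay symbolically labeled, they remain inspectable, which is the interpretability property the abstract emphasizes alongside editability.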

Related research

05/25/2023
UFO: Unified Fact Obtaining for Commonsense Question Answering
Leveraging external knowledge to enhance the reasoning ability is crucia...

09/19/2019
Exploring ways to incorporate additional knowledge to improve Natural Language Commonsense Question Answering
DARPA and Allen AI have proposed a collection of datasets to encourage r...

08/12/2022
LM-CORE: Language Models with Contextually Relevant External Knowledge
Large transformer-based pre-trained language models have achieved impres...

01/17/2022
Generalizable Neuro-symbolic Systems for Commonsense Question Answering
This chapter illustrates how suitable neuro-symbolic models for language...

08/01/2016
A Neural Knowledge Language Model
Current language models have a significant limitation in the ability to ...

10/07/2022
Calibrating Factual Knowledge in Pretrained Language Models
Previous literature has proved that Pretrained Language Models (PLMs) ca...

05/02/2023
Can LMs Learn New Entities from Descriptions? Challenges in Propagating Injected Knowledge
Pre-trained language models (LMs) are used for knowledge intensive tasks...
