Relational Memory Augmented Language Models

01/24/2022
by Qi Liu, et al.

We present a memory-augmented approach to condition an autoregressive language model on a knowledge graph. We represent the graph as a collection of relation triples and retrieve relevant relations for a given context to improve text generation. Experiments on WikiText-103, WMT19, and enwik8 English datasets demonstrate that our approach produces a better language model in terms of perplexity and bits per character. We also show that relational memory improves coherence, is complementary to token-based memory, and enables causal interventions. Our model provides a simple yet effective way to combine an autoregressive language model with a knowledge graph for more coherent and logical generation.
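To make the two steps in the abstract concrete, the sketch below illustrates (1) representing a knowledge graph as relation triples and (2) retrieving relations relevant to the current context before generation. It is a minimal illustration, not the authors' implementation: the Triple class, the substring-matching retrieve function, the example graph, and the prompt-style serialization are all assumptions standing in for the paper's entity linking, learned retriever, and memory-based integration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    """One knowledge-graph relation triple (subject, relation, object)."""
    subject: str
    relation: str
    obj: str

# A miniature knowledge graph as a collection of relation triples (illustrative).
KG = [
    Triple("Barack Obama", "spouse", "Michelle Obama"),
    Triple("Barack Obama", "born_in", "Honolulu"),
    Triple("Michelle Obama", "occupation", "lawyer"),
]

def retrieve(context: str, kg: list[Triple], k: int = 2) -> list[Triple]:
    """Return up to k triples whose subject is mentioned in the context.

    A real system would use entity linking and a learned relevance scorer;
    plain substring matching is only a placeholder.
    """
    hits = [t for t in kg if t.subject.lower() in context.lower()]
    return hits[:k]

def serialize(triples: list[Triple]) -> str:
    """Linearize retrieved triples so they can condition an autoregressive LM.

    The paper integrates retrieved relations through a relational memory
    component; prepending them as text is a simpler stand-in for illustration.
    """
    return " ".join(f"<{t.subject}, {t.relation}, {t.obj}>" for t in triples)

context = "Barack Obama married"
memory = retrieve(context, KG)
augmented_input = serialize(memory) + " " + context
print(augmented_input)
# <Barack Obama, spouse, Michelle Obama> <Barack Obama, born_in, Honolulu> Barack Obama married
```

In this toy setup the augmented input would then be fed to any autoregressive language model; how the retrieved relations are actually consumed (as a separate memory rather than plain prefix text) is specific to the paper.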

Related research

08/21/2019 · Latent Relation Language Models
In this paper, we propose Latent Relation Language Models (LRLMs), a cla...

06/17/2019 · Barack's Wife Hillary: Using Knowledge-Graphs for Fact-Aware Language Modeling
Modeling human language requires the ability to not only generate fluent...

09/08/2021 · Memory and Knowledge Augmented Language Models for Inferring Salience in Long-Form Stories
Measuring event salience is essential in the understanding of stories. T...

07/19/2023 · Efficient Guided Generation for Large Language Models
In this article we describe an efficient approach to guiding language mo...

08/01/2016 · A Neural Knowledge Language Model
Current language models have a significant limitation in the ability to ...

09/20/2023 · Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering
Despite their competitive performance on knowledge-intensive tasks, larg...

08/26/2023 · Planning with Logical Graph-based Language Model for Instruction Generation
Despite the superior performance of large language models to generate na...
