Training Language Models with Memory Augmentation

05/25/2022
by Zexuan Zhong et al.

Recent work has improved language models remarkably by equipping them with a non-parametric memory component. However, most existing approaches only introduce memories at test time, or represent them with a separately trained encoder, resulting in sub-optimal training of the language model. In this work, we present TRIME, a novel yet simple training approach designed for training language models with memory augmentation. Our approach uses a training objective that directly takes in-batch examples as accessible memory. We also present new methods for memory construction and data batching, which adapt the model to different sets of memories at test time: local, long-term, and external memory. We evaluate our approach on multiple language modeling and machine translation benchmarks. We find that simply replacing the vanilla language modeling objective with ours greatly reduces perplexity, without modifying the model architecture or incorporating extra context (e.g., 18.70 → 17.76 on WikiText-103). We further augment language models with long-range contexts and external knowledge and demonstrate significant gains over previous memory-augmented approaches.
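To make the core idea concrete, here is a minimal sketch of a memory-augmented training objective in the spirit described above: next-token scores come both from the output token embeddings (the parametric path) and from in-batch context representations whose following token matches the target (the memory path), normalized jointly. This is an illustrative NumPy sketch under assumed shapes and names (`trime_loss`, `mem_keys`, `mem_values` are our own labels), not the authors' exact implementation.

```python
import numpy as np

def trime_loss(hidden, targets, vocab_emb, mem_keys, mem_values, tau=1.0):
    """Sketch of a memory-augmented LM objective.

    hidden:     (B, d) contextual representations at each prediction step
    targets:    (B,)   gold next-token ids
    vocab_emb:  (V, d) output token embeddings (parametric path)
    mem_keys:   (M, d) representations of in-batch memory contexts
    mem_values: (M,)   the token that followed each memory context
    """
    # Similarity scores against token embeddings and against memories.
    vocab_logits = hidden @ vocab_emb.T / tau          # (B, V)
    mem_logits = hidden @ mem_keys.T / tau             # (B, M)

    # Normalizer runs over BOTH vocabulary entries and memory slots.
    all_logits = np.concatenate([vocab_logits, mem_logits], axis=-1)
    m = all_logits.max(axis=-1, keepdims=True)
    log_z = m.squeeze(-1) + np.log(np.exp(all_logits - m).sum(axis=-1))

    # Positive mass: the gold token's embedding score plus every memory
    # whose next token equals the gold token.
    B = hidden.shape[0]
    pos_vocab = vocab_logits[np.arange(B), targets]
    match = mem_values[None, :] == targets[:, None]     # (B, M)
    pos_mem = np.where(match, mem_logits, -np.inf)
    pos_all = np.concatenate([pos_vocab[:, None], pos_mem], axis=-1)
    m2 = pos_all.max(axis=-1, keepdims=True)
    log_pos = m2.squeeze(-1) + np.log(np.exp(pos_all - m2).sum(axis=-1))

    # Negative log-likelihood of the gold token under the joint softmax.
    return float((log_z - log_pos).mean())
```

When the batch contains no matching memories, the objective reduces (up to the memory slots in the normalizer) to the standard cross-entropy loss, which is why the approach can drop in for the vanilla language modeling objective without architectural changes.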


