TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models

04/29/2022
by Joel Jang, et al.

Language Models (LMs) become outdated as the world changes: they often fail at tasks that require recent factual information which was absent from, or different in, their training data, a phenomenon called temporal misalignment. This problem is especially challenging because the research community still lacks a coherent dataset for assessing how well LMs adapt to frequently updated knowledge corpora such as Wikipedia. To this end, we introduce TemporalWiki, a lifelong benchmark for ever-evolving LMs that utilizes the difference between consecutive snapshots of English Wikipedia and English Wikidata for training and evaluation, respectively. The benchmark thus allows researchers to periodically track an LM's ability to retain previous knowledge and acquire updated or new knowledge at each point in time. We also find that training an LM on the diff data through continual learning methods achieves similar or better perplexity than training on the entire snapshot in our benchmark, with 12 times less computational cost, which verifies that factual knowledge in LMs can be safely updated with minimal training data via continual learning. The dataset and the code are available at https://github.com/joeljang/temporalwiki.
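To illustrate the diff-based construction described in the abstract, the sketch below compares two consecutive snapshots and keeps only the articles that are new or whose text changed between them. This is a minimal sketch, not the released TemporalWiki pipeline: the snapshot representation (a mapping from article title to text) and the function name build_diff_set are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): build a "diff set" from two
# consecutive Wikipedia snapshots, keeping only articles that are new or whose
# text changed. Training on this diff set rather than the full later snapshot
# is the idea the abstract evaluates with continual learning methods.

def build_diff_set(old_snapshot: dict[str, str], new_snapshot: dict[str, str]) -> dict[str, str]:
    """Return the articles that are new or updated in the later snapshot."""
    diff = {}
    for title, text in new_snapshot.items():
        if title not in old_snapshot:
            diff[title] = text          # newly created article
        elif old_snapshot[title] != text:
            diff[title] = text          # existing article whose content changed
    return diff


if __name__ == "__main__":
    # Toy snapshots standing in for two monthly Wikipedia dumps.
    old = {"TemporalWiki": "A benchmark ...", "Seoul": "Capital of South Korea."}
    new = {
        "TemporalWiki": "A lifelong benchmark ...",   # updated article
        "Seoul": "Capital of South Korea.",           # unchanged article
        "Continual learning": "Learning over time.",  # new article
    }
    diff = build_diff_set(old, new)
    print(sorted(diff))  # ['Continual learning', 'TemporalWiki']
```

Under this assumption, the diff set is far smaller than the full snapshot, which is consistent with the abstract's observation that updating on the diff data costs roughly 12 times less compute.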

Related research

NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks (06/18/2022)
The goal of continual learning (CL) is to learn different tasks over tim...

Exploring Continual Learning for Code Generation Models (07/05/2023)
Large-scale code generation models such as Codex and CodeT5 have achieve...

Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data (08/20/2021)
Continual learning is the problem of learning and retaining knowledge th...

Towards Continual Knowledge Learning of Language Models (10/07/2021)
Large Language Models (LMs) are known to encode world knowledge in their...

KoLA: Carefully Benchmarking World Knowledge of Large Language Models (06/15/2023)
The unprecedented performance of large language models (LLMs) necessitat...

Is It Worth the (Environmental) Cost? Limited Evidence for the Benefits of Diachronic Continuous Training (10/13/2022)
Language is constantly changing and evolving, leaving language models to...

CTP: Towards Vision-Language Continual Pretraining via Compatible Momentum Contrast and Topology Preservation (08/14/2023)
Vision-Language Pretraining (VLP) has shown impressive results on divers...
