Semiparametric Language Models Are Scalable Continual Learners

03/02/2023
by   Guangyue Peng, et al.

Semiparametric language models (LMs) have shown promise in continually learning from new text data by combining a parameterized neural LM with a growable non-parametric memory that memorizes new content. However, conventional semiparametric LMs eventually become prohibitively expensive to compute and store when applied to continual learning over streaming data, because the non-parametric memory grows linearly with the amount of data they learn from over time. To address this scalability issue, we present a simple and intuitive approach called Selective Memorization (SeMem), which memorizes only the difficult samples that the model is likely to struggle with. We demonstrate that SeMem improves the scalability of semiparametric LMs for continual learning over streaming data in two ways: (1) data-wise scalability: as the model becomes stronger through continual learning, it encounters fewer difficult cases that need to be memorized, so the growth of the non-parametric memory slows down over time rather than tracking the size of the training data linearly; (2) model-wise scalability: SeMem allows a larger model to memorize fewer samples than its smaller counterpart, because a larger model more rarely encounters incomprehensible cases, yielding a non-parametric memory whose size does not scale linearly with model size. We conduct extensive experiments on language modeling and downstream tasks, showing that SeMem enables a semiparametric LM to be a scalable continual learner with little forgetting.
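Only the abstract is shown here, but the core mechanism it describes (write to the non-parametric memory only when the model struggles) can be sketched. Below is a minimal, hypothetical Python sketch in the style of a kNN-LM datastore of (hidden state, next token) pairs; the `lm` interface, the `datastore` list, the threshold `tau`, and the use of sequence perplexity as the difficulty criterion are all illustrative assumptions, not the paper's exact formulation.

```python
import torch

def selective_memorize(lm, datastore, token_ids, tau=20.0):
    """Add a sample to the non-parametric memory only if the LM
    finds it difficult (sequence perplexity above a threshold)."""
    with torch.no_grad():
        # Assumed interface: lm returns next-token logits (T, vocab)
        # and the hidden states (T, d) used as retrieval keys.
        logits, hiddens = lm(token_ids)
        log_probs = torch.log_softmax(logits[:-1], dim=-1)
        targets = token_ids[1:]
        token_nll = -log_probs[torch.arange(len(targets)), targets]
        ppl = token_nll.mean().exp().item()

    if ppl > tau:
        # Difficult sample: store (key, value) pairs for kNN retrieval.
        for h, y in zip(hiddens[:-1], targets):
            datastore.append((h.cpu(), y.item()))
    return ppl
```

Under this reading, both scalability claims follow from the gate: as the LM improves through continual learning (or as a larger LM is used), fewer sequences exceed the difficulty threshold, so the datastore grows sublinearly in the data stream and does not scale up with model size.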


Related research

12/26/2021
Generative Kernel Continual Learning
Kernel continual learning by <cit.> has recently emerged as a strong con...

08/23/2021
StreaMRAK a Streaming Multi-Resolution Adaptive Kernel Algorithm
Kernel ridge regression (KRR) is a popular scheme for non-linear non-par...

07/12/2021
Kernel Continual Learning
This paper introduces kernel continual learning, a simple but effective ...

09/24/2019
dAUTOMAP: decomposing AUTOMAP to achieve scalability and enhance performance
AUTOMAP is a promising generalized reconstruction approach, however, it ...

05/24/2022
Continual-T0: Progressively Instructing 50+ Tasks to Language Models Without Forgetting
Recent work on large language models relies on the intuition that most n...

11/09/2022
Continual learning autoencoder training for a particle-in-cell simulation via streaming
The upcoming exascale era will provide a new generation of physics simul...

11/04/2016
Bayesian Non-parametric model to Target Gamification Notifications Using Big Data
I suggest an approach that helps the online marketers to target their Ga...
