On the Usage of Continual Learning for Out-of-Distribution Generalization in Pre-trained Language Models of Code

05/06/2023
by   Martin Weyssow, et al.
Pre-trained language models (PLMs) have become a prevalent technique in deep learning for code, following a two-stage pre-training and fine-tuning procedure to acquire general knowledge about code and to specialize in a variety of downstream tasks. However, the dynamic nature of software codebases poses a challenge to the effectiveness and robustness of PLMs. In particular, realistic scenarios can lead to significant differences between the distributions of the pre-training and test data, i.e., a distribution shift, resulting in degraded PLM performance on downstream tasks. In this paper, we stress the need to adapt PLMs of code to software data whose distribution changes over time, a crucial problem that has been overlooked in previous work. The motivation of this work is to consider the PLM in a non-stationary environment, where the fine-tuning data evolves over time according to a software evolution scenario. Specifically, we design a scenario in which the model must learn from a stream of programs containing new, unseen APIs over time. We study two widely used PLM architectures, a GPT2 decoder and a RoBERTa encoder, on two downstream tasks: API call and API usage prediction. We demonstrate that the fine-tuning technique most commonly used in prior work is not robust enough to handle the dynamic nature of APIs, leading to the loss of previously acquired knowledge, i.e., catastrophic forgetting. To address this issue, we implement five continual learning approaches, including replay-based and regularization-based methods. Our findings show that these straightforward methods effectively mitigate catastrophic forgetting in PLMs across both downstream tasks while achieving comparable or superior performance.
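
As a rough illustration of the replay-based and regularization-based strategies mentioned in the abstract, the sketch below combines a small replay buffer with an EWC-style quadratic penalty while fine-tuning a GPT-2 causal language model on a stream of code. This is not the paper's implementation: the model name ("gpt2"), the hyperparameters, and the assumption that each dataset in the stream yields raw code strings are all illustrative, and the estimation of the Fisher importance weights after each task is omitted for brevity.

```python
# Minimal sketch: continual fine-tuning of a GPT-2 PLM on a stream of code
# with (1) a replay buffer and (2) an EWC-style regularization penalty.
# All names and hyperparameters are illustrative assumptions, not the paper's setup.

import random
import torch
from torch.utils.data import DataLoader
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

BUFFER_SIZE, REPLAY_RATIO, EWC_LAMBDA = 1000, 0.2, 0.4
replay_buffer = []            # small sample of past programs (replay-based CL)
fisher, old_params = {}, {}   # importance weights from past tasks (regularization-based CL);
                              # refreshing them after each task is omitted in this sketch.

def ewc_penalty(model):
    """Quadratic penalty discouraging drift from parameters important to past tasks."""
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return penalty

def train_on_stream(stream_of_datasets, epochs=1):
    """Fine-tune sequentially on each dataset of the software-evolution stream."""
    for dataset in stream_of_datasets:                 # each step introduces new, unseen APIs
        loader = DataLoader(dataset, batch_size=8, shuffle=True)
        for _ in range(epochs):
            for batch in loader:                       # batch: list of raw code strings
                # Mix replayed examples from earlier steps into the current batch.
                k = int(len(batch) * REPLAY_RATIO)
                if replay_buffer and k > 0:
                    batch = list(batch) + random.sample(replay_buffer, min(k, len(replay_buffer)))
                enc = tokenizer(batch, return_tensors="pt", padding=True,
                                truncation=True, max_length=256).to(device)
                labels = enc["input_ids"].masked_fill(enc["attention_mask"] == 0, -100)
                out = model(**enc, labels=labels)
                loss = out.loss + EWC_LAMBDA * ewc_penalty(model)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        # Keep a random sample of this step's programs for future replay.
        replay_buffer.extend(random.sample(list(dataset), min(100, len(dataset))))
        del replay_buffer[:max(0, len(replay_buffer) - BUFFER_SIZE)]
```

In a full regularization-based setup, the `fisher` and `old_params` dictionaries would be recomputed after each task (e.g., from squared gradients on a held-out sample of that task's data); the sketch leaves this step out to stay short.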
