Better Language Models of Code through Self-Improvement

04/02/2023
by Hung Quoc To, et al.

Pre-trained language models for code (PLMCs) have gained attention in recent research. These models are pre-trained on large-scale datasets using multi-modal objectives. However, fine-tuning them requires extensive supervision and is limited by the size of the available dataset. We address this issue by proposing a simple data augmentation framework. Our framework exploits knowledge gained during the pre-training and fine-tuning stages to generate pseudo data, which is then used as training data for the next step. We incorporate this framework into state-of-the-art language models such as CodeT5, CodeBERT, and UniXcoder. Results show that our framework significantly improves PLMCs' performance on code-related sequence generation tasks, such as code summarization and code generation, on the CodeXGLUE benchmark.
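The abstract describes a self-training loop: a fine-tuned model labels its own training inputs, and the resulting pseudo pairs become the training data for the next fine-tuning step. Below is a minimal sketch of the pseudo-data generation step, assuming a Hugging Face seq2seq checkpoint (Salesforce/codet5-base is an illustrative choice) and beam-search decoding; it is not the authors' released implementation, and the fine-tuning pass itself is omitted.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative checkpoint; the paper applies its framework to CodeT5,
# CodeBERT, and UniXcoder, but any seq2seq PLMC fits this sketch.
CHECKPOINT = "Salesforce/codet5-base"

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSeq2SeqLM.from_pretrained(CHECKPOINT)
model.eval()

def generate_pseudo_targets(sources, max_length=64, num_beams=4):
    """Label the training sources with the model's own beam-search outputs."""
    pseudo = []
    for src in sources:
        inputs = tokenizer(src, return_tensors="pt",
                           truncation=True, max_length=512)
        with torch.no_grad():
            output_ids = model.generate(**inputs,
                                        max_length=max_length,
                                        num_beams=num_beams)
        pseudo.append(tokenizer.decode(output_ids[0],
                                       skip_special_tokens=True))
    return pseudo

# After a normal fine-tuning pass (omitted), the (source, pseudo-target)
# pairs become the training data for the next fine-tuning step.
train_sources = ["def add(a, b):\n    return a + b"]
pseudo_targets = generate_pseudo_targets(train_sources)
print(list(zip(train_sources, pseudo_targets)))
```

In this reading, each round of pseudo-labeling followed by re-training distills the model's own pre-training and fine-tuning knowledge back into its training set, which is what lets the framework improve generation quality without additional human supervision.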
