InferCode: Self-Supervised Learning of Code Representations by Predicting Subtrees

12/13/2020
by Nghi D. Q. Bui, et al.

Building deep learning models on source code has found many successful software engineering applications, such as code search, code comment generation, bug detection, and code migration. Current learning techniques, however, have a major drawback: these models are mostly trained on datasets labeled for particular downstream tasks, and the resulting code representations may not be suitable for other tasks. While some techniques produce representations from unlabeled code, they remain far from satisfactory when applied to downstream tasks. This paper proposes InferCode to overcome this limitation by adapting the self-supervised learning mechanism to build source code models. The key novelty lies in training code representations by predicting automatically identified subtrees from the context of the ASTs. InferCode treats subtrees in ASTs as the labels for training code representations, without any human labeling effort or the overhead of expensive graph construction, and the trained representations are no longer tied to any specific downstream task or code unit. We trained an InferCode model instance using a Tree-based CNN as the encoder on a large set of Java code. The pre-trained model can be applied directly to unsupervised downstream tasks such as code clustering, code clone detection, and cross-language code search, or reused under a transfer learning scheme to continue training the model weights for supervised tasks such as code classification and method name prediction. Compared to previous code learning techniques applied to the same downstream tasks, such as Code2Vec, Code2Seq, and ASTNN, our pre-trained InferCode model achieves higher performance by a significant margin on most tasks, including those involving different programming languages.
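To make the training objective concrete, here is a minimal PyTorch sketch of the idea described above: encode an AST into a single code vector, then predict each subtree that occurs in that AST from the vector, using subtree identifiers as free labels. This is an illustration under stated assumptions, not the paper's implementation; ToyCodeEncoder (a mean-pooling stand-in for the actual Tree-based CNN encoder), InferCodeObjective, and the toy tensors in the usage lines are all hypothetical.

```python
# Minimal sketch of InferCode-style self-supervised pre-training.
# Assumptions: a toy mean-pooling encoder replaces the paper's Tree-based CNN,
# and subtrees are represented only by indices into a fixed subtree vocabulary.
import torch
import torch.nn as nn

class ToyCodeEncoder(nn.Module):
    """Stand-in for the TBCNN encoder: embeds AST node types and mean-pools."""
    def __init__(self, num_node_types: int, dim: int = 128):
        super().__init__()
        self.node_embed = nn.Embedding(num_node_types, dim)

    def forward(self, node_type_ids: torch.Tensor) -> torch.Tensor:
        # node_type_ids: (num_nodes,) -> one code vector of shape (dim,)
        return self.node_embed(node_type_ids).mean(dim=0)

class InferCodeObjective(nn.Module):
    """Predict each subtree occurring in the AST from the code vector."""
    def __init__(self, encoder: nn.Module, num_subtrees: int, dim: int = 128):
        super().__init__()
        self.encoder = encoder
        self.subtree_classifier = nn.Linear(dim, num_subtrees)

    def forward(self, node_type_ids, subtree_ids):
        code_vec = self.encoder(node_type_ids)      # (dim,)
        logits = self.subtree_classifier(code_vec)  # (num_subtrees,)
        # one cross-entropy term per subtree found in this AST
        logits = logits.unsqueeze(0).expand(len(subtree_ids), -1)
        return nn.functional.cross_entropy(logits, subtree_ids)

# Toy usage: one AST with 5 typed nodes, containing subtrees 3 and 7.
model = InferCodeObjective(ToyCodeEncoder(num_node_types=50), num_subtrees=100)
loss = model(torch.tensor([1, 4, 4, 9, 2]), torch.tensor([3, 7]))
loss.backward()  # after pre-training, the encoder's outputs serve as code embeddings
```

Because the labels come from the ASTs themselves, no human annotation is needed; after pre-training, the classifier head is discarded and only the encoder is kept, either frozen for unsupervised tasks or fine-tuned for supervised ones.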


