Improve Language Modelling for Code Completion through Statement Level Language Model based on Statement Embedding Generated by BiLSTM

09/25/2019
by Yixiao Yang, et al.

Language models such as RNNs, LSTMs, and their variants have been widely used as generative models in natural language processing. In the last few years, the state-of-the-art approach to code completion has been to treat source code as natural language: parse the code into a token sequence and train a language model such as an LSTM on that sequence to obtain a generative model. However, for source code with hundreds of statements, traditional LSTM models and attention-based LSTM models fail to capture the long-term dependencies in the code. In this paper, we propose a novel statement-level language model (SLM) which uses a BiLSTM to generate an embedding for each statement. A standard LSTM is adopted in SLM to iterate over and accumulate the statement embeddings of the context, which helps predict the next code token. A statement-level attention mechanism is also adopted in the model. SLM is aimed at token-level code completion. Experiments on intra-project and cross-project datasets indicate that the proposed statement-level language model with attention outperforms all other state-of-the-art models on the token-level code completion task.
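To make the architecture concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a BiLSTM encodes the tokens of each statement into a statement embedding, a standard LSTM accumulates those embeddings over the context, and statement-level attention weights the accumulated states before predicting the next token. All names and dimensions here (StatementLevelLM, hidden_dim, and so on) are illustrative assumptions, not the authors' released implementation.

    # Minimal sketch of a statement-level language model (SLM), assuming PyTorch.
    import torch
    import torch.nn as nn

    class StatementLevelLM(nn.Module):
        def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
            super().__init__()
            self.token_embed = nn.Embedding(vocab_size, embed_dim)
            # BiLSTM encodes the tokens of one statement into a fixed-size
            # statement embedding (final forward and backward states concatenated).
            self.stmt_encoder = nn.LSTM(embed_dim, hidden_dim,
                                        bidirectional=True, batch_first=True)
            # Standard LSTM iterates over and accumulates the statement
            # embeddings of the context.
            self.context_lstm = nn.LSTM(2 * hidden_dim, hidden_dim,
                                        batch_first=True)
            # Statement-level attention over the context LSTM outputs.
            self.attn = nn.Linear(hidden_dim, 1)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def encode_statement(self, stmt_tokens):
            # stmt_tokens: (batch, stmt_len) token ids of a single statement
            emb = self.token_embed(stmt_tokens)
            _, (h_n, _) = self.stmt_encoder(emb)
            # h_n: (2, batch, hidden_dim); concatenate the two directions
            return torch.cat([h_n[0], h_n[1]], dim=-1)  # (batch, 2*hidden_dim)

        def forward(self, context_statements):
            # context_statements: list of (batch, stmt_len) tensors,
            # one per preceding statement in the context
            stmt_embs = torch.stack(
                [self.encode_statement(s) for s in context_statements], dim=1
            )  # (batch, num_stmts, 2*hidden_dim)
            ctx, _ = self.context_lstm(stmt_embs)  # (batch, num_stmts, hidden_dim)
            # Attention weights over the statements in the context.
            weights = torch.softmax(self.attn(ctx).squeeze(-1), dim=1)
            summary = (weights.unsqueeze(-1) * ctx).sum(dim=1)
            return self.out(summary)  # logits over the next token

The sketch returns next-token logits from the statement-level summary alone; the paper's full model is evaluated on token-level completion and may combine this with a token-level decoder.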

Related research

03/18/2020  Improving the Robustness to Data Inconsistency between Training and Testing for Code Completion by Hierarchical Language Model
11/27/2017  Code Completion with Neural Attention and Pointer Networks
04/10/2020  Sequence Model Design for Code Completion in the Modern IDE
11/18/2019  Combining Program Analysis and Statistical Language Model for Code Statement Completion
08/01/2023  CodeBPE: Investigating Subtokenization Options for Large Language Model Pretraining on Source Code
06/09/2021  Energy-Based Models for Code Generation under Compilability Constraints
04/14/2020  Code Completion using Neural Attention and Byte Pair Encoding
