Sequence Model Design for Code Completion in the Modern IDE

04/10/2020
by Gareth Ari Aye, et al.

Code completion plays a prominent role in modern integrated development environments (IDEs). Machine learning has become ubiquitous in analogous natural-language writing and search software, surfacing more relevant autocompletions and search suggestions in fewer keystrokes. Prior research has reported training high-accuracy deep neural networks for modeling source code, but little attention has been given to the practical constraints imposed by interactive developer tools. In particular, neural language models for source code, like the one described in "Maybe Deep Neural Networks are the Best Choice for Modeling Source Code", are framed around code completion but report only the accuracy of next-token prediction. For a language model (LM) to work well within a real-world code completion system, it must also always make suggestions that produce valid code that typechecks, to support code completion's role in correctness checking; return instantaneous results, to help programmers code more efficiently in fewer keystrokes; and be small enough to fit comfortably on disk and in memory on developer workstations, since virtually all modern IDEs run locally and support offline usage. To meet these additional requirements, we propose a novel design for predicting the top-k next tokens that combines the ability of static analysis to enumerate all valid keywords and in-scope identifiers with the ability of a language model to place a probability distribution over them. Our model mixes a character-level input representation with token-level output, which lets it represent out-of-vocabulary (OOV) tokens meaningfully while minimizing prediction latency; OOV tokens can be predicted by detecting the local repetition common in software. This design achieves state-of-the-art accuracy in source code modeling and fits the constraints imposed by real-world code completion implementations in modern IDEs.
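The filter-then-rank idea at the heart of this design lends itself to a compact sketch. The Python below is a minimal, hypothetical illustration, not the authors' implementation: it assumes a `static_analyzer` object exposing a `valid_next_tokens` method and an `lm` object exposing a per-token `score` method, both invented here for demonstration. Static analysis supplies the set of tokens that are valid at the cursor, so every suggestion typechecks by construction, and the language model only has to rank that small candidate set.

```python
import math
from typing import List, Tuple


def complete(lm, static_analyzer, context_tokens: List[str],
             k: int = 5) -> List[Tuple[str, float]]:
    """Hypothetical filter-then-rank completion sketch.

    Static analysis guarantees validity; the LM supplies the ranking.
    """
    # 1. Static analysis enumerates every keyword and in-scope identifier
    #    valid at the cursor, so any suggestion produces code that typechecks.
    candidates = static_analyzer.valid_next_tokens(context_tokens)

    # 2. The LM scores only the valid candidates (log-probabilities). A
    #    character-level input representation would let a real model score
    #    identifiers it has never seen rather than collapsing them to <UNK>.
    scores = {tok: lm.score(context_tokens, tok) for tok in candidates}

    # 3. Software repeats nearby names heavily, so boost candidates that
    #    already occur in the recent context. This constant boost is an
    #    illustrative stand-in for the paper's OOV-via-repetition mechanism.
    recent = set(context_tokens[-100:])
    for tok in candidates:
        if tok in recent:
            scores[tok] += math.log(2.0)

    # 4. Return the top-k suggestions by score.
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]


if __name__ == "__main__":
    # Toy stubs so the sketch runs end to end.
    class ToyAnalyzer:
        def valid_next_tokens(self, ctx):
            return ["foo", "bar", "if", "return"]

    class ToyLM:
        def score(self, ctx, tok):
            return -float(len(tok))  # shorter tokens rank higher

    print(complete(ToyLM(), ToyAnalyzer(), ["def", "foo", "(", ")", ":"], k=3))
```

One plausible reading of the latency claim follows directly from this structure: because the LM ranks only candidates that static analysis has already validated, per-keystroke work scales with the small candidate set rather than the full vocabulary, which is what makes instantaneous, on-device prediction feasible.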


Related research

09/25/2019
Improve Language Modelling for Code Completion through Statement Level Language Model based on Statement Embedding Generated by BiLSTM
Language models such as RNN, LSTM or other variants have been widely use...

04/28/2020
Fast and Memory-Efficient Neural Code Completion
Code completion is one of the most widely used features of modern integr...

03/13/2019
Maybe Deep Neural Networks are the Best Choice for Modeling Source Code
Statistical language modeling techniques have successfully been applied ...

02/14/2022
CodeFill: Multi-token Code Completion by Jointly Learning from Structure and Naming Sequences
Code completion is an essential feature of IDEs, yet current autocomplet...

11/09/2020
Learning Autocompletion from Real-World Datasets
Code completion is a popular software development tool integrated into a...

07/22/2020
DeepClone: Modeling Clones to Generate Code Predictions
During software development, programmers often tend to reuse the code fo...

11/18/2019
Combining Program Analysis and Statistical Language Model for Code Statement Completion
Automatic code completion helps improve developers' productivity in thei...
