
Towards Full-line Code Completion with Neural Language Models

by Wenhan Wang, et al.

A code completion system suggests future code elements to developers given a partially complete code snippet. Code completion is one of the most useful features in Integrated Development Environments (IDEs). Currently, most code completion techniques predict a single token at a time. In this paper, we take a further step and discuss the possibility of directly completing a whole line of code instead of a single token. We believe that suggesting longer code sequences can further improve developer efficiency. Neural language models have recently been adopted as a preferred approach for code completion, and we believe these models can still be applied to full-line code completion with a few improvements. We conduct our experiments on two real-world Python corpora and evaluate existing neural models based on source code tokens or syntactical actions. The results show that neural language models can achieve acceptable results on our tasks, with significant room for improvement.
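The task described above can be framed as repeated next-token prediction that stops at an end-of-line marker. The sketch below illustrates this decoding loop with a toy bigram frequency model standing in for a neural language model; all names (`train_bigram`, `complete_line`, the `<EOL>` token) are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch: full-line completion as greedy next-token
# decoding until an end-of-line token. A toy bigram counter replaces
# the neural language model; the stopping criterion is the key idea.
from collections import Counter, defaultdict

def train_bigram(token_lines):
    """Count next-token frequencies from tokenized lines of code."""
    counts = defaultdict(Counter)
    for line in token_lines:
        toks = line + ["<EOL>"]  # append an explicit end-of-line token
        for prev, nxt in zip(toks, toks[1:]):
            counts[prev][nxt] += 1
    return counts

def complete_line(counts, prefix, max_len=20):
    """Greedily extend a partial line until <EOL> or max_len tokens."""
    out = list(prefix)
    while len(out) < max_len:
        options = counts.get(out[-1])
        if not options:
            break  # unseen context: stop early
        nxt = options.most_common(1)[0][0]
        if nxt == "<EOL>":
            break  # the model predicts the line is finished
        out.append(nxt)
    return out

# Tiny tokenized "corpus" of Python lines for demonstration.
corpus = [
    ["for", "i", "in", "range", "(", "n", ")", ":"],
    ["for", "x", "in", "range", "(", "10", ")", ":"],
]
model = train_bigram(corpus)
print(complete_line(model, ["for", "i", "in"]))
```

A neural model would replace the frequency lookup with a learned probability distribution (and typically beam search instead of greedy decoding), but the line-level stopping condition is the same.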

