On the Importance of Building High-quality Training Datasets for Neural Code Search

02/14/2022
by Zhensu Sun, et al.

The performance of neural code search is significantly influenced by the quality of the training data from which the neural models are derived. A large corpus of high-quality query-code pairs is required to establish a precise mapping from natural language to programming language. Due to limited availability, most widely used code search datasets are built with compromises, such as using code comments in place of queries. Our empirical study on a well-known code search dataset reveals that over one-third of its queries contain noise that makes them deviate from natural user queries. Models trained on noisy data suffer severe performance degradation when applied in real-world scenarios. Improving dataset quality by making the queries of its samples semantically identical to real user queries is therefore critical for the practical usability of neural code search. In this paper, we propose a data cleaning framework consisting of two subsequent filters: a rule-based syntactic filter and a model-based semantic filter. This is the first framework that applies semantic query cleaning to code search datasets. Experimentally, we evaluated the effectiveness of our framework on two widely used code search models and three manually annotated code retrieval benchmarks. Training the popular DeepCS model with the filtered dataset from our framework improves its performance by 19.2% on average across the three validation benchmarks.
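To make the two-stage design concrete, the sketch below chains a rule-based syntactic filter with a model-based semantic filter. The specific rules (embedded URLs, HTML tags, non-ASCII text, very short comments) and the `scorer`/`threshold` interface are illustrative assumptions for this sketch, not the paper's actual rule set or model.

```python
import re

# Hypothetical rule-based syntactic filter: discards query/code pairs whose
# "query" (e.g., a code comment) clearly deviates from a natural user query.
# These rules are illustrative examples, not the paper's exact rule set.
SYNTACTIC_RULES = [
    lambda q: "http://" in q or "https://" in q,   # embedded URLs
    lambda q: bool(re.search(r"<[^>]+>", q)),      # leftover HTML tags
    lambda q: bool(re.search(r"[^\x00-\x7F]", q)), # non-English characters
    lambda q: len(q.split()) < 3,                  # too short to be a query
]

def syntactic_filter(pairs):
    """Keep only pairs whose query violates none of the syntactic rules."""
    return [(q, c) for q, c in pairs
            if not any(rule(q) for rule in SYNTACTIC_RULES)]

def semantic_filter(pairs, scorer, threshold=0.5):
    """Model-based semantic filter: `scorer` is any model that estimates how
    query-like a comment is (e.g., a classifier trained on real user
    queries); pairs scoring below `threshold` are dropped."""
    return [(q, c) for q, c in pairs if scorer(q) >= threshold]

# Usage: apply the two filters in sequence, mirroring the framework's
# two subsequent stages (hypothetical data and scorer):
# pairs = [("convert string to int", "int(x)"), ...]
# cleaned = semantic_filter(syntactic_filter(pairs), scorer=my_model.score)
```

Running the cheap syntactic pass before the model-based pass means most noisy samples are discarded before any model inference is spent on them.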


