Constructing Effective In-Context Demonstration for Code Intelligence Tasks: An Empirical Study
Pre-trained models of code have gained widespread popularity in many code intelligence tasks. Recently, with the scaling of model and corpus size, large language models have shown the ability to perform in-context learning. These models employ a task instruction and a few demonstration examples as the prompt to learn the semantics of the task and make predictions for test samples. This new learning paradigm is training-free and has shown impressive performance on various natural language processing and code intelligence tasks. However, the performance of in-context learning relies heavily on the quality of the demonstrations, and there has been no systematic investigation into how to construct good demonstrations for code-related tasks with in-context learning. In this paper, by analyzing the design space of in-context demonstrations, we empirically explore the impact of three key factors on the performance of in-context learning in code intelligence tasks: the selection of demonstration examples, the order of demonstration examples, and the number of demonstration examples. We conduct extensive experiments on three code intelligence tasks: bug fixing, code summarization, and program synthesis. Our experimental results show that all three factors dramatically impact the performance of in-context learning in code intelligence tasks. Additionally, we summarize our findings and provide takeaway suggestions for constructing effective demonstrations from these three perspectives. We show that carefully constructed demonstrations can lead to substantial improvements over simple demonstrations and previous fine-tuned state-of-the-art models, e.g., improving EM, BLEU-4, and EM by at least 11.91 and 36.88, respectively.
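To make the three design factors concrete, the following is a minimal sketch (not the authors' implementation) of how an in-context prompt might be assembled for a code task: selecting k demonstration examples, ordering them, and concatenating them with the task instruction and the test input. The lexical similarity measure, the prompt template, and the instruction text are illustrative assumptions, not the retrieval or ordering strategy studied in the paper.

```python
def similarity(a: str, b: str) -> float:
    """Toy lexical similarity (token overlap); a stand-in for a real retriever."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)


def build_prompt(test_input: str, pool: list, k: int = 4,
                 nearest_last: bool = True) -> str:
    """pool: (input, output) demonstration pairs; k: number of demonstrations."""
    # 1) Selection: keep the k examples most similar to the test input.
    ranked = sorted(pool, key=lambda ex: similarity(ex[0], test_input),
                    reverse=True)[:k]
    # 2) Ordering: e.g., place the most similar example closest to the query.
    if nearest_last:
        ranked = list(reversed(ranked))
    # 3) Assembly: instruction + demonstrations + test query.
    parts = ["Fix the bug in the given code."]  # illustrative task instruction
    for src, tgt in ranked:
        parts.append(f"Input: {src}\nOutput: {tgt}")
    parts.append(f"Input: {test_input}\nOutput:")
    return "\n\n".join(parts)
```

Varying the selection criterion, the value of nearest_last, and k in such a sketch corresponds to the three factors whose impact the paper measures empirically.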