Pop Quiz! Do Pre-trained Code Models Possess Knowledge of Correct API Names?

09/14/2023
by Terry Yue Zhuo, et al.

Recent breakthroughs in pre-trained code models, such as CodeBERT and Codex, have shown their superior performance in various downstream tasks. The correctness and unambiguity of API usage among these code models are crucial for achieving desirable program functionalities, requiring them to learn various API fully qualified names structurally and semantically. Recent studies reveal that even state-of-the-art pre-trained code models struggle with suggesting the correct APIs during code generation. However, the reasons for such poor API usage performance are barely investigated. To address this challenge, we propose using knowledge probing as a means of interpreting code models, which uses cloze-style tests to measure the knowledge stored in models. Our comprehensive study examines a code model's capability of understanding API fully qualified names from two different perspectives: API call and API import. Specifically, we reveal that current code models struggle with understanding API names, and that pre-training strategies significantly affect the quality of API name learning. We demonstrate that natural language context can assist code models in locating Python API names and in generalizing Python API name knowledge to unseen data. Our findings provide insights into the limitations and capabilities of current pre-trained code models, and suggest that incorporating API structure into the pre-training process can improve automated API usage and code representations. This work carries practical significance for advancing code intelligence and points out directions for future studies. All experiment results, data, and source code used in this work are available at <https://doi.org/10.5281/zenodo.7902072>.
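
For readers unfamiliar with cloze-style knowledge probing, the sketch below illustrates the general idea rather than the paper's exact setup: one segment of a fully qualified API name is masked, and a masked-language code model is asked to fill it in, once from the API call perspective and once from the API import perspective. The model checkpoint and probe strings are illustrative assumptions, not the study's data.

```python
# Minimal sketch of cloze-style API-name probing (illustrative only).
from transformers import pipeline

# CodeBERT is RoBERTa-based, so its mask token is "<mask>".
fill_mask = pipeline("fill-mask", model="microsoft/codebert-base")

# Perspective 1: API call -- mask the member of a fully qualified call.
call_probe = "import numpy as np\narr = np.<mask>([1, 2, 3])"

# Perspective 2: API import -- mask a module segment of an import statement.
import_probe = "from sklearn.<mask> import LogisticRegression"

for name, probe in [("API call", call_probe), ("API import", import_probe)]:
    predictions = fill_mask(probe, top_k=5)
    print(f"{name} probe: {probe!r}")
    for p in predictions:
        # A model that "knows" the API should rank the correct token
        # (e.g. "array" or "linear_model") near the top.
        print(f"  {p['token_str']!r}  (score={p['score']:.3f})")
```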
