Which Features are Learned by CodeBert: An Empirical Study of the BERT-based Source Code Representation Learning

01/20/2023
by Lan Zhang, et al.

Bidirectional Encoder Representations from Transformers (BERT) was proposed for natural language processing (NLP) and has shown promising results. Recently, researchers have applied BERT to source-code representation learning and reported encouraging results on several downstream tasks. In this paper, however, we show that current methods cannot effectively understand the logic of source code: the learned representations rely heavily on programmer-defined variable and function names. We design and implement a set of experiments to demonstrate our conjecture and provide insights for future work.
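To make the conjecture concrete, below is a minimal sketch (not the authors' experiment code) of how one might probe a pre-trained CodeBERT model's sensitivity to identifier renaming, assuming the public microsoft/codebert-base checkpoint on Hugging Face. If the representation captured program logic rather than surface names, two snippets that differ only in their identifiers should map to nearly identical embeddings.

```python
# Sketch: probe CodeBERT's sensitivity to identifier renaming.
# Assumes the public "microsoft/codebert-base" checkpoint; this is an
# illustration, not the experimental protocol from the paper.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")
model.eval()

def embed(code: str) -> torch.Tensor:
    """Return the [CLS] embedding of a code snippet."""
    inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0, :]  # first ([CLS]) token

# Same logic, different identifiers (hypothetical snippets): a logic-aware
# representation should place these two very close together.
original = "def binary_search(arr, target):\n    lo, hi = 0, len(arr) - 1"
renamed  = "def f(a, b):\n    x, y = 0, len(a) - 1"

sim = torch.nn.functional.cosine_similarity(embed(original), embed(renamed))
print(f"cosine similarity after renaming: {sim.item():.3f}")
```

Comparing this similarity against pairs that keep the identifiers but change the logic gives a simple sanity check of whether the embedding is driven by naming or by semantics.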


Related research

Empirical Study of Transformers for Source Code (10/15/2020)
Initially developed for natural language processing (NLP), Transformers ...

COMBO: Pre-Training Representations of Binary Code Using Contrastive Learning (10/11/2022)
Compiled software is delivered as executable binary code. Developers wri...

Pre-trained Contextual Embedding of Source Code (12/21/2019)
The source code of a program not only serves as a formal description of ...

Neural Code Completion with Anonymized Variable Names (10/23/2020)
Source code processing heavily relies on the methods widely used in natu...

Evaluating Semantic Representations of Source Code (10/11/2019)
Learned representations of source code enable various software developer...

MetaTPTrans: A Meta Learning Approach for Multilingual Code Representation Learning (06/13/2022)
Representation learning of source code is essential for applying machine...

Variable Name Recovery in Decompiled Binary Code using Constrained Masked Language Modeling (03/23/2021)
Decompilation is the procedure of transforming binary programs into a hi...
