What Do They Capture? – A Structural Analysis of Pre-Trained Language Models for Source Code

02/14/2022
by Yao Wan, et al.

Recently, many pre-trained language models for source code have been proposed to model the context of code and to serve as a basis for downstream code intelligence tasks such as code completion, code search, and code summarization. These models leverage masked pre-training and the Transformer architecture and have achieved promising results. However, little progress has been made toward interpreting existing pre-trained code models: it is not clear why they work or what feature correlations they capture. In this paper, we conduct a thorough structural analysis aiming to provide an interpretation of pre-trained language models for source code (e.g., CodeBERT and GraphCodeBERT) from three distinct perspectives: (1) attention analysis, (2) probing of word embeddings, and (3) syntax tree induction. This analysis reveals several insightful findings that may inspire future studies: (1) attention aligns strongly with the syntax structure of code; (2) pre-trained language models of code preserve the syntax structure of code in the intermediate representations of each Transformer layer; and (3) pre-trained models of code are able to induce syntax trees of code. These findings suggest that incorporating the syntax structure of code into the pre-training process may yield better code representations.
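As a concrete illustration of the attention-analysis perspective described above, the sketch below extracts per-layer, per-head self-attention from a CodeBERT checkpoint with the HuggingFace transformers library and measures how much attention mass falls on adjacent tokens as a simple locality proxy. This is a minimal sketch, not the authors' analysis code: the checkpoint name, the toy snippet, and the adjacency-based proxy (the paper aligns attention with the syntax tree, not mere token adjacency) are assumptions for illustration.

```python
# Minimal attention-analysis sketch (assumed setup, not the paper's code):
# extract self-attention from CodeBERT and report how much attention mass
# each layer places on immediately adjacent tokens.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "microsoft/codebert-base"  # assumed checkpoint; GraphCodeBERT would be "microsoft/graphcodebert-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_attentions=True)
model.eval()

# Toy code snippet used only for illustration.
code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
for layer_idx, layer_attn in enumerate(outputs.attentions):
    attn = layer_attn[0]                      # (num_heads, seq_len, seq_len)
    seq_len = attn.size(-1)

    # Mask selecting the immediate left/right neighbour of every token,
    # a crude stand-in for syntactic closeness.
    neighbour_mask = torch.zeros(seq_len, seq_len)
    idx = torch.arange(seq_len)
    neighbour_mask[idx[:-1], idx[1:]] = 1.0
    neighbour_mask[idx[1:], idx[:-1]] = 1.0

    # Fraction of each head's total attention that lands on neighbours.
    local_mass = (attn * neighbour_mask).sum(dim=(-1, -2)) / attn.sum(dim=(-1, -2))
    print(f"layer {layer_idx:2d}: mean neighbour attention mass {local_mass.mean().item():.3f}")
```

Replacing the adjacency mask with a mask derived from AST edges (e.g., produced by a parser such as tree-sitter) would move this sketch closer to the syntax-alignment measurement the abstract refers to.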


Related research

GraphCodeBERT: Pre-training Code Representations with Data Flow (09/17/2020)
Pre-trained models for programming language have achieved dramatic empir...

What does Transformer learn about source code? (07/18/2022)
In the field of source code processing, the transformer-based representa...

TreeBERT: A Tree-Based Pre-Trained Model for Programming Language (05/26/2021)
Source code can be parsed into the abstract syntax tree (AST) based on d...

Language Models are Drummers: Drum Composition with Natural Language Pre-Training (01/03/2023)
Automatic music generation with artificial intelligence typically requir...

On using distributed representations of source code for the detection of C security vulnerabilities (06/01/2021)
This paper presents an evaluation of the code representation model Code2...

Model-Agnostic Syntactical Information for Pre-Trained Programming Language Models (03/10/2023)
Pre-trained Programming Language Models (PPLMs) achieved many recent sta...

AstBERT: Enabling Language Model for Code Understanding with Abstract Syntax Tree (01/20/2022)
Using a pre-trained language model (i.e. BERT) to apprehend source codes...
