Extracting Meaningful Attention on Source Code: An Empirical Study of Developer and Neural Model Code Exploration

10/11/2022
by Matteo Paltenghi, et al.

The high effectiveness of neural models of code, such as OpenAI Codex and AlphaCode, suggests that these models have coding capabilities at least comparable to those of humans. However, previous work has used these models only for their raw completions, ignoring how the models' reasoning, in the form of attention weights, could serve other downstream tasks. Disregarding the attention weights means discarding a considerable portion of what those models compute when queried. To profit more from the knowledge embedded in these large pre-trained models, this work compares several approaches for post-processing these valuable attention weights to support code exploration. Specifically, we measure the extent to which the transformed attention signal of CodeGen, a large, publicly available pre-trained neural model, agrees with how developers look at and explore code when answering the same sense-making questions about it. At the core of our experimental evaluation, we collect, manually annotate, and open-source a novel eye-tracking dataset comprising 25 developers answering sense-making questions on code over 92 sessions. We empirically evaluate five attention-agnostic heuristics and ten attention-based post-processing approaches against this ground truth of developers exploring code, including the novel concept of follow-up attention, which exhibits the highest agreement. Beyond the dataset contribution and the empirical study, we also introduce a novel practical application of the attention signal of pre-trained models with completely analytical solutions, going beyond how neural models' attention mechanisms have traditionally been used.
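To make the idea of post-processing attention weights concrete, the sketch below extracts CodeGen's raw self-attention and aggregates it into a per-token saliency signal. This is a minimal illustration, not the paper's evaluated pipeline: the specific checkpoint name and the mean-over-layers-and-heads aggregation are assumptions chosen for simplicity, and the paper's follow-up attention concept is not implemented here.

```python
# Minimal sketch: turn CodeGen's raw attention weights into a per-token
# saliency signal. Aggregation strategy (mean over layers and heads,
# then total attention received per token) is an illustrative assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Salesforce/codegen-350M-mono"  # assumption: any CodeGen checkpoint works

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

code = "def mean(xs):\n    return sum(xs) / len(xs)\n"
inputs = tokenizer(code, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions is a tuple with one (batch, heads, seq, seq) tensor per layer.
stacked = torch.stack(outputs.attentions)   # (layers, 1, heads, seq, seq)
avg = stacked.mean(dim=(0, 2)).squeeze(0)   # (seq, seq): mean over layers and heads
saliency = avg.sum(dim=0)                   # total attention each token receives

# Show the five tokens the model attends to most.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, score in sorted(zip(tokens, saliency.tolist()), key=lambda p: -p[1])[:5]:
    print(f"{tok!r}: {score:.3f}")
```

A signal like this can then be compared against fixation data from an eye tracker, e.g. by mapping token-level saliency to source lines and correlating it with per-line fixation time.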

