Generation Probabilities Are Not Enough: Exploring the Effectiveness of Uncertainty Highlighting in AI-Powered Code Completions

02/14/2023
by Helena Vasconcelos, et al.

Large-scale generative models have enabled the development of AI-powered code completion tools to assist programmers in writing code. However, much like other AI-powered tools, AI-powered code completions are not always accurate, potentially introducing bugs or even security vulnerabilities into code if not properly detected and corrected by a human programmer. One technique that has been proposed and implemented to help programmers identify potential errors is to highlight uncertain tokens. However, there have been no empirical studies exploring the effectiveness of this technique, nor any investigating the different and not-yet-agreed-upon notions of uncertainty in the context of generative models. We explore whether conveying information about uncertainty enables programmers to more quickly and accurately produce code when collaborating with an AI-powered code completion tool, and if so, which measure of uncertainty best fits programmers' needs. Through a mixed-methods study with 30 programmers, we compare three conditions: presenting the AI system's code completion alone, highlighting the tokens with the lowest likelihood of being generated by the underlying generative model, and highlighting the tokens with the highest predicted likelihood of being edited by a programmer. We find that highlighting the tokens with the highest predicted likelihood of being edited leads to faster task completion and more targeted edits, and is subjectively preferred by study participants. In contrast, highlighting tokens according to their generation probability provides no benefit over the no-highlighting baseline. We further explore the design space of how to convey uncertainty in AI-powered code completion tools and find that programmers prefer highlights that are granular, informative, interpretable, and not overwhelming.
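To make the first highlighting condition concrete: per-token generation probabilities can be read directly off a causal language model's output distribution. The sketch below is a minimal illustration of that idea, not the authors' implementation; the model name ("gpt2" as a stand-in for a production code model) and the function names are assumptions introduced here for clarity.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is a placeholder; any causal LM with a tokenizer works the same way.
MODEL_NAME = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def token_generation_probabilities(prompt: str, completion: str):
    """Score each completion token with the probability the model assigns
    to it, conditioned on the prompt and all preceding tokens."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    completion_ids = tokenizer(completion, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, completion_ids], dim=-1)

    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

    # Logits at position i give the distribution over the token at i + 1.
    probs = torch.softmax(logits[0, :-1], dim=-1)
    start = prompt_ids.shape[-1]
    scored = []
    for pos in range(start, input_ids.shape[-1]):
        token_id = input_ids[0, pos].item()
        scored.append((tokenizer.decode([token_id]),
                       probs[pos - 1, token_id].item()))
    return scored


def lowest_likelihood_tokens(scored, k=3):
    """The k least likely tokens: the spans that would be highlighted
    under the generation-probability condition."""
    return sorted(scored, key=lambda t: t[1])[:k]
```

Calling token_generation_probabilities on a prompt/completion pair and passing the result to lowest_likelihood_tokens yields the spans a generation-probability highlighter would mark. The second condition, highlighting tokens by predicted likelihood of being edited, requires a separately trained model of human edit behavior and is not reproduced by this sketch.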


