Exploring the Robustness of Large Language Models for Solving Programming Problems

06/26/2023
by Atsushi Shirafuji, et al.

Using large language models (LLMs) for source code has recently gained attention. Transformer-based LLMs such as Codex and ChatGPT have proven highly capable of solving a wide range of programming problems. However, it remains unclear to what extent LLMs actually understand problem descriptions and generate programs accordingly, rather than merely retrieving source code for the most similar problem in their training data based on superficial cues. To explore this research question, we conduct experiments on the robustness of several popular LLMs capable of tackling code generation tasks, the CodeGen and GPT-3.5 series models, on introductory programming problems. Our experimental results show that CodeGen and Codex are sensitive to superficial modifications of problem descriptions, which significantly degrade their code generation performance. Furthermore, we observe that Codex relies on variable names: randomizing them decreases the solved rate significantly. In contrast, state-of-the-art (SOTA) models such as InstructGPT and ChatGPT show higher robustness to superficial modifications and an outstanding capability for solving programming problems. These findings highlight that slight modifications to the prompts given to LLMs can greatly affect code generation performance, and that careful prompt formatting is essential for high-quality code generation, even as SOTA models become more robust to perturbations.
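
The abstract's central observation is that small prompt perturbations, such as randomized variable names, can sharply reduce the solved rate. Below is a minimal sketch of one such perturbation, assuming a simple whole-word renaming scheme; the function name, variable list, and random-identifier format are illustrative assumptions, not the authors' exact procedure.

```python
import random
import re
import string

def randomize_variable_names(prompt: str, variables: list[str]) -> str:
    """Replace each listed variable in a problem description with a random
    identifier. Illustrative sketch only; the renaming scheme here is an
    assumption, not the paper's exact procedure."""
    for var in variables:
        new_name = "".join(random.choices(string.ascii_lowercase, k=8))
        # Substitute whole-word occurrences only, so substrings of other
        # words (e.g. the "n" in "an") are left untouched.
        prompt = re.sub(rf"\b{re.escape(var)}\b", new_name, prompt)
    return prompt

# Example: perturb a toy introductory problem before prompting an LLM.
original = "Read an integer n, then print the sum of the first n positive integers."
print(randomize_variable_names(original, ["n"]))
```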

Related research

Think Outside the Code: Brainstorming Boosts Large Language Models in Code Generation (05/18/2023)
Code generation aims to automatically generate source code from high-lev...

Language Modelling for Source Code with Transformer-XL (07/31/2020)
It has been found that software, like natural language texts, exhibits "...

Is ChatGPT the Ultimate Programming Assistant – How far is it? (04/24/2023)
The recent progress in generative AI techniques has significantly influe...

Evaluating the robustness of source code plagiarism detection tools to pervasive plagiarism-hiding modifications (02/08/2021)
Source code plagiarism is a common occurrence in undergraduate computer ...

The Larger They Are, the Harder They Fail: Language Models do not Recognize Identifier Swaps in Python (05/24/2023)
Large Language Models (LLMs) have successfully been applied to code gene...

Competition-Level Code Generation with AlphaCode (02/08/2022)
Programming is a powerful and ubiquitous problem-solving tool. Developin...

Is GPT-4 a Good Data Analyst? (05/24/2023)
As large language models (LLMs) have demonstrated their powerful capabil...
