Explaining Competitive-Level Programming Solutions using LLMs

07/11/2023
by Jierui Li, et al.

In this paper, we approach competitive-level programming problem-solving as a composite task of reasoning and code generation. We propose a novel method to automatically annotate natural language explanations to <problem, solution> pairs. We show that despite poor performance in solving competitive-level programming problems, state-of-the-art LLMs exhibit a strong capacity for describing and explaining solutions. Our explanation generation method produces a structured explanation of the solution, containing both descriptions and analysis. To evaluate the quality of the annotated explanations, we examine their effectiveness in two aspects: 1) satisfying the human programming experts who authored the oracle solutions, and 2) aiding LLMs in solving problems more effectively. The experimental results on the CodeContests dataset demonstrate that while GPT-3.5 and GPT-4 have comparable abilities in describing the solution, GPT-4 shows a better understanding of the key idea behind the solution.
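To illustrate the kind of annotation pipeline the abstract describes, the sketch below shows one plausible way to prompt an LLM to produce a structured explanation for a <problem, solution> pair. The prompt wording, section headings, and model name are illustrative assumptions, not the paper's actual prompts.

```python
# A minimal sketch (assumptions, not the paper's exact method) of annotating a
# <problem, solution> pair with a structured natural-language explanation via an LLM.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical prompt template: asks for a description plus analysis of the key idea.
EXPLANATION_TEMPLATE = """You are given a competitive programming problem and an accepted solution.
Write a structured explanation with the following sections:
1. Problem description (in your own words)
2. Key idea / observation behind the solution
3. Step-by-step description of the algorithm
4. Time and space complexity

Problem:
{problem}

Solution code:
{solution}
"""

def annotate_explanation(problem: str, solution: str, model: str = "gpt-4") -> str:
    """Ask the LLM to describe and explain an oracle solution."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": EXPLANATION_TEMPLATE.format(problem=problem, solution=solution),
        }],
        temperature=0.2,  # low temperature for more deterministic annotations
    )
    return response.choices[0].message.content
```

The generated explanations could then be judged by the solution's author and reused as auxiliary input when asking an LLM to solve the problem, matching the two evaluation aspects described above.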
