InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback

06/26/2023
by John Yang, et al.

Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create two interactive code environments with Bash and SQL as action spaces, leveraging data from the static Spider and NL2Bash datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to incorporate new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
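
The interaction loop described in the abstract (code as actions, execution feedback as observations) can be sketched with a toy Gym-style environment. The class and method names below (InteractiveBashEnv, reset, step) are illustrative assumptions rather than InterCode's actual API, and the sketch runs Bash locally via subprocess instead of inside the self-contained Docker containers InterCode uses for safe, reproducible execution:

    # Minimal sketch of the "code as actions, execution feedback as observations"
    # loop. All names here are hypothetical illustrations, not the InterCode API.
    import subprocess
    from typing import Tuple


    class InteractiveBashEnv:
        """Toy Gym-style environment where each action is a Bash command."""

        def __init__(self, instruction: str):
            self.instruction = instruction  # natural-language task description

        def reset(self) -> str:
            # The initial observation is simply the task instruction.
            return self.instruction

        def step(self, action: str) -> Tuple[str, float, bool]:
            # Execute the generated code and return its output as the observation.
            result = subprocess.run(
                ["bash", "-c", action], capture_output=True, text=True, timeout=30
            )
            observation = result.stdout + result.stderr
            # A real benchmark environment would score the resulting state against
            # a gold solution; here we only signal whether the command succeeded.
            reward = 1.0 if result.returncode == 0 else 0.0
            done = False  # the agent decides when to submit its final answer
            return observation, reward, done


    if __name__ == "__main__":
        env = InteractiveBashEnv("Count the number of .py files in the repo.")
        obs = env.reset()
        # An LLM agent (e.g., with ReAct-style prompting) would choose the next
        # command from the running dialogue; a fixed command stands in here.
        obs, reward, done = env.step("find . -name '*.py' | wc -l")
        print(obs)

Because the loop exposes only reset and step, the same interface supports single-turn seq2seq generation (one step per episode) as well as multi-turn interactive agents that condition each new command on the accumulated execution feedback.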

