An Exploratory Literature Study on Sharing and Energy Use of Language Models for Source Code

07/05/2023
by   Max Hort, et al.

Large language models trained on source code can support a variety of software development tasks, such as code recommendation and program repair. Large amounts of training data benefit the models' performance; however, the size of the data and models results in long training times and high energy consumption. While publishing source code allows for replicability, users need to repeat the expensive training process if trained models are not shared. The main goal of the study is to investigate whether publications that train language models for software engineering (SE) tasks share their source code and trained artifacts. The second goal is to analyze the transparency of reporting on training energy usage. We perform a snowballing-based literature search to find publications on language models for source code, and analyze their reusability from a sustainability standpoint. From 494 unique publications, we identified 293 relevant publications that use language models to address code-related tasks. Among them, 27% make artifacts available for reuse. These artifacts can take the form of tools or IDE plugins designed for specific tasks, or of task-agnostic models that can be fine-tuned for a variety of downstream tasks. Moreover, we collect insights on the hardware used for model training, as well as training time, which together determine the energy consumption of the development process. We find that there are deficiencies in the sharing of information and artifacts for current studies on source code models for software engineering tasks, with 40% of the surveyed papers not sharing source code or trained artifacts. We recommend the sharing of source code as well as trained artifacts, to enable sustainable reproducibility. Moreover, comprehensive information on training times and hardware configurations should be shared for transparency regarding a model's carbon footprint.
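
To make the link between hardware, training time, and carbon footprint concrete, the sketch below estimates training energy and CO2-equivalent emissions from a reported hardware configuration. This is only an illustration of the common estimation approach (power draw multiplied by training time, scaled by data-center PUE and grid carbon intensity), not a method taken from the study; the GPU count, power draw, training duration, PUE, and carbon intensity values are hypothetical placeholders.

def estimate_training_footprint(num_gpus: int,
                                gpu_power_watts: float,
                                training_hours: float,
                                pue: float = 1.5,
                                carbon_intensity_kg_per_kwh: float = 0.4):
    """Rough estimate of training energy (kWh) and CO2-equivalent emissions (kg).

    Assumes the GPUs run near their nominal power draw for the whole training
    time; the PUE and grid carbon intensity defaults are illustrative, not
    measured values.
    """
    energy_kwh = num_gpus * gpu_power_watts * training_hours / 1000.0 * pue
    co2_kg = energy_kwh * carbon_intensity_kg_per_kwh
    return energy_kwh, co2_kg


if __name__ == "__main__":
    # Hypothetical reported setup: 8 GPUs at ~300 W each, trained for 120 hours.
    energy, co2 = estimate_training_footprint(num_gpus=8,
                                              gpu_power_watts=300,
                                              training_hours=120)
    print(f"~{energy:.0f} kWh, ~{co2:.0f} kg CO2e")

Estimates of this kind are only possible when papers report both the hardware configuration and the training time, which is why the study argues for sharing this information alongside code and trained artifacts.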

Related research

SCELMo: Source Code Embeddings from Language Models (04/28/2020)
Continuous embeddings of tokens in computer programs have been used to s...

Software Artifact Mining in Software Engineering Conferences: A Meta-Analysis (07/18/2022)
Background: Software development results in the production of various ty...

A Pre-Trained BERT Model for Android Applications (12/12/2022)
The automation of an increasingly large number of software engineering t...

An Evalutation of Programming Language Models' performance on Software Defect Detection (09/10/2019)
This dissertation presents an evaluation of several language models on s...

MetaTPTrans: A Meta Learning Approach for Multilingual Code Representation Learning (06/13/2022)
Representation learning of source code is essential for applying machine...

LLMMaps – A Visual Metaphor for Stratified Evaluation of Large Language Models (04/02/2023)
Large Language Models (LLMs) have revolutionized natural language proces...

Towards Data-driven GIM tools: Two Prototypes (09/26/2022)
Here we describe two approaches to improve group information management ...
