Evaluation Methodologies for Code Learning Tasks

08/22/2021
by Pengyu Nie, et al.

There has been a growing interest in developing machine learning (ML) models for code learning tasks, e.g., comment generation and method naming. Despite substantial increases in the effectiveness of ML models, the evaluation methodologies, i.e., the way people split datasets into training, validation, and testing sets, have not been well designed. Specifically, no prior work on the aforementioned topics considered the timestamps of code and comments during evaluation (e.g., examples in the testing set might be from 2010 while examples in the training set might be from 2020). This may lead to evaluations that are inconsistent with the intended use cases of the ML models. In this paper, we formalize a novel time-segmented evaluation methodology, as well as the two methodologies commonly used in the literature: mixed-project and cross-project. We argue that the time-segmented methodology is the most realistic. We also describe various use cases of ML models and provide a guideline for choosing an evaluation methodology that matches each use case. To assess the impact of the methodologies, we collect a dataset of code-comment pairs with timestamps to train and evaluate several recent code learning ML models for the comment generation and method naming tasks. Our results show that different methodologies can lead to conflicting and inconsistent results. We invite the community to adopt the time-segmented evaluation methodology.
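To make the contrast between the three methodologies concrete, the sketch below shows one way the splits could be implemented. It is a minimal illustration, not code from the paper: the field names (`project`, `timestamp`), cutoff dates, and split ratios are assumptions chosen for the example.

```python
# Minimal sketch of the three dataset-splitting methodologies discussed above.
# Assumes each example is a dict with illustrative "project" and "timestamp"
# fields; names, ratios, and cutoffs are hypothetical, not from the paper.
import random
from datetime import datetime


def mixed_project_split(examples, seed=0, ratios=(0.8, 0.1, 0.1)):
    """Shuffle all examples together; project boundaries and time are ignored."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(ratios[0] * n)
    n_valid = int(ratios[1] * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_valid],
            shuffled[n_train + n_valid:])


def cross_project_split(examples, train_projects, valid_projects, test_projects):
    """Keep whole projects inside a single set; timestamps are still ignored."""
    train = [e for e in examples if e["project"] in train_projects]
    valid = [e for e in examples if e["project"] in valid_projects]
    test = [e for e in examples if e["project"] in test_projects]
    return train, valid, test


def time_segmented_split(examples, train_until, valid_until):
    """Split by timestamp: every test example is newer than anything in
    training or validation, mirroring how a deployed model would be used."""
    train = [e for e in examples if e["timestamp"] <= train_until]
    valid = [e for e in examples if train_until < e["timestamp"] <= valid_until]
    test = [e for e in examples if e["timestamp"] > valid_until]
    return train, valid, test


# Example usage with hypothetical cutoff dates:
# train, valid, test = time_segmented_split(
#     pairs,
#     train_until=datetime(2019, 1, 1),
#     valid_until=datetime(2020, 1, 1),
# )
```

Under the mixed-project split, a model can be tested on an example that is older than (or from the same project and time period as) its training data; the time-segmented split rules this out by construction.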
