Evaluation Methodologies for Code Learning Tasks

by Pengyu Nie, et al.

There has been growing interest in developing machine learning (ML) models for code learning tasks, e.g., comment generation and method naming. Despite a substantial increase in the effectiveness of ML models, the evaluation methodologies, i.e., the way people split datasets into training, validation, and testing sets, were not well designed. Specifically, no prior work on the aforementioned topics considered the timestamps of code and comments during evaluation (e.g., examples in the testing set might be from 2010 while examples in the training set might be from 2020). This may lead to evaluations that are inconsistent with the intended use cases of the ML models. In this paper, we formalize a novel time-segmented evaluation methodology, as well as the two methodologies commonly used in the literature: mixed-project and cross-project. We argue that the time-segmented methodology is the most realistic. We also describe various use cases of ML models and provide a guideline for choosing an evaluation methodology for each use case. To assess the impact of methodologies, we collect a dataset of code-comment pairs with timestamps to train and evaluate several recent ML models for the comment generation and method naming tasks. Our results show that different methodologies can lead to conflicting and inconsistent results. We invite the community to adopt the time-segmented evaluation methodology.
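The core idea of the time-segmented methodology can be sketched in a few lines: every training example must predate every validation example, which in turn must predate every test example, mirroring how a deployed model only sees past data. The snippet below is a minimal illustration with hypothetical timestamps and cutoff dates; the paper's actual splits are derived from real project histories.

```python
from datetime import date

# Hypothetical code-comment examples with timestamps
# (the paper mines such pairs, with timestamps, from real projects).
examples = [
    {"id": 1, "time": date(2010, 3, 1)},
    {"id": 2, "time": date(2014, 6, 1)},
    {"id": 3, "time": date(2018, 9, 1)},
    {"id": 4, "time": date(2019, 1, 1)},
    {"id": 5, "time": date(2021, 5, 1)},
]

def time_segmented_split(examples, train_end, valid_end):
    """Split by time: train < train_end <= valid < valid_end <= test,
    so no future example ever leaks into training."""
    train = [e for e in examples if e["time"] < train_end]
    valid = [e for e in examples if train_end <= e["time"] < valid_end]
    test  = [e for e in examples if valid_end <= e["time"]]
    return train, valid, test

# Example cutoffs (illustrative, not the paper's actual dates)
train, valid, test = time_segmented_split(
    examples, date(2018, 1, 1), date(2020, 1, 1))
```

By contrast, the mixed-project methodology shuffles examples across both projects and time, so a model can effectively be trained on the "future" of its test set; the time-segmented split rules this out by construction.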



