Explaining the optimistic performance evaluation of newly proposed methods: a cross-design validation experiment

09/05/2022
by Christina Nießl, et al.

The constant development of new data analysis methods in many fields of research is accompanied by an increasing awareness that these new methods often perform better in their introductory paper than in subsequent comparison studies conducted by other researchers. We attempt to explain this discrepancy by conducting a systematic experiment that we call "cross-design validation of methods". In the experiment, we select two methods designed for the same data analysis task, reproduce the results shown in each paper, and then re-evaluate each method based on the study design (i.e., data sets, competing methods, and evaluation criteria) that was used to show the abilities of the other method. We conduct the experiment for two data analysis tasks, namely cancer subtyping using multi-omic data and differential gene expression analysis. Three of the four methods included in the experiment indeed perform worse when evaluated on the new study design, which is mainly attributable to the use of different data sets. Apart from illustrating the many degrees of freedom involved in the assessment of a method and their effect on its performance, our experiment suggests that the performance discrepancies between original and subsequent papers may be caused not only by the non-neutrality of the authors proposing the new method but also by differences in the level of expertise and the field of application.
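The cross-design validation procedure described above can be summarized as evaluating each of the two methods under both study designs. The following is a minimal sketch of that idea, not the authors' actual code: the names `StudyDesign`, `evaluate`, and `cross_design_validation`, and the callable interfaces for methods and metrics, are hypothetical placeholders chosen purely for illustration.

```python
# Minimal sketch of the "cross-design validation" experiment.
# All names and interfaces here are hypothetical, not the authors' code.

from dataclasses import dataclass
from typing import Callable, Dict, Sequence, Tuple


@dataclass
class StudyDesign:
    """Study design as used in a method's introductory paper."""
    name: str
    datasets: Dict[str, object]                  # data sets used for evaluation
    metric: Callable[[object, object], float]    # evaluation criterion


def evaluate(method: Callable[[object], object],
             design: StudyDesign) -> Dict[str, float]:
    """Score one method on every data set of one study design."""
    scores = {}
    for dataset_name, data in design.datasets.items():
        result = method(data)                    # apply the method to the data
        scores[dataset_name] = design.metric(result, data)
    return scores


def cross_design_validation(method_a, method_b,
                            design_a: StudyDesign,
                            design_b: StudyDesign
                            ) -> Dict[Tuple[str, str], Dict[str, float]]:
    """Evaluate each method on its own design and on the other's design."""
    return {
        ("method A", design_a.name): evaluate(method_a, design_a),  # reproduces paper A
        ("method A", design_b.name): evaluate(method_a, design_b),  # A under B's design
        ("method B", design_b.name): evaluate(method_b, design_b),  # reproduces paper B
        ("method B", design_a.name): evaluate(method_b, design_a),  # B under A's design
    }
```

Comparing the two "own design" entries with the two "cross design" entries makes the effect of the study design (data sets and evaluation criteria) on the reported performance directly visible.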


