
Should we Reload Time Series Classification Performance Evaluation? (a position paper)

by Dominique Gay, et al.

Since the introduction and public availability of the UCR time series benchmark datasets, numerous Time Series Classification (TSC) methods have been designed, evaluated, and compared to each other. We offer a critical view of the performance evaluation protocols used in recent TSC literature. The main goal of this 'position' paper is to stimulate discussion and reflection about performance evaluation in the TSC literature.
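The evaluation protocol under discussion typically reports accuracy on each UCR dataset's fixed, predefined train/test split, often with a 1-NN Euclidean-distance classifier as the baseline. As a hedged illustration only (the synthetic data and helper name below are hypothetical, not from the paper), this single-split protocol can be sketched as:

```python
import numpy as np

def one_nn_accuracy(X_train, y_train, X_test, y_test):
    """Accuracy of a 1-NN Euclidean classifier, a common TSC baseline."""
    correct = 0
    for x, y in zip(X_test, y_test):
        # Distance from the test series to every training series.
        d = np.linalg.norm(X_train - x, axis=1)
        pred = y_train[np.argmin(d)]
        correct += int(pred == y)
    return correct / len(X_test)

# Toy two-class data standing in for one UCR dataset's fixed split.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(3, 1, (20, 50))])
y_train = np.array([0] * 20 + [1] * 20)
X_test = np.vstack([rng.normal(0, 1, (10, 50)), rng.normal(3, 1, (10, 50))])
y_test = np.array([0] * 10 + [1] * 10)

acc = one_nn_accuracy(X_train, y_train, X_test, y_test)
print(f"accuracy on the fixed test split: {acc:.2f}")
```

A single number per dataset from one fixed split, aggregated over the archive, is exactly the kind of summary whose statistical soundness a position paper on evaluation protocols would question (e.g., sensitivity to the particular split, and how per-dataset scores are compared across classifiers).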


