Should we Reload Time Series Classification Performance Evaluation? (a position paper)

03/08/2019
by   Dominique Gay, et al.

Since the introduction and public availability of the UCR time series benchmark data sets, numerous Time Series Classification (TSC) methods have been designed, evaluated, and compared to each other. We offer a critical view of the TSC performance evaluation protocols put in place in the recent TSC literature. The main goal of this `position' paper is to stimulate discussion and reflection about performance evaluation in the TSC literature.


