A Performance-Explainability Framework to Benchmark Machine Learning Methods: Application to Multivariate Time Series Classifiers

05/29/2020
by Kevin Fauvel, et al.

We propose a new performance-explainability analytical framework to assess and benchmark machine learning methods. The framework details a set of characteristics that operationalize the joint assessment of a method's predictive performance and its explainability. To illustrate its use, we apply the framework to benchmark the current state-of-the-art multivariate time series classifiers.
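The abstract describes assessing methods along a set of performance and explainability characteristics. As a minimal sketch of how such an assessment could be recorded and compared, the snippet below uses illustrative characteristic names and values (the field names, value scales, and method names are assumptions for illustration, not the framework's actual criteria):

```python
from dataclasses import dataclass, asdict

@dataclass
class MethodAssessment:
    """One method's hypothetical performance-explainability profile."""
    name: str                # method identifier (placeholder names below)
    performance: str         # e.g. "best", "similar", "below" (assumed scale)
    comprehensibility: str   # e.g. "black-box", "grey-box", "white-box" (assumed)
    granularity: str         # e.g. "global", "local" (assumed)

def benchmark(methods):
    """Flatten assessments into rows for a side-by-side comparison table."""
    return [asdict(m) for m in methods]

if __name__ == "__main__":
    rows = benchmark([
        MethodAssessment("MethodA", "best", "black-box", "global"),
        MethodAssessment("MethodB", "similar", "white-box", "local"),
    ])
    for row in rows:
        print(row)
```

Structuring each assessment as a record makes the benchmark a simple table comparison: every method is scored on the same characteristics, so trade-offs (e.g. higher performance versus lower comprehensibility) are visible at a glance.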

Related research:

- XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification (09/10/2020)
- What is the state of the art? Accounting for multiplicity in machine learning benchmark performance (03/10/2023)
- Counterfactual Explanations for Machine Learning on Multivariate Time Series Data (08/25/2020)
- Towards Explainability of Machine Learning Models in Insurance Pricing (03/24/2020)
- Energy time series forecasting-Analytical and empirical assessment of conventional and machine learning models (08/24/2021)
- Local Cascade Ensemble for Multivariate Data Classification (05/07/2020)
- HiRID-ICU-Benchmark – A Comprehensive Machine Learning Benchmark on High-resolution ICU Data (11/16/2021)
