Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods

02/08/2022
by Dominique Mercier, et al.

In the last decade, neural networks have made a huge impact in both industry and research due to their ability to extract meaningful features from imprecise or complex data and to achieve superhuman performance in several domains. However, the lack of transparency hampers the use of these networks in safety-critical areas, where interpretability is often required by law. Recently, several methods have been proposed to open up this black box by providing interpretations of the predictions made by these models. This paper focuses on time series analysis and benchmarks several state-of-the-art attribution methods that compute explanations for convolutional classifiers. The presented experiments cover gradient-based and perturbation-based attribution methods. A detailed analysis shows that perturbation-based approaches are superior with respect to Sensitivity and the occlusion game, and they tend to produce explanations with higher continuity. In contrast, gradient-based techniques excel in runtime and Infidelity. In addition, the paper validates the methods' dependence on the trained model, their feasible application domains, and their individual characteristics. The findings accentuate that choosing the best-suited attribution method strongly depends on the intended use case; neither a category of attribution methods nor a single approach showed outstanding performance across all aspects.
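
To make the two method families concrete, below is a minimal sketch of a perturbation-based occlusion attribution and a gradient-based saliency attribution for a convolutional time-series classifier. The function names, the window size, and the toy PyTorch model are illustrative assumptions and not taken from the paper's implementation.

```python
import torch

def occlusion_attribution(model, x, target, window=8, baseline=0.0):
    """Perturbation-based attribution: slide a baseline window over the time
    axis and record the drop in the target-class score for each segment."""
    model.eval()
    with torch.no_grad():
        ref_score = model(x)[0, target].item()
        attribution = torch.zeros(x.shape[-1])
        for start in range(0, x.shape[-1], window):
            end = min(start + window, x.shape[-1])
            occluded = x.clone()
            occluded[..., start:end] = baseline  # mask one time segment
            # Larger score drop -> higher importance of this segment.
            attribution[start:end] = ref_score - model(occluded)[0, target].item()
    return attribution

def gradient_saliency(model, x, target):
    """Gradient-based attribution: absolute gradient of the target-class
    score with respect to the input, aggregated over channels."""
    model.eval()
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad.abs().sum(dim=1).squeeze(0)

# Example usage with a hypothetical toy convolutional classifier on a
# univariate series of length 128 (batch, channels, time).
model = torch.nn.Sequential(
    torch.nn.Conv1d(1, 8, kernel_size=5, padding=2),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool1d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 2),
)
x = torch.randn(1, 1, 128)
occ = occlusion_attribution(model, x, target=0)  # shape: (128,)
sal = gradient_saliency(model, x, target=0)      # shape: (128,)
```

The occlusion variant needs one forward pass per window (hence higher runtime), while the saliency variant needs a single forward and backward pass, which reflects the runtime trade-off reported in the abstract.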


Related research

12/08/2020
An Empirical Study of Explainable AI Techniques on Deep Learning Models For Time Series Tasks
Decision explanations of machine learning black-box models are often gen...

02/16/2022
TimeREISE: Time-series Randomized Evolving Input Sample Explanation
Deep neural networks are one of the most successful classifiers across d...

11/08/2022
Privacy Meets Explainability: A Comprehensive Impact Benchmark
Since the mid-10s, the era of Deep Learning (DL) has continued to this d...

04/28/2023
Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability
Analysis of how semantic concepts are represented within Convolutional N...

05/01/2020
A Comprehensive Study on Visual Explanations for Spatio-temporal Networks
Identifying and visualizing regions that are significant for a given dee...

05/04/2023
Distributing Synergy Functions: Unifying Game-Theoretic Interaction Methods for Machine-Learning Explainability
Deep learning has revolutionized many areas of machine learning, from co...

04/07/2021
Information Bottleneck Attribution for Visual Explanations of Diagnosis and Prognosis
Visual explanation methods have an important role in the prognosis of th...
