TSXplain: Demystification of DNN Decisions for Time-Series using Natural Language and Statistical Features

05/15/2019
by   Mohsin Munir, et al.

Neural networks (NNs) are often considered black boxes due to the lack of explainability and transparency in their decisions. This significantly hampers their deployment in environments where explainability is as essential as accuracy. Recently, significant effort has gone into interpreting these deep networks with the aim of opening up the black box. However, most of these approaches are developed specifically for visual modalities. Moreover, the interpretations they provide require expert knowledge to understand, indicating a vital gap between the explanations these systems produce and the novice user. To bridge this gap, we present a novel framework, the Time-Series eXplanation (TSXplain) system, which produces a natural-language explanation of the decision taken by a NN. It uses extracted statistical features to describe the NN's decision, merging the world of deep learning with that of statistics. A two-level explanation provides ample description of the decision made by the network, aiding expert and novice users alike. Our survey and reliability-assessment test confirm that the generated explanations are meaningful and correct. We believe that generating natural-language descriptions of a network's decisions is a big step towards opening up the black box.
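The abstract describes combining statistical features of a time series with the network's prediction to produce a natural-language description. The following is a minimal, hypothetical sketch of that idea, not the authors' actual pipeline: it computes a few simple statistics (mean, standard deviation, the most deviant point) and fills them into an explanation template. All function names and the template wording are illustrative assumptions.

```python
import statistics

def extract_features(series):
    # Illustrative statistical features; TSXplain's actual feature set may differ.
    mean = statistics.mean(series)
    std = statistics.stdev(series)
    # Index of the point deviating most from the mean.
    peak_idx = max(range(len(series)), key=lambda i: abs(series[i] - mean))
    return {"mean": mean, "std": std,
            "peak_index": peak_idx, "peak_value": series[peak_idx]}

def explain(label, series):
    # Template a natural-language explanation from the extracted features.
    f = extract_features(series)
    deviation = abs(f["peak_value"] - f["mean"]) / f["std"]
    return (f"The network classified this sequence as '{label}'. "
            f"The series has mean {f['mean']:.2f} and standard deviation "
            f"{f['std']:.2f}; the most anomalous point is at index "
            f"{f['peak_index']} with value {f['peak_value']:.2f}, "
            f"deviating {deviation:.1f} standard deviations from the mean.")

print(explain("anomalous", [1.0, 1.1, 0.9, 1.0, 5.0, 1.05, 0.95]))
```

A second, coarser level of explanation (for novice users) could reuse the same features but drop the numeric detail, which is one way to realize the two-level explanation the paper mentions.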


