A new interpretable unsupervised anomaly detection method based on residual explanation

03/14/2021
by   David F. N. Oliveira, et al.

Despite their superior performance in modeling complex patterns for challenging problems, the black-box nature of Deep Learning (DL) methods limits their application in real-world critical domains. The lack of a straightforward way to enable human reasoning about black-box decisions hinders any preventive action against unexpected events, which may lead to catastrophic consequences. To tackle the opacity of black-box models, interpretability has become a fundamental requirement in DL-based systems, fostering trust and knowledge by providing ways to understand the model's behavior. Although interpretability is currently a hot topic, further advances are still needed to overcome the limitations of existing interpretability methods for unsupervised DL-based Anomaly Detection (AD). Autoencoders (AE) are at the core of unsupervised DL-based AD applications, achieving best-in-class performance. However, because AE-based AD is hybrid (the anomaly score requires additional calculations outside the network), only model-agnostic interpretability methods can be applied to it, and these methods are computationally expensive when processing a large number of parameters. In this paper we present RXP (Residual eXPlainer), a new interpretability method that addresses these limitations for AE-based AD in large-scale systems. It stands out for its implementation simplicity, low computational cost, and deterministic behavior: explanations are obtained by analyzing the deviations of the reconstructed input features. In an experiment using data from a real heavy-haul railway line, the proposed method achieved performance superior to SHAP, demonstrating its potential to support decision making in large-scale critical systems.
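The core idea described in the abstract (explaining an anomaly by the per-feature deviation between an input and its autoencoder reconstruction) can be sketched in a few lines of Python. Everything below, including the function name explain_residuals and the toy reconstruction values, is an illustrative assumption rather than the authors' actual RXP implementation.

import numpy as np

def explain_residuals(x, x_hat, feature_names, top_k=3):
    """Rank input features by absolute reconstruction error (residual)."""
    residuals = np.abs(x - x_hat)        # per-feature deviation between input and reconstruction
    order = np.argsort(residuals)[::-1]  # indices sorted by largest deviation first
    return [(feature_names[i], float(residuals[i])) for i in order[:top_k]]

# Toy example: a 4-feature sample and a hypothetical autoencoder reconstruction.
x     = np.array([0.9, 0.1, 0.5, 0.3])  # observed sample
x_hat = np.array([0.2, 0.1, 0.4, 0.3])  # reconstruction (feature f0 is poorly reconstructed)
print(explain_residuals(x, x_hat, ["f0", "f1", "f2", "f3"]))
# e.g. [('f0', 0.70...), ('f2', 0.10...), ('f3', 0.0)]

Because the explanation is a deterministic function of the residual vector the AE already produces, no extra model evaluations are needed, which is consistent with the low computational cost the abstract claims over perturbation-based agnostic methods such as SHAP.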


research · 09/23/2021
DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications
Unsupervised Deep Learning (DL) techniques have been widely used in vari...

research · 07/04/2023
Prototypes as Explanation for Time Series Anomaly Detection
Detecting abnormal patterns that deviate from a certain regular repeatin...

research · 05/15/2023
Topological Interpretability for Deep-Learning
With the increasing adoption of AI-based systems across everyday life, t...

research · 10/11/2021
Development and testing of an image transformer for explainable autonomous driving systems
In the last decade, deep learning (DL) approaches have been used success...

research · 03/13/2022
Algebraic Learning: Towards Interpretable Information Modeling
Along with the proliferation of digital data collected using sensor tech...

research · 04/08/2023
Deep Prototypical-Parts Ease Morphological Kidney Stone Identification and are Competitively Robust to Photometric Perturbations
Identifying the type of kidney stones can allow urologists to determine ...
