Explainable multi-class anomaly detection on functional data

05/03/2022
by Mathieu Cura, et al.

In this paper we describe an approach to anomaly detection, and to explaining the detected anomalies, in multivariate functional data. The detection procedure transforms each series into a vector of features and applies the Isolation Forest algorithm. The explanation procedure is based on the computation of SHAP coefficients and on the use of a supervised decision tree. We apply the method to simulated data, to measure its performance, and to real data coming from industry.
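To make the pipeline concrete, here is a minimal sketch in Python, assuming numpy, scikit-learn, and the shap package. The synthetic data, the per-channel summary features, and the use of the detector's binary normal/anomaly labels for the tree are illustrative assumptions; the paper targets multiple anomaly classes and its exact feature set is not reproduced here.

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.tree import DecisionTreeClassifier, export_text
import shap

rng = np.random.default_rng(0)

# Toy multivariate functional data: n series observed at t time points
# in d channels (shape: n x t x d), with a few shifted anomalous series.
n, t, d = 200, 100, 3
series = rng.normal(size=(n, t, d))
series[-10:] += 3.0

def extract_features(x):
    # Summarize one series as a flat vector: per-channel mean, standard
    # deviation, and range. Placeholder features, not the paper's set.
    return np.concatenate([x.mean(axis=0), x.std(axis=0),
                           x.max(axis=0) - x.min(axis=0)])

X = np.array([extract_features(s) for s in series])

# Step 1: unsupervised detection with an Isolation Forest.
iso = IsolationForest(random_state=0).fit(X)
labels = iso.predict(X)  # +1 = normal, -1 = anomaly

# Step 2: SHAP coefficients explaining the forest's anomaly scores.
explainer = shap.TreeExplainer(iso)
shap_values = explainer.shap_values(X)  # one coefficient per feature

# Step 3: a supervised decision tree fitted on the detector's labels,
# yielding human-readable rules that separate the flagged series.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
print(export_text(tree))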


Related research

10/13/2022
A Survey on Explainable Anomaly Detection
In the past two decades, most research on anomaly detection has focused ...

07/01/2013
Syntactic sensitive complexity for symbol-free sequence
This work uses the L-system to construct a tree structure for the text s...

11/21/2019
Rule Extraction in Unsupervised Anomaly Detection for Model Explainability: Application to OneClass SVM
OneClass SVM is a popular method for unsupervised anomaly detection. As ...

06/18/2020
The Clever Hans Effect in Anomaly Detection
The 'Clever Hans' effect occurs when the learned model produces correct ...

07/23/2022
A general-purpose method for applying Explainable AI for Anomaly Detection
The need for explainable AI (XAI) is well established but relatively lit...

07/11/2022
Stochastic Functional Analysis and Multilevel Vector Field Anomaly Detection
Massive vector field datasets are common in multi-spectral optical and r...

04/09/2019
Functional Isolation Forest
For the purpose of monitoring the behavior of complex infrastructures (e...
