Towards Explaining Anomalies: A Deep Taylor Decomposition of One-Class Models

05/16/2018
by Jacob Kauffmann, et al.

A common machine learning task is to discriminate between normal and anomalous data points. In practice, reaching high accuracy at this task is not always sufficient; one would also like to understand why a given data point has been predicted in a certain way. We present a new principled approach for one-class SVMs that decomposes outlier predictions in terms of input variables. The method first recomposes the one-class model as a neural network with distance functions and min-pooling, and then performs a deep Taylor decomposition (DTD) of the model output. The proposed One-Class DTD is applicable to a number of common distance-based SVM kernels and is able to reliably explain a wide set of data anomalies. Furthermore, it outperforms baselines such as sensitivity analysis, nearest neighbor, and simple edge detection.
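To illustrate the idea of decomposing a distance-based outlier score onto input variables, here is a simplified sketch. It is not the paper's exact DTD propagation rules: it scores a point by its squared distance to the nearest support vector (distance units followed by min-pooling) and attributes that score to input dimensions via the per-dimension squared differences, which sum exactly to the score. The function name and toy data are illustrative assumptions.

```python
import numpy as np

def outlier_score_and_relevance(x, support_vectors):
    """Score x by its squared distance to the nearest support vector
    (min-pooling over distance units) and decompose the score onto
    input dimensions. Simplified sketch, not the paper's full method."""
    d2 = np.sum((support_vectors - x) ** 2, axis=1)  # squared distance to each unit
    j = np.argmin(d2)                                # min-pooling: nearest unit wins
    relevance = (x - support_vectors[j]) ** 2        # per-dimension contributions
    return d2[j], relevance

# toy example: three "support vectors" and one test point
Z = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
x = np.array([1.5, 1.0])
score, rel = outlier_score_and_relevance(x, Z)
# the decomposition is conservative: rel.sum() equals the outlier score
```

The conservation property (relevances summing to the model output) is the same consistency requirement that deep Taylor decomposition enforces layer by layer in the full method.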

Related research:

- Probabilistic Distance-Based Outlier Detection (05/16/2023): The scores of distance-based outlier detection methods are difficult to ...
- Latent Outlier Exposure for Anomaly Detection with Contaminated Data (02/16/2022): Anomaly detection aims at identifying data points that show systematic d...
- Explaining Anomalies Detected by Autoencoders Using SHAP (03/06/2019): Anomaly detection algorithms are often thought to be limited because the...
- WePaMaDM-Outlier Detection: Weighted Outlier Detection using Pattern Approaches for Mass Data Mining (06/09/2023): Weighted Outlier Detection is a method for identifying unusual or anomal...
- Attack vs Benign Network Intrusion Traffic Classification (05/15/2022): Intrusion detection systems (IDS) are used to monitor networks or system...
- A Neural Network Anomaly Detector Using the Random Cluster Model (01/28/2015): The random cluster model is used to define an upper bound on a distance ...