
The Importance of Suppressing Complete Reconstruction in Autoencoders for Unsupervised Outlier Detection

by   Yafei Shen, et al.
Soochow University

Autoencoders are widely used in outlier detection because of their ability to handle high-dimensional and nonlinear datasets. The reconstruction of a dataset by an autoencoder can be viewed as a complex regression process. In regression analysis, outliers are usually divided into high leverage points and influential points. Although autoencoders perform well at identifying influential points, problems remain when detecting high leverage points. Through theoretical derivation, we found that most outliers are detected in the direction corresponding to the worst-recovered principal component, whereas anomalies along well-recovered principal components are often missed. We propose a new loss function that resolves these deficiencies in outlier detection. The core idea of our scheme is that, to better detect high leverage points, we should suppress the complete reconstruction of the dataset so that high leverage points are converted into influential points; it is also necessary to ensure that the differences between the eigenvalues of the covariance matrix of the original dataset and those of its reconstruction are equal across all principal component directions. We justify our scheme through rigorous theoretical derivation. Finally, experiments on multiple datasets confirm that our scheme significantly improves the accuracy of outlier detection.
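The equal-eigenvalue-difference condition described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual loss: the function name `eigengap_uniformity_loss` and the specific penalty (the variance of the per-direction eigenvalue differences) are assumptions chosen to make the idea concrete. It compares the covariance spectra of the original data and its reconstruction and penalizes directions that are reconstructed better than others.

```python
import numpy as np

def eigengap_uniformity_loss(X, X_hat):
    """Hypothetical sketch of a loss encouraging equal eigenvalue
    differences between the covariance spectra of the original data X
    and its reconstruction X_hat (both of shape (n_samples, n_features)).

    Returns (mse, penalty): the usual reconstruction error plus a term
    that is zero only when every principal direction loses the same
    amount of variance under reconstruction.
    """
    # Standard autoencoder reconstruction term.
    mse = np.mean((X - X_hat) ** 2)

    # Eigenvalues of the two covariance matrices, sorted descending.
    lam_x = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
    lam_r = np.sort(np.linalg.eigvalsh(np.cov(X_hat, rowvar=False)))[::-1]

    # Per-direction eigenvalue differences; the abstract argues these
    # should be equal so that no direction is reconstructed "too well"
    # (which would hide high leverage points lying along it).
    gaps = lam_x - lam_r

    # Penalize deviation of each difference from their common mean.
    penalty = np.var(gaps)
    return mse, penalty
```

In a training loop, the penalty would be added (with some weight) to the reconstruction loss, so the autoencoder is discouraged from fully reconstructing any single principal direction.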
