Novelty Detection via Robust Variational Autoencoding

06/09/2020
by Chieh-Hsin Lai, et al.

We propose a new method for novelty detection that can tolerate nontrivial corruption of the training points, whereas previous works assumed either no or very low corruption. Our method trains a robust variational autoencoder (VAE), which aims to model the uncorrupted training points. To gain robustness to corruption, we make three changes to the standard VAE: 1. Modeling the latent distribution as a mixture of Gaussian inliers and outliers, while using only the inlier component at test time; 2. Applying the Wasserstein-1 metric for regularization, instead of the Kullback-Leibler divergence; and 3. Using a least absolute deviation reconstruction error, which is equivalent to assuming a heavy-tailed likelihood. We demonstrate state-of-the-art results on standard benchmark datasets for novelty detection.
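
The abstract does not give implementation details, so the following PyTorch sketch only illustrates how the three modifications could fit together in a training objective. It is not the authors' code: the deterministic MLP encoder/decoder, the per-dimension sorted-sample approximation of the Wasserstein-1 penalty, the mixture parameters (`outlier_frac`, `outlier_shift`), and the weight `lam` are all illustrative assumptions.

```python
# Hedged sketch of a robust-VAE-style training loss (not the authors' exact method).
import torch
import torch.nn as nn


class RobustVAE(nn.Module):
    """Simple MLP autoencoder stand-in; the paper's architecture may differ."""

    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, z_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


def mixture_prior_samples(n, z_dim, outlier_frac=0.1, outlier_shift=5.0):
    """Sample a hypothetical 2-component Gaussian mixture prior:
    an inlier component N(0, I) and a shifted outlier component."""
    is_outlier = torch.rand(n) < outlier_frac
    z = torch.randn(n, z_dim)
    z[is_outlier] += outlier_shift
    return z


def sliced_w1(z_a, z_b):
    """Crude per-coordinate Wasserstein-1 approximation between two equal-size
    empirical samples: sort each coordinate and average the absolute gaps."""
    return (torch.sort(z_a, dim=0).values - torch.sort(z_b, dim=0).values).abs().mean()


def robust_vae_loss(model, x, lam=1.0):
    x_hat, z = model(x)
    # 3. Least absolute deviation (L1) reconstruction -- heavy-tailed likelihood.
    recon = (x_hat - x).abs().mean()
    # 1.+2. Wasserstein-1-style penalty pulling the encoded batch toward a
    # Gaussian-mixture prior with inlier and outlier components.
    z_prior = mixture_prior_samples(x.shape[0], z.shape[1])
    return recon + lam * sliced_w1(z, z_prior)


if __name__ == "__main__":
    model = RobustVAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(64, 784)  # stand-in batch; replace with real training data
    opt.zero_grad()
    loss = robust_vae_loss(model, x)
    loss.backward()
    opt.step()
    print(float(loss))
```

At test time, consistent with the abstract's statement that only the inlier component is used, novelty would be scored against the inlier model only, e.g. by the L1 reconstruction error or the encoding's deviation from the inlier Gaussian; the exact scoring rule is given in the full paper.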
