Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders

07/05/2018
by Paul Bergmann, et al.

Convolutional autoencoders have emerged as popular models for unsupervised defect segmentation on image data. Most commonly, this task is performed by thresholding a pixel-wise reconstruction error based on an ℓ^p distance. However, this procedure generally leads to high novelty scores whenever the reconstruction exhibits slight localization inaccuracies around edges. We show that this problem prevents these approaches from being applied to complex real-world scenarios and that it cannot be easily avoided by employing more elaborate architectures. Instead, we propose to use a perceptual loss function based on structural similarity (SSIM). Our approach achieves state-of-the-art performance on a real-world dataset of nanofibrous materials, while being trained end-to-end without requiring additional priors such as pretrained networks or handcrafted features.
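As a rough illustration only (not the authors' implementation), the PyTorch sketch below contrasts the conventional per-pixel ℓ2 reconstruction loss with an SSIM-based one. The 11x11 uniform averaging window and the constants C1 and C2 are common SSIM defaults assumed here; the paper's exact setup may differ.

```python
import torch
import torch.nn.functional as F


def _local_mean(t, window_size, pad):
    # Local mean over a window_size x window_size uniform window
    # (a Gaussian window is another common choice).
    return F.avg_pool2d(t, window_size, stride=1, padding=pad,
                        count_include_pad=False)


def ssim_map(x, y, window_size=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Per-pixel SSIM between images x and y in [0, 1], shape (N, C, H, W)."""
    pad = window_size // 2
    mu_x = _local_mean(x, window_size, pad)
    mu_y = _local_mean(y, window_size, pad)
    var_x = _local_mean(x * x, window_size, pad) - mu_x ** 2
    var_y = _local_mean(y * y, window_size, pad) - mu_y ** 2
    cov_xy = _local_mean(x * y, window_size, pad) - mu_x * mu_y
    numerator = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    denominator = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return numerator / denominator


def ssim_loss(x, x_rec):
    # Train the autoencoder to maximize structural similarity,
    # i.e. minimize 1 - SSIM averaged over all pixels.
    return (1.0 - ssim_map(x, x_rec)).mean()


def l2_loss(x, x_rec):
    # Conventional pixel-wise loss: slight edge misalignments in the
    # reconstruction already produce large residuals.
    return ((x - x_rec) ** 2).mean()


# At test time, the per-pixel map 1 - ssim_map(x, autoencoder(x)) can be
# thresholded to obtain a defect segmentation, analogous to thresholding
# a pixel-wise reconstruction error.
```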

Related research:
- 01/10/2020: Improving Image Autoencoder Embeddings with Perceptual Loss
- 03/16/2020: Pretraining Image Encoders without Reconstruction via Feature Prediction Loss
- 09/23/2019: Object Segmentation using Pixel-wise Adversarial Loss
- 10/19/2019: Correlation Maximized Structural Similarity Loss for Semantic Segmentation
- 11/16/2016: Deep Variational Inference Without Pixel-Wise Reconstruction
- 10/15/2018: Supervised COSMOS Autoencoder: Learning Beyond the Euclidean Loss!
- 01/20/2022: WPPNets: Unsupervised CNN Training with Wasserstein Patch Priors for Image Superresolution