Learning and Evaluating Representations for Deep One-class Classification

11/04/2020
by   Kihyuk Sohn, et al.

We present a two-stage framework for deep one-class classification. We first learn self-supervised representations from one-class data, and then build one-class classifiers on the learned representations. The framework not only allows learning better representations, but also permits building one-class classifiers that are faithful to the target task. In particular, we present a novel distribution-augmented contrastive learning method that extends training distributions via data augmentation to obstruct the uniformity of contrastive representations. Moreover, we argue that classifiers inspired by the statistical perspective in generative or discriminative models are more effective than existing approaches, such as averaging normality scores from a surrogate classifier. In experiments, we demonstrate state-of-the-art performance on visual-domain one-class classification benchmarks. Finally, we present visual explanations confirming that the decision-making process of our deep one-class classifier is intuitive to humans. The code is available at: https://github.com/google-research/google-research/tree/master/deep_representation_one_class.
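To make the second stage concrete, here is a minimal sketch of building a simple one-class scorer on top of learned representations. The encoder outputs are stand-ins (random placeholder embeddings, not the paper's actual features), and the Gaussian kernel density estimate is just one example of the kind of statistically-motivated classifier the abstract alludes to; the names `kde_score` and `train_embed` are illustrative, not from the paper's codebase.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for stage-1 output: embeddings of 256 "normal" training
# samples in an 8-dimensional representation space.
train_embed = rng.normal(loc=0.0, scale=1.0, size=(256, 8))

def kde_score(x, train, bandwidth=1.0):
    """Normality score via a Gaussian kernel density estimate.

    Higher means the query embedding x lies in a denser region of the
    normal-class training embeddings.
    """
    sq_dists = ((train - x) ** 2).sum(axis=1)      # squared distance to each train point
    return np.mean(np.exp(-sq_dists / (2.0 * bandwidth ** 2)))

# An in-distribution query should score higher than a far-away outlier.
normal_query = rng.normal(loc=0.0, scale=1.0, size=8)
outlier_query = rng.normal(loc=6.0, scale=1.0, size=8)

print(kde_score(normal_query, train_embed) > kde_score(outlier_query, train_embed))
```

In this toy setup, thresholding `kde_score` yields a one-class decision rule; the quality of such a detector rests entirely on how well stage 1 separates normal data from everything else in embedding space.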


