OCFormer: One-Class Transformer Network for Image Classification

04/25/2022
by   Prerana Mukherjee, et al.
We propose a novel deep learning framework based on Vision Transformers (ViT) for one-class classification. The core idea is to use zero-centered Gaussian noise as a pseudo-negative class in the latent space and then train the network with a suitable loss function. Prior work has devoted considerable effort to learning good representations with a variety of loss functions that ensure both discriminative and compact properties. The proposed one-class Vision Transformer (OCFormer) is evaluated extensively on the CIFAR-10, CIFAR-100, Fashion-MNIST and CelebA eyeglasses datasets, and shows significant improvements over competing CNN-based one-class classifiers.
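The paper's core idea, training a classifier to separate real in-class latent features from zero-centered Gaussian noise acting as a pseudo-negative class, can be sketched in a few lines. The sketch below is a simplified illustration under our own assumptions, not the paper's implementation: a plain NumPy logistic discriminator stands in for the ViT backbone and the paper's actual loss, and all array shapes and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_one_class(pos_feats, noise_std=1.0, lr=0.1, epochs=200):
    """Train a linear discriminator separating real one-class features
    from zero-centered Gaussian pseudo-negatives.
    Illustrative stand-in for OCFormer's ViT latent features."""
    n, d = pos_feats.shape
    # Pseudo-negative class: zero-centered Gaussian noise in latent space.
    neg_feats = rng.normal(0.0, noise_std, size=(n, d))
    X = np.vstack([pos_feats, neg_feats])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
        grad = p - y                            # binary cross-entropy gradient
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def score(feats, w, b):
    """Higher score = more likely to belong to the one class."""
    return 1.0 / (1.0 + np.exp(-(feats @ w + b)))

# Toy usage: in-class features clustered away from the origin.
pos = rng.normal(3.0, 0.5, size=(200, 8))
w, b = train_one_class(pos)
print(score(rng.normal(3.0, 0.5, size=(5, 8)), w, b))  # close to 1
print(score(rng.normal(0.0, 1.0, size=(5, 8)), w, b))  # close to 0
```

Because the pseudo-negatives are free to sample, no real out-of-class data is needed at training time, which is the appeal of this family of one-class methods.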

Related research:

- One-Class Convolutional Neural Network (01/24/2019): We present a novel Convolutional Neural Network (CNN) based approach for...
- Intra-Class Uncertainty Loss Function for Classification (04/12/2021): Most classification models can be considered as the process of matching ...
- The Fourier Loss Function (02/05/2021): This paper introduces a new loss function induced by the Fourier-based M...
- Active Authentication using an Autoencoder regularized CNN-based One-Class Classifier (03/04/2019): Active authentication refers to the process in which users are unobtrusi...
- Latent Model Ensemble with Auto-localization (04/15/2016): Deep Convolutional Neural Networks (CNN) have exhibited superior perform...
- Semantic Noise Modeling for Better Representation Learning (11/04/2016): Latent representation learned from multi-layered neural networks via hie...
- From Maxout to Channel-Out: Encoding Information on Sparse Pathways (11/18/2013): Motivated by an important insight from neural science, we propose a new ...
