Adversarial confidence and smoothness regularizations for scalable unsupervised discriminative learning

06/04/2018
by Yi-Qing Wang, et al.

In this paper, we consider a generic probabilistic discriminative learner from the functional viewpoint and argue that, to make it learn well, it is necessary to constrain its hypothesis space to a set of non-trivial piecewise constant functions. To achieve this goal, we present a scalable unsupervised regularization framework. On the theoretical front, we prove that this framework is conducive to a factually confident and smooth discriminative model and connect it to an adversarial Taboo game, spectral clustering and virtual adversarial training. Experimentally, we take deep neural networks as our learners and demonstrate that, when trained under our framework in the unsupervised setting, they not only achieve state-of-the-art clustering results but also generalize well on both synthetic and real data.
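The abstract couples a confidence objective with a smoothness objective and relates the latter to virtual adversarial training (VAT). As a rough illustration only, the PyTorch sketch below pairs an entropy-based confidence term with a VAT-style smoothness term; the function names, hyperparameters (xi, eps, n_power), and loss weights are assumptions chosen for exposition, not the paper's exact formulation.

```python
# Illustrative sketch of the two unsupervised regularizers the abstract alludes to:
# a confidence term (low prediction entropy) and a VAT-style smoothness term.
# All names and hyperparameters here are hypothetical, not taken from the paper.
import torch
import torch.nn.functional as F

def confidence_loss(logits):
    """Encourage confident predictions by minimizing mean prediction entropy."""
    p = F.softmax(logits, dim=1)
    log_p = F.log_softmax(logits, dim=1)
    return -(p * log_p).sum(dim=1).mean()

def smoothness_loss(model, x, xi=1e-6, eps=2.5, n_power=1):
    """VAT-style smoothness: predictions should be stable under a small
    adversarial input perturbation, found by approximate power iteration."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)          # reference distribution
    d = torch.randn_like(x)                      # random starting direction
    d = d / (d.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1))) + 1e-12)
    for _ in range(n_power):
        d.requires_grad_(True)
        pred = F.log_softmax(model(x + xi * d), dim=1)
        adv_div = F.kl_div(pred, p, reduction="batchmean")
        grad = torch.autograd.grad(adv_div, d)[0]
        d = grad / (grad.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1))) + 1e-12)
        d = d.detach()
    pred = F.log_softmax(model(x + eps * d), dim=1)
    return F.kl_div(pred, p, reduction="batchmean")

def regularization(model, x, lam_conf=1.0, lam_smooth=1.0):
    """Total unsupervised regularization on an unlabeled batch x."""
    logits = model(x)
    return lam_conf * confidence_loss(logits) + lam_smooth * smoothness_loss(model, x)
```

In an unsupervised training loop, `regularization(model, x)` would be backpropagated on unlabeled batches; the relative weights of the two terms control the trade-off between confident and smooth (piecewise constant) predictions.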


