Winner-Take-All Autoencoders

09/09/2014
by Alireza Makhzani, et al.

In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion. We first introduce fully-connected winner-take-all autoencoders, which use mini-batch statistics to directly enforce a lifetime sparsity in the activations of the hidden units. We then propose the convolutional winner-take-all autoencoder, which combines the benefits of convolutional architectures and autoencoders for learning shift-invariant sparse representations. We describe a way to train convolutional autoencoders layer by layer, where in addition to lifetime sparsity, a spatial sparsity within each feature map is achieved using winner-take-all activation functions. We show that winner-take-all autoencoders can be used to learn deep sparse representations from the MNIST, CIFAR-10, ImageNet, Street View House Numbers and Toronto Face datasets, and achieve competitive classification performance.
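The two sparsity constraints in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the authors' implementation: lifetime sparsity keeps, for each hidden unit, only its top-k activations across the mini-batch; spatial sparsity keeps only the single largest activation in each feature map. The function names, the sparsity rate, and the tensor layout are illustrative choices.

```python
import numpy as np

def lifetime_sparsity(h, rate=0.05):
    """Lifetime sparsity (sketch): for each hidden unit (column),
    keep only its k largest activations across the mini-batch and
    zero out the rest, with k = rate * batch_size."""
    k = max(1, int(rate * h.shape[0]))
    # Per-unit threshold: the k-th largest activation in each column.
    thresh = np.sort(h, axis=0)[-k, :]
    return np.where(h >= thresh, h, 0.0)

def spatial_sparsity(fmaps):
    """Spatial sparsity (sketch): within each feature map, keep only
    the single largest activation (the 'winner') and zero the rest.
    fmaps has shape (batch, channels, height, width). Ties are all kept."""
    b, c, ht, w = fmaps.shape
    flat = fmaps.reshape(b, c, -1)
    winners = flat.max(axis=2, keepdims=True)
    return np.where(flat == winners, flat, 0.0).reshape(b, c, ht, w)
```

In training, the winning activations are kept and the gradient flows only through them, so each hidden unit or feature map specializes on the inputs it responds to most strongly.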


