Learning Sparse Feature Representations using Probabilistic Quadtrees and Deep Belief Nets

09/11/2015
by Saikat Basu, et al.

Learning sparse feature representations is a useful approach to unsupervised learning. In this paper, we present three labeled handwritten digit datasets, collectively called n-MNIST. We then propose a novel framework for the classification of handwritten digits that learns sparse representations using probabilistic quadtrees and Deep Belief Nets. On the MNIST and n-MNIST datasets, our framework shows promising results and significantly outperforms traditional Deep Belief Networks.
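The abstract only names the two building blocks, so the following is a minimal sketch of how a probabilistic-quadtree stage could sparsify a digit image before a Deep Belief Net sees it: a quadrant is recursively split only while a Bernoulli draw, biased here by local pixel variance, says to keep refining, and each leaf is summarized by its mean intensity. The function name probabilistic_quadtree_features, the variance-based split probability, and padding 28x28 MNIST digits to 32x32 are illustrative assumptions, not details taken from the paper.

import numpy as np

def probabilistic_quadtree_features(img, min_size=2, base_split_prob=0.9, rng=None):
    """Return sparse (x, y, size, mean) leaf summaries of a square image."""
    rng = rng if rng is not None else np.random.default_rng(0)
    leaves = []

    def split(x, y, size):
        patch = img[y:y + size, x:x + size]
        # Split probability grows with local variance, so flat background
        # regions collapse into single coarse leaves -- the source of sparsity.
        p_split = base_split_prob * min(1.0, float(patch.var()) / 0.05)
        if size > min_size and rng.random() < p_split:
            half = size // 2
            for dy in (0, half):
                for dx in (0, half):
                    split(x + dx, y + dy, half)
        else:
            leaves.append((x, y, size, float(patch.mean())))

    split(0, 0, img.shape[0])
    return leaves

# Toy usage: a random 32x32 "digit". Real 28x28 MNIST images would be
# zero-padded to 32x32 so every quadrant split divides evenly (an
# assumption of this sketch). The leaf summaries would then form the
# sparse input vector handed to the Deep Belief Net.
image = np.random.default_rng(1).random((32, 32)).astype(np.float32)
features = probabilistic_quadtree_features(image)
print(f"{len(features)} quadtree leaves instead of {image.size} raw pixels")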

