Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint

01/16/2013
by Xanadu Halkias, et al.

Deep Belief Networks (DBN) have been successfully applied on popular machine learning tasks. Specifically, when applied on hand-written digit recognition, DBNs have achieved approximate accuracy rates of 98.8%. In an effort to optimize the data representation achieved by the DBN and maximize their descriptive power, recent advances have focused on inducing sparse constraints at each layer of the DBN. In this paper we present a theoretical approach for sparse constraints in the DBN using the mixed norm for both non-overlapping and overlapping groups. We explore how these constraints affect the classification accuracy for digit recognition in three different datasets (MNIST, USPS, RIMES) and provide initial estimations of their usefulness by altering different parameters such as the group size and overlap percentage.
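The abstract references a mixed-norm penalty over groups of hidden units; in the non-overlapping case, the commonly used l_1,2 formulation takes an l_2 norm within each group and an l_1 norm (a plain sum) across groups, so whole groups of units are driven toward zero together. The paper's exact penalty and training procedure are not reproduced here; the following is a minimal NumPy sketch under that assumption, with the function name mixed_norm_penalty, the group size, and the application to a batch of hidden activations all illustrative choices rather than details from the paper.

import numpy as np

def mixed_norm_penalty(H, group_size):
    """l_1,2 mixed-norm penalty over non-overlapping groups of hidden units.

    H          -- (batch, n_hidden) matrix of hidden activations
    group_size -- number of hidden units per non-overlapping group
    Returns the batch-averaged sum over groups of each group's l_2 norm:
    l_2 within groups, l_1 across groups.
    """
    batch, n_hidden = H.shape
    assert n_hidden % group_size == 0, "groups must tile the hidden layer"
    # Reshape so the last axis holds one group of hidden units.
    groups = H.reshape(batch, n_hidden // group_size, group_size)
    # l_2 norm within each group, summed across groups, averaged over the batch.
    return np.sqrt((groups ** 2).sum(axis=2)).sum(axis=1).mean()

# Example: penalize the hidden activations of one layer.
rng = np.random.default_rng(0)
H = rng.random((64, 512))                      # e.g. sigmoid activations in [0, 1]
penalty = mixed_norm_penalty(H, group_size=16) # would be weighted into the objective
print(penalty)

In the overlapping case the paper also studies, groups share units, so the reshape step would be replaced by explicit (overlapping) index sets; the within-group l_2 and across-group l_1 structure stays the same.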
