Related research:
- Autoencoders Learn Generative Linear Models
- Representation Learning and Recovery in the ReLU Model
- Why ReLU Units Sometimes Die: Analysis of Single-Unit Error Backpropagation in Neural Networks
- Sparse Coding and Autoencoders
- Learning Model-Based Sparsity via Projected Gradient Descent
- Information-Theoretic Lower Bounds for Compressive Sensing with Generative Models
- Binary autoencoder with random binary weights
On the convergence of group-sparse autoencoders
Recent approaches in the theoretical analysis of model-based deep learning architectures have studied the convergence of gradient descent in shallow ReLU networks that arise from generative models whose hidden layers are sparse. Motivated by the success of architectures that impose structured forms of sparsity, we introduce and study a group-sparse autoencoder that accounts for a variety of generative models, and utilizes a group-sparse ReLU activation function to force the non-zero units at a given layer to occur in blocks. For clustering models, inputs that result in the same group of active units belong to the same cluster. We proceed to analyze the gradient dynamics of a shallow instance of the proposed autoencoder, trained with data adhering to a group-sparse generative model. In this setting, we theoretically prove the convergence of the network parameters to a neighborhood of the generating matrix. We validate our model through numerical analysis and highlight the superior performance of networks with a group-sparse ReLU compared to networks that utilize traditional ReLUs, both in sparse coding and in parameter recovery tasks. We also provide real data experiments to corroborate the simulated results, and emphasize the clustering capabilities of structured sparsity models.
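To make the block-sparsity mechanism concrete, below is a minimal sketch of a forward pass through a shallow, tied-weight autoencoder with a group-sparse ReLU. The abstract does not spell out the exact activation rule, so the group-selection criterion used here (a block of hidden units survives only when its mean pre-activation exceeds the bias) and all names (`group_sparse_relu`, `shallow_group_sparse_autoencoder`, `W_true`, `group_size`, `bias`) are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def group_sparse_relu(z, group_size, bias=0.0):
    """Illustrative group-sparse ReLU (assumed form, not the paper's exact rule).

    Hidden units are partitioned into contiguous blocks of `group_size`.
    A block is kept only if its mean pre-activation exceeds `bias`; kept
    blocks pass through a shifted ReLU, all other blocks are zeroed, so
    the non-zero units occur in blocks."""
    z = np.asarray(z, dtype=float)
    groups = z.reshape(-1, group_size)                  # (n_groups, group_size)
    keep = groups.mean(axis=1, keepdims=True) > bias    # which blocks stay active
    return (np.maximum(groups - bias, 0.0) * keep).reshape(z.shape)

def shallow_group_sparse_autoencoder(x, W, group_size, bias=0.0):
    """One forward pass of a shallow, tied-weight autoencoder: encode with the
    group-sparse ReLU, decode linearly with the same matrix W."""
    h = group_sparse_relu(W.T @ x, group_size, bias)    # block-sparse code
    return W @ h, h                                     # reconstruction and code

# Toy usage: a sample generated from a group-sparse code under a random dictionary.
rng = np.random.default_rng(0)
W_true = rng.normal(size=(20, 12)) / np.sqrt(20)            # generating matrix
code = np.zeros(12)
code[4:8] = rng.uniform(0.5, 1.0, 4)                        # one active block of 4 units
x = W_true @ code
x_hat, h = shallow_group_sparse_autoencoder(x, W_true, group_size=4, bias=0.1)
print("reconstruction error:", np.linalg.norm(x - x_hat))
```

In a clustering reading of this sketch, two inputs whose codes `h` share the same surviving block would be assigned to the same cluster; training would then amount to running gradient descent on the reconstruction loss with respect to `W`, which is the regime the abstract's convergence analysis addresses.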