On the convergence of group-sparse autoencoders

02/13/2021
by Emmanouil Theodosis, et al.

Recent approaches in the theoretical analysis of model-based deep learning architectures have studied the convergence of gradient descent in shallow ReLU networks that arise from generative models with sparse hidden layers. Motivated by the success of architectures that impose structured forms of sparsity, we introduce and study a group-sparse autoencoder that accounts for a variety of generative models and utilizes a group-sparse ReLU activation function to force the non-zero units at a given layer to occur in blocks. For clustering models, inputs that activate the same group of units belong to the same cluster. We then analyze the gradient dynamics of a shallow instance of the proposed autoencoder, trained on data adhering to a group-sparse generative model, and prove that the network parameters converge to a neighborhood of the generating matrix. We validate our model through numerical analysis and show that networks with a group-sparse ReLU outperform networks with traditional ReLUs in both sparse coding and parameter recovery tasks. Real-data experiments corroborate the simulated results and emphasize the clustering capabilities of structured sparsity models.
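To make the abstract's central construct concrete, below is a minimal sketch of a group-sparse ReLU in Python. The norm-thresholding rule, the function name group_sparse_relu, and the threshold parameter are illustrative assumptions rather than the paper's exact activation; the sketch only shows how such an activation forces the non-zero units to occur in blocks.

    import numpy as np

    def group_sparse_relu(z, groups, threshold=0.5):
        # Assumed form: apply an elementwise ReLU, then zero out every
        # group whose activation norm is at or below `threshold`, so the
        # surviving non-zero units occur in blocks.
        out = np.maximum(z, 0.0)                   # elementwise ReLU
        for idx in groups:                         # idx: indices of one group
            if np.linalg.norm(out[idx]) <= threshold:
                out[idx] = 0.0                     # deactivate the whole block
        return out

    # Six units partitioned into three contiguous groups of two.
    z = np.array([0.9, 0.4, -0.2, 0.1, -0.5, -0.3])
    groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
    print(group_sparse_relu(z, groups))  # [0.9 0.4 0.  0.  0.  0. ]

Only the first block survives the group-level threshold; under the clustering reading in the abstract, the identity of the surviving block would index the input's cluster.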

Related research

06/02/2018 - Autoencoders Learn Generative Linear Models
Recent progress in learning theory has led to the emergence of provable ...

05/08/2023 - TaLU: A Hybrid Activation Function Combining Tanh and Rectified Linear Unit to Enhance Neural Networks
The application of the deep learning model in classification plays an im...

03/12/2018 - Representation Learning and Recovery in the ReLU Model
Rectified linear units, or ReLUs, have become the preferred activation f...

07/22/2021 - Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time
In this paper we prove that Local (S)GD (or FedAvg) can optimize two-lay...

08/12/2017 - Sparse Coding and Autoencoders
In "Dictionary Learning" one tries to recover incoherent matrices A^* ∈ R...

10/13/2021 - Clustering-Based Interpretation of Deep ReLU Network
Amongst others, the adoption of Rectified Linear Units (ReLUs) is regard...

02/18/2023 - Closed-Loop Transcription via Convolutional Sparse Coding
Autoencoding has achieved great empirical success as a framework for lea...
