Sparsely Activated Networks

07/12/2019
by Paschalis Bizopoulos, et al.

Previous literature on unsupervised learning has focused on designing structural priors and optimization functions with the aim of learning meaningful features, but without considering the description length of the representations. Here we present Sparsely Activated Networks (SANs), which decompose their input as a sum of sparsely recurring patterns of varying amplitude and, combined with a newly proposed metric φ, learn representations with minimal description length. SANs consist of kernels with shared weights that, during encoding, are convolved with the input and then passed through a ReLU and a sparse activation function. During decoding, the same weights are convolved with the sparse activation map, and the individual reconstructions from each weight are summed to reconstruct the input. We also propose a metric φ for model selection that favors models combining a high compression ratio with a low reconstruction error, and we justify its definition by exploring the hyperparameter space of SANs. We compare four sparse activation functions (Identity, Max-Activations, Max-Pool indices, Peaks) on a variety of datasets and show that SANs learn interpretable kernels that, combined with φ, minimize the description length of the representations.
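
To make the encode/decode pipeline from the abstract concrete, here is a minimal, illustrative PyTorch-style sketch. The names (SparselyActivatedNet, topk_sparsify, phi_like_score), the 1-D setting, and the scoring formula are assumptions made for illustration; they are not the authors' implementation, and the φ-like score is only a placeholder combining reconstruction error with sparsity, not the paper's definition of φ.

```python
import torch
import torch.nn.functional as F


def topk_sparsify(x, k):
    """Sparse activation: keep the k largest activations per kernel, zero the rest
    (in the spirit of the 'Max-Activations' option listed in the abstract)."""
    flat = x.flatten(2)                                    # (batch, kernels, length)
    idx = flat.topk(k, dim=-1).indices
    mask = torch.zeros_like(flat).scatter_(-1, idx, 1.0)
    return (flat * mask).view_as(x)


class SparselyActivatedNet(torch.nn.Module):
    def __init__(self, num_kernels=4, kernel_size=15, k=8):
        super().__init__()
        # One set of shared weights, reused for both encoding and decoding.
        self.weight = torch.nn.Parameter(torch.randn(num_kernels, 1, kernel_size) * 0.1)
        self.k = k

    def forward(self, x):                                   # x: (batch, 1, length)
        pad = self.weight.shape[-1] // 2
        # Encoding: convolve the input with the kernels, apply ReLU, then sparsify.
        a = F.relu(F.conv1d(x, self.weight, padding=pad))
        a = topk_sparsify(a, self.k)                         # sparse activation map
        # Decoding: convolve the sparse map with the same weights; the transposed
        # convolution sums the per-kernel reconstructions into one output.
        x_hat = F.conv_transpose1d(a, self.weight, padding=pad)
        return x_hat, a


def phi_like_score(x, x_hat, a):
    """Placeholder model-selection score: lower is better when both the
    reconstruction error and the fraction of active coefficients are small."""
    recon_err = F.l1_loss(x_hat, x)
    sparsity = (a != 0).float().mean()
    return recon_err + sparsity


# Example usage on random data.
x = torch.randn(2, 1, 128)
model = SparselyActivatedNet()
x_hat, a = model(x)
print(phi_like_score(x, x_hat, a))
```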
