Sparse, Geometric Autoencoder Models of V1

02/22/2023
by Jonathan Huml, et al.

The classical sparse coding model represents visual stimuli as a linear combination of a handful of learned basis functions that are Gabor-like when trained on natural image data. However, the Gabor-like filters learned by classical sparse coding far overpredict well-tuned simple cell receptive field (SCRF) profiles. A number of subsequent models have either discarded the sparse dictionary learning framework entirely or have yet to take advantage of the surge in unrolled, neural dictionary learning architectures. A key missing theme of these updates is a stronger notion of structured sparsity. We propose an autoencoder architecture whose latent representations are implicitly, locally organized for spectral clustering, which begets artificial neurons better matched to observed primate data. The weighted-ℓ_1 (WL) constraint in the autoencoder objective function maintains core ideas of the sparse coding framework, yet also offers a promising path to describe the differentiation of receptive fields in terms of a discriminative hierarchy in future work.
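The weighted-ℓ1 constrained sparse coding objective at the heart of this framework can be illustrated with a minimal sketch. The following is an ISTA-style encoder in NumPy, not the paper's unrolled autoencoder architecture; the function names, dictionary size, and per-coefficient weight vector `w` are illustrative assumptions. Each iteration takes a gradient step on the reconstruction error and then applies the proximal operator of the weighted-ℓ1 penalty, a per-coefficient soft threshold.

```python
import numpy as np

def weighted_soft_threshold(z, w, lam):
    # Proximal operator of the weighted-l1 penalty lam * sum_i w_i |z_i|:
    # shrinks each coefficient toward zero by its own threshold lam * w_i.
    return np.sign(z) * np.maximum(np.abs(z) - lam * w, 0.0)

def ista_weighted_l1(x, D, w, lam=0.1, n_iter=100):
    """Encode stimulus x against dictionary D by (approximately) solving
    min_z 0.5 * ||x - D z||^2 + lam * sum_i w_i |z_i|  via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)           # gradient of the reconstruction term
        z = weighted_soft_threshold(z - grad / L, w, lam / L)
    return z

# Illustrative usage: recover a sparse code for a synthetic stimulus.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
z_true = np.zeros(32)
z_true[[3, 7, 20]] = [1.0, -2.0, 0.5]      # handful of active basis functions
x = D @ z_true
z_hat = ista_weighted_l1(x, D, np.ones(32), lam=0.01, n_iter=200)
```

Unrolling a fixed number of these iterations as network layers, with `D` and `w` learned, is the general recipe behind neural dictionary learning architectures; a uniform `w` recovers the classical ℓ1 penalty, while structured weights impose the locally organized sparsity the abstract describes.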

