SC-VAE: Sparse Coding-based Variational Autoencoder

03/29/2023
by Pan Xiao, et al.

Learning rich data representations from unlabeled data is a key challenge for applying deep learning algorithms to downstream supervised tasks. Several variants of variational autoencoders (VAEs) have been proposed to learn compact data representations by encoding high-dimensional data in a lower-dimensional space. Two main classes of VAE methods can be distinguished by the characteristics of the meta-priors enforced during representation learning. The first class derives a continuous encoding by assuming a static prior distribution in the latent space. The second class instead learns a discrete latent representation using vector quantization (VQ) together with a codebook. Both classes, however, face challenges that can lead to suboptimal image reconstruction: the first suffers from posterior collapse, the second from codebook collapse. To address these challenges, we introduce a new VAE variant, termed SC-VAE (sparse coding-based VAE), which integrates sparse coding within the variational autoencoder framework. Instead of learning a continuous or discrete latent representation, the proposed method learns a sparse data representation consisting of a linear combination of a small number of learned atoms. The sparse coding problem is solved with a learnable version of the iterative shrinkage-thresholding algorithm (ISTA). Experiments on two image datasets demonstrate that our model achieves improved image reconstruction results compared to state-of-the-art methods. Moreover, the learned sparse code vectors allow us to perform downstream tasks such as coarse image segmentation by clustering image patches.
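The core ingredient named in the abstract is a learnable unrolling of ISTA (often called LISTA) that maps an input to a sparse code, which is then reconstructed as a linear combination of learned dictionary atoms. Below is a minimal PyTorch sketch of that idea, under stated assumptions: the module name, the number of unrolled iterations, the threshold initialization, and the dictionary variable D are illustrative choices, not the architecture described in the paper.

```python
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    # Elementwise soft-thresholding, the proximal operator of the L1 penalty.
    return torch.sign(x) * torch.clamp(torch.abs(x) - theta, min=0.0)

class LISTA(nn.Module):
    """Minimal learned-ISTA sketch: unrolls a few iterations of
    z <- soft_threshold(We(x) + S(z), theta) with learnable We, S, and theta."""
    def __init__(self, input_dim, code_dim, num_iters=3, init_theta=0.1):
        super().__init__()
        self.We = nn.Linear(input_dim, code_dim, bias=False)   # input-to-code matrix
        self.S = nn.Linear(code_dim, code_dim, bias=False)     # mutual-inhibition matrix
        self.theta = nn.Parameter(torch.full((code_dim,), init_theta))  # learnable thresholds
        self.num_iters = num_iters

    def forward(self, x):
        b = self.We(x)                        # pre-activation driven by the input
        z = soft_threshold(b, self.theta)     # first sparse estimate
        for _ in range(self.num_iters - 1):
            z = soft_threshold(b + self.S(z), self.theta)
        return z                              # sparse code vector

# Hypothetical usage: encode 64-dim feature vectors into 256-dim sparse codes,
# then reconstruct them as a linear combination of learned dictionary atoms.
x = torch.randn(8, 64)
codes = LISTA(input_dim=64, code_dim=256)(x)
D = nn.Linear(256, 64, bias=False)            # columns of D.weight act as dictionary atoms
x_hat = D(codes)
```

In the same spirit, the coarse segmentation mentioned in the abstract could be approximated by clustering patch-level sparse codes (e.g., with k-means) and assigning each patch the label of its cluster; the exact procedure used in the paper is described in the full text.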

Related research

05/20/2016  Stick-Breaking Variational Autoencoders
We extend Stochastic Gradient Variational Bayes to perform posterior inf...

11/08/2016  Variational Lossy Autoencoder
Representation learning seeks to expose certain aspects of observed data...

05/07/2022  Variational Sparse Coding with Learned Thresholding
Sparse coding strategies have been lauded for their parsimonious represe...

11/25/2017  Learning Less-Overlapping Representations
In representation learning (RL), how to make the learned representations...

03/30/2020  AriEL: volume coding for sentence generation
Mapping sequences of discrete data to a point in a continuous space make...

01/23/2021  Improved Training of Sparse Coding Variational Autoencoder via Weight Normalization
Learning a generative model of visual information with sparse and compos...

07/20/2020  It's LeVAsa not LevioSA! Latent Encodings for Valence-Arousal Structure Alignment
In recent years, great strides have been made in the field of affective ...
