Hierarchical Sparse Variational Autoencoder for Text Encoding

09/25/2020
by   Victor Prokhorov, et al.

In this paper we focus on unsupervised representation learning and propose a novel framework, the Hierarchical Sparse Variational Autoencoder (HSVAE), which imposes sparsity on sentence representations via direct optimisation of the Evidence Lower Bound (ELBO). Our experimental results illustrate that HSVAE is flexible and adapts to the underlying characteristics of the corpus, which is reflected in the level of sparsity and its distributional patterns.
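To make the objective concrete, the ELBO for a VAE with a diagonal-Gaussian posterior and a standard-normal prior decomposes into a reconstruction term minus a KL term with a closed form. The sketch below is a minimal, generic illustration of that decomposition (not the HSVAE model itself, whose sparsity-inducing posterior is defined in the paper); the function names are illustrative.

```python
import math

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions.

    Closed form per dimension: 0.5 * (exp(logvar) + mu^2 - 1 - logvar).
    """
    return sum(0.5 * (math.exp(lv) + m * m - 1.0 - lv)
               for m, lv in zip(mu, logvar))

def elbo(log_likelihood, mu, logvar):
    """ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)).

    `log_likelihood` is a Monte Carlo estimate of the reconstruction term.
    """
    return log_likelihood - gaussian_kl(mu, logvar)

# When the posterior equals the prior (mu = 0, logvar = 0) the KL term
# vanishes and the ELBO reduces to the reconstruction log-likelihood.
print(elbo(-1.0, [0.0, 0.0], [0.0, 0.0]))
```

Maximising this bound directly, as HSVAE does, avoids ad-hoc sparsity penalties bolted onto the loss.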

Related research

- A lower bound for the ELBO of the Bernoulli Variational Autoencoder (03/26/2020): We consider a variational autoencoder (VAE) for binary data. Our main in...
- InfoCatVAE: Representation Learning with Categorical Variational Autoencoders (06/20/2018): This paper describes InfoCatVAE, an extension of the variational autoenc...
- M^2VAE - Derivation of a Multi-Modal Variational Autoencoder Objective from the Marginal Joint Log-Likelihood (03/18/2019): This work gives an in-depth derivation of the trainable evidence lower b...
- FusionVAE: A Deep Hierarchical Variational Autoencoder for RGB Image Fusion (09/22/2022): Sensor fusion can significantly improve the performance of many computer...
- SGVAE: Sequential Graph Variational Autoencoder (12/17/2019): Generative models of graphs are well-known, but many existing models are...
- Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder (06/15/2020): Generating inferential texts about an event in different perspectives re...
- eVAE: Evolutionary Variational Autoencoder (01/01/2023): The surrogate loss of variational autoencoders (VAEs) poses various chal...
