Squeezing bottlenecks: exploring the limits of autoencoder semantic representation capabilities

02/13/2014
by Parth Gupta, et al.

We present a comprehensive study on the use of autoencoders for modelling text data. Unlike previous studies, we focus on the following issues: i) we explore the suitability of two different models, bDA and rsDA, for constructing deep autoencoders for text data at the sentence level; ii) we propose and evaluate two novel metrics for better assessing the text-reconstruction capabilities of autoencoders; and iii) we propose an automatic method for finding the critical bottleneck dimensionality of text representations, below which structural information is lost.
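The central object here is a deep autoencoder whose narrow bottleneck layer compresses a sentence representation and whose decoder tries to reconstruct the input from it. The sketch below is a minimal, generic illustration of that setup in PyTorch; the layer sizes, the bag-of-words input, and the loss function are illustrative assumptions and do not reproduce the paper's bDA or rsDA models.

```python
# Minimal sketch of a bottlenecked deep autoencoder for sentence-level
# bag-of-words vectors. All sizes are illustrative assumptions, not the
# paper's bDA/rsDA architectures.
import torch
import torch.nn as nn

class BottleneckAutoencoder(nn.Module):
    def __init__(self, vocab_size=2000, bottleneck_dim=128):
        super().__init__()
        # Encoder squeezes the sentence vector down to the bottleneck.
        self.encoder = nn.Sequential(
            nn.Linear(vocab_size, 512), nn.ReLU(),
            nn.Linear(512, bottleneck_dim),
        )
        # Decoder reconstructs the original representation from it.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 512), nn.ReLU(),
            nn.Linear(512, vocab_size),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Toy usage: random binary bag-of-words vectors stand in for sentences.
model = BottleneckAutoencoder()
x = torch.bernoulli(torch.full((32, 2000), 0.05))
loss = nn.functional.binary_cross_entropy_with_logits(model(x), x)
loss.backward()
print(f"reconstruction loss: {loss.item():.4f}")
```

Point iii) of the abstract could then be approximated by training such a model at several values of bottleneck_dim and looking for the dimensionality below which reconstruction quality degrades sharply, though the paper's actual detection method may differ.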

Related research:

08/31/2021 · Sentence Bottleneck Autoencoders from Transformer Language Models
Representation learning for text via pretraining a language model on a l...

05/04/2017 · KATE: K-Competitive Autoencoder for Text
Autoencoders have been successful in learning meaningful representations...

03/20/2023 · Training Invertible Neural Networks as Autoencoders
Autoencoders are able to learn useful data representations in an unsuper...

10/13/2021 · Bag-of-Vectors Autoencoders for Unsupervised Conditional Text Generation
Text autoencoders are often used for unsupervised conditional text gener...

03/12/2020 · Autoencoders
An autoencoder is a specific type of neural network, which is mainly de...

08/09/2016 · Towards cross-lingual distributed representations without parallel text trained with adversarial autoencoders
Current approaches to learning vector representations of text that are c...

05/08/2020 · A Showcase of the Use of Autoencoders in Feature Learning Applications
Autoencoders are techniques for data representation learning based on ar...
