On the Transformation of Latent Space in Autoencoders

01/24/2019
by Jaehoon Cha et al.

Noting the importance of the latent variables in inference and learning, we propose a novel framework for autoencoders based on a homeomorphic transformation of the latent variables, which can reduce the distance between vectors in the transformed space while preserving the topological properties of the original space. We investigate the effect of the transformation in both learning generative models and denoising corrupted data. The results of our experiments show that the proposed model can work as both a generative model and a denoising model, with improved performance due to the transformation compared to conventional variational and denoising autoencoders.
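
The abstract does not say which homeomorphism the authors instantiate, so the following is only a minimal sketch of the idea in PyTorch, with tanh standing in as the transformation (the module names, layer sizes, and the choice of tanh are illustrative assumptions, not the paper's architecture). tanh maps R^d continuously and bijectively onto (-1, 1)^d with continuous inverse atanh, so it preserves topology; and since it is 1-Lipschitz, it reduces distances between latent vectors, the two properties named above.

```python
# Sketch only: an autoencoder that decodes from h(z) instead of z,
# where h is a distance-reducing homeomorphism (here, tanh).
import torch
import torch.nn as nn

class HomeomorphicAE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim), nn.Sigmoid(),
        )

    def transform(self, z):
        # h: R^d -> (-1, 1)^d. Bijective and continuous, and
        # |tanh(a) - tanh(b)| <= |a - b|, so distances shrink.
        return torch.tanh(z)

    def inverse_transform(self, t):
        # Continuous inverse h^{-1} = atanh, confirming h is a
        # homeomorphism (clamp guards against atanh(+/-1) = inf).
        return torch.atanh(t.clamp(-1 + 1e-6, 1 - 1e-6))

    def forward(self, x):
        z = self.encoder(x)
        t = self.transform(z)  # decode from the transformed latent space
        return self.decoder(t), z, t

# Usage: reconstruct a batch of flattened 28x28 images.
model = HomeomorphicAE()
x = torch.rand(8, 784)
x_hat, z, t = model(x)
loss = nn.functional.mse_loss(x_hat, x)
loss.backward()
```

Any other invertible contraction would illustrate the same point; tanh is used here only because its homeomorphism and distance-reduction properties are easy to verify.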


