Geometric Autoencoders – What You See is What You Decode

06/30/2023
by Philipp Nazari, et al.

Visualization is a crucial step in exploratory data analysis. One possible approach is to train an autoencoder with a low-dimensional latent space. Large network depth and width can help unfold the data. However, such expressive networks can achieve low reconstruction error even when the latent representation is distorted. To avoid such misleading visualizations, we propose, first, a differential-geometric perspective on the decoder, leading to insightful diagnostics for an embedding's distortion, and, second, a new regularizer mitigating such distortion. Our “Geometric Autoencoder” avoids stretching the embedding spuriously, so that the visualization captures the data structure more faithfully. It also flags areas where low distortion could not be achieved, thus guarding against misinterpretation.
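The distortion diagnostic sketched in the abstract can be illustrated with the generalized Jacobian determinant sqrt(det(JᵀJ)) of the decoder, which measures how much the decoder locally stretches or compresses area around a latent point. The snippet below is a minimal sketch, not the paper's implementation: the toy `decoder` and the finite-difference Jacobian are assumptions made for illustration.

```python
import numpy as np

def decoder(z):
    # Toy nonlinear decoder: 2-D latent -> 3-D output (hypothetical example,
    # standing in for a trained autoencoder's decoder network).
    x, y = z
    return np.array([np.sin(x), np.cos(y), x * y])

def local_distortion(f, z, eps=1e-6):
    """Approximate sqrt(det(J^T J)) of f at latent point z.

    The Jacobian J is estimated column-by-column with central differences.
    Comparing this quantity across the embedding reveals where the decoder
    stretches the latent space non-uniformly, i.e. where the 2-D view
    distorts the data.
    """
    z = np.asarray(z, dtype=float)
    cols = []
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        cols.append((f(z + dz) - f(z - dz)) / (2 * eps))
    J = np.stack(cols, axis=1)  # shape: (output_dim, latent_dim)
    return float(np.sqrt(np.linalg.det(J.T @ J)))

# A value near a common constant across the embedding indicates little
# distortion; strong variation flags spuriously stretched regions.
print(local_distortion(decoder, [0.5, 0.5]))
```

For a distance-preserving map the value is 1 everywhere; for instance, the identity map `lambda z: np.asarray(z)` yields 1 at every latent point, which makes a handy sanity check.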


