Representation Learning via Manifold Flattening and Reconstruction

05/02/2023
by Michael Psenka, et al.

This work proposes an algorithm for explicitly constructing a pair of neural networks that linearize and reconstruct an embedded submanifold from finite samples of that manifold. The resulting neural networks, called flattening networks (FlatNet), are theoretically interpretable, computationally feasible at scale, and generalize well to test data, a balance not typically found in manifold-based learning methods. We present empirical results and comparisons to other models on synthetic high-dimensional manifold data and 2D image data. Our code is publicly available.
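To make the flattening/reconstruction idea concrete, here is a minimal toy sketch (not the paper's FlatNet construction): for a 1-D submanifold embedded in R^3 (a helix), we write down an explicit flattening map f that sends manifold points to a flat Euclidean coordinate and a reconstruction map g that maps coordinates back onto the manifold, standing in for the two learned networks. All function names here are illustrative.

```python
import numpy as np

def embed(t):
    """Parametrize the helix manifold: R -> R^3."""
    return np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=-1)

def flatten_map(x):
    """Flattening map f: manifold points in R^3 -> flat 1-D coordinate."""
    return x[..., 2] / 0.1  # recover t from the linear third coordinate

def reconstruct(z):
    """Reconstruction map g: flat coordinate -> point on the manifold."""
    return embed(z)

# Finite samples from the manifold, as in the paper's setting.
t = np.linspace(0.0, 4.0 * np.pi, 200)
X = embed(t)

# The round trip g(f(x)) should land back on the manifold.
X_hat = reconstruct(flatten_map(X))
err = np.max(np.linalg.norm(X - X_hat, axis=-1))
print(f"max round-trip error: {err:.2e}")
```

In FlatNet, f and g are instead neural networks built explicitly from the samples; the toy above only illustrates the roles the two maps play and what "flattening" a curved manifold into Euclidean coordinates means.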


