Deep Invertible Approximation of Topologically Rich Maps between Manifolds

10/02/2022 ∙ by Michael Puthawala, et al.
How can we design neural networks that allow for stable universal approximation of maps between topologically interesting manifolds? The answer is with a coordinate projection. Neural networks based on topological data analysis (TDA) use tools such as persistent homology to learn topological signatures of data and stabilize training but may not be universal approximators or have stable inverses. Other architectures universally approximate data distributions on submanifolds but only when the latter are given by a single chart, making them unable to learn maps that change topology. By exploiting the topological parallels between locally bilipschitz maps, covering spaces, and local homeomorphisms, and by using universal approximation arguments from machine learning, we find that a novel network of the form 𝒯 ∘ p ∘ ℰ, where ℰ is an injective network, p a fixed coordinate projection, and 𝒯 a bijective network, is a universal approximator of local diffeomorphisms between compact smooth submanifolds embedded in ℝ^n. We emphasize the case when the target map changes topology. Further, we find that by constraining the projection p, multivalued inversions of our networks can be computed without sacrificing universality. As an application, we show that learning a group invariant function with unknown group action naturally reduces to the question of learning local diffeomorphisms for finite groups. Our theory permits us to recover orbits of the group action. We also outline possible extensions of our architecture to address molecular imaging of molecules with symmetries. Finally, our analysis informs the choice of topologically expressive starting spaces in generative problems.
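The composition 𝒯 ∘ p ∘ ℰ described in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the injective network ℰ is stood in for by zero-padding followed by an invertible linear map, the fixed coordinate projection p simply drops trailing coordinates, and the bijective network 𝒯 is stood in for by an invertible linear map. All dimensions and maps here are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: inputs in R^2, lifted to R^4, output in R^3.
d_in, d_lift, d_out = 2, 4, 3

# E: an injective map sketched as zero-padding composed with a
# (almost surely full-rank) linear map on R^4.
A = rng.standard_normal((d_lift, d_lift))

def E(x):
    padded = np.concatenate([x, np.zeros(d_lift - d_in)])
    return A @ padded

# p: the fixed coordinate projection R^4 -> R^3 (drop the last coordinate).
def p(z):
    return z[:d_out]

# T: a bijective map on R^3, sketched as an invertible linear map.
B = rng.standard_normal((d_out, d_out))

def T(z):
    return B @ z

def model(x):
    """The composite map T(p(E(x))) from R^2 to R^3."""
    return T(p(E(x)))

x = rng.standard_normal(d_in)
print(model(x).shape)  # (3,)
```

In the paper, ℰ and 𝒯 are trained networks rather than fixed linear maps, and the constraint placed on p is what makes multivalued inversion tractable; this sketch only shows how the three pieces compose.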

