Learning Invariances with Generalised Input-Convex Neural Networks

04/14/2022
by Vitali Nesterov, et al.

Considering smooth mappings from input vectors to continuous targets, our goal is to characterise subspaces of the input domain that are invariant under such mappings. Thus, we want to characterise manifolds implicitly defined by level sets. Specifically, this characterisation should be of a global parametric form, which is especially useful for informed data exploration tasks such as building grid-based approximations, sampling points along the level curves, or finding trajectories on the manifold. However, global parameterisations can only exist if the level sets are connected. For this purpose, we introduce a novel and flexible class of neural networks that generalise input-convex networks. These networks represent functions that are guaranteed to have connected level sets forming smooth manifolds on the input space. We further show that global parameterisations of these level sets can always be found efficiently. Lastly, we demonstrate that our novel technique for characterising invariances is a powerful generative data exploration tool in real-world applications, such as computational chemistry.
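To make the starting point concrete: the networks in the paper generalise input-convex neural networks (ICNNs), in which hidden-to-hidden weights are constrained non-negative and activations are convex and non-decreasing, so the output is convex in the input and every sublevel set is connected. The following is a minimal numpy sketch of a standard two-hidden-layer ICNN, plus a simple bisection routine that finds a point on a given level set by shooting a ray from a base point, in the spirit of the "global parameterisation" the abstract describes. The layer sizes, the softplus activation, and all function names are illustrative assumptions, not the architecture or algorithm proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(z):
    # Convex, non-decreasing activation; logaddexp is numerically stable.
    return np.logaddexp(0.0, z)

def icnn_init(d, h=16):
    """Random parameters for a two-hidden-layer input-convex network.

    Hidden-to-hidden weights (Wz1, wz2) are forced non-negative via abs;
    together with convex non-decreasing activations this makes f convex in x.
    Sizes and scales here are arbitrary illustrative choices.
    """
    return {
        "Wx0": rng.normal(size=(h, d)), "b0": np.zeros(h),
        "Wz1": np.abs(rng.normal(size=(h, h))),   # >= 0 preserves convexity
        "Wx1": rng.normal(size=(h, d)), "b1": np.zeros(h),
        "wz2": np.abs(rng.normal(size=h)),        # >= 0 preserves convexity
        "wx2": rng.normal(size=d), "b2": 0.0,
    }

def icnn(p, x):
    """Scalar output f(x), convex in x by construction."""
    z1 = softplus(p["Wx0"] @ x + p["b0"])
    z2 = softplus(p["Wz1"] @ z1 + p["Wx1"] @ x + p["b1"])
    return float(p["wz2"] @ z2 + p["wx2"] @ x + p["b2"])

def level_point(p, x0, u, c, iters=60):
    """Find t >= 0 with f(x0 + t*u) = c by bisection, given f(x0) < c.

    First expands t until the level c is bracketed; returns None if f
    never reaches c along this ray (possible for a convex but
    non-coercive f, which may decrease forever in some directions).
    """
    g = lambda t: icnn(p, x0 + t * u)
    assert g(0.0) < c
    t_hi = 1.0
    for _ in range(60):
        if g(t_hi) >= c:
            break
        t_hi *= 2.0
    else:
        return None
    t_lo = 0.0
    for _ in range(iters):
        t_mid = 0.5 * (t_lo + t_hi)
        if g(t_mid) < c:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)

# Demo: midpoint convexity check, then a point on the level set f = f(x0) + 1.
d = 3
p = icnn_init(d)
xa, xb = rng.normal(size=d), rng.normal(size=d)

x0 = np.zeros(d)
c = icnn(p, x0) + 1.0
u = np.array([1.0, 0.0, 0.0])
t = level_point(p, x0, u, c)
if t is None:
    # A non-constant convex function must grow along u or -u.
    u = -u
    t = level_point(p, x0, u, c)
```

Repeating the ray search over directions `u` on the unit sphere yields the kind of explicit parameterisation of a level set that the abstract refers to; note that this simple scheme relies on convexity, whereas the paper's generalised class is the interesting case beyond it.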

