Learning Weighted Submanifolds with Variational Autoencoders and Riemannian Variational Autoencoders

11/19/2019
by Nina Miolane, et al.

Manifold-valued data naturally arises in medical imaging. In cognitive neuroscience, for instance, brain connectomes base the analysis of coactivation patterns between brain regions on the correlations of their functional Magnetic Resonance Imaging (fMRI) time series - objects constrained by construction to belong to the manifold of symmetric positive definite matrices. One challenge that naturally arises is finding a lower-dimensional subspace for representing such manifold-valued data. Traditional techniques, like principal component analysis, are ill-adapted to non-Euclidean spaces and may fail to recover a lower-dimensional representation of the data, thus potentially, and misleadingly, pointing to the absence of any lower-dimensional structure. Specifically, these techniques are restricted in that: (i) they do not leverage the assumption that the connectomes belong to a pre-specified manifold, therefore discarding information; (ii) they can only fit a linear subspace to the data. In this paper, we are interested in variants of these techniques that learn potentially highly curved submanifolds of manifold-valued data. Motivated by the brain connectomes example, we investigate a latent variable generative model, which has the added benefit of providing uncertainty estimates - a crucial quantity in the medical applications we consider. While latent variable models have been proposed to learn linear and nonlinear subspaces for Euclidean data, and geodesic subspaces for manifold data, no intrinsic latent variable model exists to learn nongeodesic subspaces for manifold data. This paper fills that gap and formulates a Riemannian variational autoencoder with an intrinsic generative model of manifold-valued data. We evaluate its performance on synthetic and real datasets by introducing the formalism of weighted Riemannian submanifolds.
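The abstract's central modeling idea, an intrinsic generative model that decodes a Euclidean latent code into a point on the manifold of symmetric positive definite (SPD) matrices, can be sketched as follows. This is a minimal illustration, not the authors' Riemannian variational autoencoder: the linear "decoder" W, the latent dimension, the identity base point, and the tangent-space noise scale are all illustrative assumptions, and only the general recipe (latent code, tangent vector, Riemannian exponential map) reflects the paper's setting.

```python
# Sketch of an intrinsic generative model for SPD-valued data (illustrative only):
# sample a latent code, decode it to a tangent vector (a symmetric matrix),
# add tangent-space noise, and push the result onto the SPD manifold with the
# Riemannian exponential map of the affine-invariant metric.

import numpy as np
from scipy.linalg import expm, sqrtm

rng = np.random.default_rng(0)

N = 4           # size of the SPD matrices (e.g. number of brain regions) - assumed
LATENT_DIM = 2  # latent dimension - assumed

# Hypothetical linear decoder: latent code -> upper-triangular coefficients.
W = rng.standard_normal((LATENT_DIM, N * (N + 1) // 2)) * 0.3

def decode_to_tangent(z):
    """Map a latent code to a symmetric N x N matrix (a tangent vector)."""
    coeffs = z @ W
    V = np.zeros((N, N))
    V[np.triu_indices(N)] = coeffs
    return (V + V.T) / 2.0

def exp_spd(base, tangent):
    """Exponential map of the affine-invariant metric on SPD matrices:
    Exp_P(V) = P^{1/2} expm(P^{-1/2} V P^{-1/2}) P^{1/2}."""
    sqrt_p = np.real(sqrtm(base))  # sqrtm may return a complex dtype with ~0 imaginary part
    inv_sqrt_p = np.linalg.inv(sqrt_p)
    return sqrt_p @ expm(inv_sqrt_p @ tangent @ inv_sqrt_p) @ sqrt_p

def generate(base_point, noise_scale=0.05):
    """Sample one manifold-valued observation from the sketched generative model."""
    z = rng.standard_normal(LATENT_DIM)        # latent Gaussian prior
    V = decode_to_tangent(z)                   # decoded tangent vector
    noise = rng.standard_normal((N, N)) * noise_scale
    noise = (noise + noise.T) / 2.0            # symmetric tangent-space noise
    return exp_spd(base_point, V + noise)

base = np.eye(N)  # base point on the manifold (identity matrix, assumed)
X = generate(base)
print("generated matrix is SPD:", np.all(np.linalg.eigvalsh(X) > 0))
```

Decoding into the tangent space and applying the exponential map guarantees that every generated sample is symmetric positive definite, which is the intrinsic constraint a Euclidean VAE decoding directly into matrix entries would not enforce.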

Related research:

05/23/2018 - Probabilistic Riemannian submanifold learning with wrapped Gaussian process latent variable models
Latent variable models learn a stochastic embedding from a low-dimension...

01/26/2017 - Riemannian-geometry-based modeling and clustering of network-wide non-stationary time series: The brain-network case
This paper advocates Riemannian multi-manifold modeling in the context o...

03/30/2020 - ManifoldNorm: Extending normalizations on Riemannian Manifolds
Many measurements in computer vision and machine learning manifest as no...

09/11/2018 - ManifoldNet: A Deep Network Framework for Manifold-valued Data
Deep neural networks have become the main work horse for many tasks invo...

12/10/2022 - Graph-Regularized Manifold-Aware Conditional Wasserstein GAN for Brain Functional Connectivity Generation
Common measures of brain functional connectivity (FC) including covarian...

06/12/2020 - Manifold GPLVMs for discovering non-Euclidean latent structure in neural data
A common problem in neuroscience is to elucidate the collective neural r...

10/05/2019 - Dilated Convolutional Neural Networks for Sequential Manifold-valued Data
Efforts are underway to study ways via which the power of deep neural ne...
