Symmetry-Aware Autoencoders: s-PCA and s-nlPCA

11/04/2021
by Simon Kneer, et al.

Nonlinear principal component analysis (nlPCA) via autoencoders has attracted attention in the dynamical systems community due to its larger compression rate compared to linear principal component analysis (PCA). These model reduction methods experience an increase in the dimensionality of the latent space when applied to datasets containing globally invariant samples, i.e., samples related by symmetries. In this study, we introduce a novel machine learning embedding in the autoencoder, which uses spatial transformer networks and Siamese networks to account for continuous and discrete symmetries, respectively. The spatial transformer network discovers the optimal shift for the continuous translation or rotation so that invariant samples are aligned in the periodic directions. Similarly, the Siamese networks collapse samples that are invariant under discrete shifts and reflections. Thus, the proposed symmetry-aware autoencoder is invariant to predetermined input transformations dictating the dynamics of the underlying physical system. This embedding can be employed with both linear and nonlinear reduction methods, which we term symmetry-aware PCA (s-PCA) and symmetry-aware nlPCA (s-nlPCA). We apply the proposed framework to three fluid flow problems: Burgers' equation, the simulation of the flow through a step diffuser, and the Kolmogorov flow, to showcase the capabilities for cases exhibiting only continuous symmetries, only discrete symmetries, or a combination of both.
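The core idea, aligning continuously shifted samples and collapsing discrete symmetry copies before reduction, can be illustrated with a simplified, non-learned analogue. The sketch below (an assumption for illustration, not the paper's trained networks) aligns a periodic 1-D sample to a reference template via the circular shift that maximizes FFT-based cross-correlation, standing in for the shift the spatial transformer network would learn, and collapses a reflection symmetry by keeping the copy closest to the template, the role played by the Siamese branches:

```python
import numpy as np

def align_continuous_shift(u, template):
    """Align a periodic 1-D sample to a template by the circular shift
    that maximizes cross-correlation, computed via the FFT.
    A fixed-template analogue of the learned spatial transformer shift."""
    corr = np.fft.ifft(np.fft.fft(u) * np.conj(np.fft.fft(template))).real
    shift = int(np.argmax(corr))
    return np.roll(u, -shift)

def collapse_reflection(u, template):
    """Collapse the discrete reflection symmetry u(x) -> u(-x) by keeping
    whichever copy lies closer to the template, mimicking the selection
    performed by the Siamese branches."""
    candidates = [u, u[::-1]]
    errs = [np.linalg.norm(c - template) for c in candidates]
    return candidates[int(np.argmin(errs))]
```

With such a preprocessing step, all symmetry-related copies of a state map to a single representative, so the downstream PCA or autoencoder no longer spends latent dimensions encoding the symmetry group itself.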

