
Symmetry-Aware Autoencoders: s-PCA and s-nlPCA

by Simon Kneer, et al.

Nonlinear principal component analysis (nlPCA) via autoencoders has attracted attention in the dynamical systems community because it achieves higher compression rates than linear principal component analysis (PCA). However, both model reduction methods suffer an increase in the dimensionality of the latent space when applied to datasets whose samples are globally invariant under symmetries of the underlying system. In this study, we introduce a novel machine learning embedding into the autoencoder, which uses spatial transformer networks and Siamese networks to account for continuous and discrete symmetries, respectively. The spatial transformer network discovers the optimal shift for a continuous translation or rotation, so that invariant samples are aligned along the periodic directions. Similarly, the Siamese networks collapse samples that are invariant under discrete shifts and reflections. The proposed symmetry-aware autoencoder is thus invariant to predetermined input transformations dictated by the dynamics of the underlying physical system. This embedding can be employed with both linear and nonlinear reduction methods, which we term symmetry-aware PCA (s-PCA) and symmetry-aware nlPCA (s-nlPCA). We apply the proposed framework to three fluid flow problems: Burgers' equation, the flow through a step diffuser, and the Kolmogorov flow, showcasing its capabilities for cases exhibiting only continuous symmetries, only discrete symmetries, or a combination of both.
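The core idea of the symmetry-aware preprocessing can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it replaces the learned spatial transformer with an FFT cross-correlation search for the best circular shift, and replaces the Siamese branches with a simple lexicographic canonicalization over the reflection orbit. The function names (`align_continuous`, `collapse_reflection`) are hypothetical.

```python
import numpy as np

def align_continuous(u, template):
    """Align sample u to a reference template under circular translation.

    Stand-in for the spatial transformer network: the optimal shift is
    found by circular cross-correlation via the FFT, then undone so that
    translated copies of the same state map to one aligned sample.
    """
    corr = np.fft.ifft(np.fft.fft(u) * np.conj(np.fft.fft(template))).real
    shift = int(np.argmax(corr))          # shift that best matches template
    return np.roll(u, -shift)             # undo the translation

def collapse_reflection(u):
    """Map u and its mirror image to one canonical representative.

    Stand-in for the Siamese branch handling the discrete reflection
    symmetry: pick the lexicographically smaller member of the orbit.
    """
    v = u[::-1]
    return u if tuple(u) <= tuple(v) else v
```

Usage: any translated copy of a snapshot is mapped back onto the reference before encoding, and a sample and its reflection share one latent representation, so the autoencoder never has to spend latent dimensions on the symmetry group itself.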
