Natural representation of composite data with replicated autoencoders

09/29/2019
by Matteo Negri, et al.

Generative processes in biology and other fields often produce data that can be regarded as resulting from a composition of basic features. Here we present an unsupervised method based on autoencoders for inferring these basic features of data. The main novelty of our approach is that training is based on optimizing the "local entropy" rather than the standard loss, which yields more robust inference and considerably better performance on this type of data. Algorithmically, this is realized by training an interacting system of replicated autoencoders. We apply this method to synthetic and protein sequence data, and show that it is able to infer a hidden representation that correlates well with the underlying generative process, without requiring any prior knowledge.
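To make the "interacting system of replicated autoencoders" idea concrete, here is a minimal Python/PyTorch sketch, not the authors' exact training scheme: several identical replica autoencoders are trained jointly, with an elastic penalty pulling each replica's weights toward the replica mean, a common way to approximate local-entropy optimization and bias the search toward wide minima. The network sizes, the coupling strength gamma, and the helper names make_autoencoder and train_replicated are illustrative assumptions, not names from the paper.

# Hedged sketch (assumptions, not the paper's exact algorithm): train several
# replica autoencoders whose weights are elastically coupled to the replica
# mean, which biases the search toward wide, high local-entropy minima.
import torch
import torch.nn as nn

def make_autoencoder(n_in, n_hidden=16):
    # A deliberately small one-hidden-layer autoencoder; sizes are illustrative.
    return nn.Sequential(
        nn.Linear(n_in, n_hidden), nn.ReLU(),
        nn.Linear(n_hidden, n_in),
    )

def train_replicated(data, n_replicas=5, gamma=1e-2, lr=1e-3, epochs=200):
    replicas = [make_autoencoder(data.shape[1]) for _ in range(n_replicas)]
    opt = torch.optim.Adam([p for ae in replicas for p in ae.parameters()], lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        # Each replica pays its own reconstruction cost ...
        loss = sum(mse(ae(data), data) for ae in replicas)
        # ... plus an attraction toward the (detached) replica mean, so that
        # only weight configurations surrounded by other good configurations
        # stay cheap: a simple proxy for optimizing the local entropy.
        for group in zip(*(ae.parameters() for ae in replicas)):
            center = torch.stack(group).mean(dim=0).detach()
            loss = loss + gamma * sum(((p - center) ** 2).sum() for p in group)
        loss.backward()
        opt.step()
    return replicas

# Toy usage on random data; real inputs would be composite feature vectors
# or encoded protein sequences.
x = torch.randn(256, 64)
trained = train_replicated(x)

Averaging the trained replicas (or reading out any single one) then gives a representation that sits in a flat region of the loss landscape, which is the property the local-entropy objective is meant to encourage.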


Related research

07/07/2019 - Stacked autoencoders based machine learning for noise reduction and signal reconstruction in geophysical data
Autoencoders are neural network formulations where the input and output ...

06/24/2021 - Symmetric Wasserstein Autoencoders
Leveraging the framework of Optimal Transport, we introduce a new family...

02/06/2019 - Regularizing Generative Models Using Knowledge of Feature Dependence
Generative modeling is a fundamental problem in machine learning with ma...

02/20/2022 - Disentangling Autoencoders (DAE)
Noting the importance of factorizing or disentangling the latent space, ...

03/12/2020 - ARAE: Adversarially Robust Training of Autoencoders Improves Novelty Detection
Autoencoders (AE) have recently been widely employed to approach the nov...

06/02/2023 - Generative Autoencoders as Watermark Attackers: Analyses of Vulnerabilities and Threats
Invisible watermarks safeguard images' copyrights by embedding hidden me...
