Reproducible, incremental representation learning with Rosetta VAE

01/13/2022
by Miles Martinez, et al.

Variational autoencoders are among the most popular methods for distilling low-dimensional structure from high-dimensional data, making them increasingly valuable as tools for data exploration and scientific discovery. However, unlike typical machine learning problems in which a single model is trained once on a single large dataset, scientific workflows privilege learned features that are reproducible, portable across labs, and capable of incrementally adding new data. Ideally, methods used by different research groups should produce comparable results, even without sharing fully trained models or entire data sets. Here, we address this challenge by introducing the Rosetta VAE (R-VAE), a method of distilling previously learned representations and retraining new models to reproduce and build on prior results. The R-VAE uses post hoc clustering over the latent space of a fully-trained model to identify a small number of Rosetta Points (input, latent pairs) to serve as anchors for training future models. An adjustable hyperparameter, ρ, balances fidelity to the previously learned latent space against accommodation of new data. We demonstrate that the R-VAE reconstructs data as well as the VAE and β-VAE, outperforms both methods in recovery of a target latent space in a sequential training setting, and dramatically increases consistency of the learned representation across training runs.
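The abstract describes two mechanical steps: post hoc clustering over a trained model's latent space to pick a small set of Rosetta Points (input, latent pairs), and a ρ-weighted term that pulls a newly trained encoder toward those anchors. The sketch below illustrates both steps in plain NumPy under assumptions not stated in the abstract: k-means is used as the clustering method, each anchor is the real (input, latent) pair nearest its cluster center, and the ρ term is taken to be a squared-distance penalty between the new encoder's codes and the stored latents. The function names and the exact form of the penalty are hypothetical, not the paper's definitions.

```python
import numpy as np

def select_rosetta_points(inputs, latents, k, n_iter=50, seed=0):
    """Pick k (input, latent) anchor pairs by clustering the latent space.

    Sketch only: k-means is an assumed choice of clustering; each Rosetta
    Point is the real latent closest to a cluster center, paired with the
    input that produced it.
    """
    rng = np.random.default_rng(seed)
    centers = latents[rng.choice(len(latents), k, replace=False)]
    for _ in range(n_iter):
        # Assign every latent to its nearest center, then recompute centers.
        dists = np.linalg.norm(latents[:, None] - centers[None], axis=-1)
        assign = dists.argmin(axis=1)
        for j in range(k):
            members = latents[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    # Snap each center to the nearest actual latent so anchors are real pairs.
    idx = np.linalg.norm(latents[:, None] - centers[None], axis=-1).argmin(axis=0)
    return inputs[idx], latents[idx]

def rosetta_penalty(encoder, rosetta_inputs, rosetta_latents, rho):
    """Hypothetical rho-weighted anchor term added to the new model's loss.

    Penalizes squared distance between the new encoder's codes for the
    Rosetta inputs and their previously learned latents; larger rho means
    higher fidelity to the old latent space, smaller rho means more room
    to accommodate new data.
    """
    z = encoder(rosetta_inputs)
    return rho * np.mean(np.sum((z - rosetta_latents) ** 2, axis=-1))
```

With an encoder that exactly reproduces the stored latents the penalty is zero, so ρ only bites when retraining drifts away from the previously learned representation.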


