Unsupervised Learning of Slow Features for Data Efficient Regression

12/11/2020
by Oliver Struckmeier, et al.

Research in computational neuroscience suggests that the human brain's unparalleled data efficiency results from highly efficient mechanisms that extract and organize slowly changing high-level features from continuous sensory inputs. In this paper, we apply this slowness principle to a state-of-the-art representation learning method with the goal of performing data-efficient learning of downstream regression tasks. To this end, we propose the slow variational autoencoder (S-VAE), an extension of the β-VAE which applies a temporal similarity constraint to the latent representations. We empirically compare our method to the β-VAE and the Temporal Difference VAE (TD-VAE), a state-of-the-art method for next-frame prediction in latent space with temporal abstraction. We evaluate the data efficiency of the three methods on downstream tasks using a synthetic 2D ball tracking dataset, a dataset from a reinforcement learning environment, and a dataset generated using the DeepMind Lab environment. In all tasks, the proposed method outperformed the baselines both with dense and especially with sparse labeled data. The S-VAE achieved similar or better performance than the baselines with 20% to 93% less data.
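The abstract does not specify the exact form of the temporal similarity constraint. Below is a minimal sketch of one plausible reading: a standard β-VAE objective over two consecutive frames, plus a slowness penalty on the distance between their latent means. The names `encoder`, `decoder`, `beta`, and `lambda_slow` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def s_vae_loss(x_t, x_t1, encoder, decoder, beta=4.0, lambda_slow=1.0):
    """beta-VAE ELBO over two consecutive frames plus a slowness penalty.

    x_t, x_t1: consecutive frames from the same sequence.
    encoder(x) returns (mu, logvar); decoder(z) reconstructs the input.
    beta weights the KL term as in the beta-VAE; lambda_slow weights
    the temporal similarity constraint (both values are hypothetical).
    """
    def elbo_terms(x):
        mu, logvar = encoder(x)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # reparameterization trick
        recon = F.mse_loss(decoder(z), x, reduction="sum")
        # KL divergence between N(mu, sigma^2) and the standard normal prior
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, kl, mu

    recon_t, kl_t, mu_t = elbo_terms(x_t)
    recon_t1, kl_t1, mu_t1 = elbo_terms(x_t1)

    # Slowness principle: latent codes of consecutive frames should be close.
    slowness = torch.sum((mu_t1 - mu_t).pow(2))

    return recon_t + recon_t1 + beta * (kl_t + kl_t1) + lambda_slow * slowness
```

In this reading, the penalty encourages slowly varying features to dominate the latent space, which is what the abstract credits for the improved data efficiency on downstream regression.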
