
Unsupervised Learning of slow features for Data Efficient Regression

12/11/2020
by Oliver Struckmeier, et al.

Research in computational neuroscience suggests that the human brain's unparalleled data efficiency is a result of highly efficient mechanisms to extract and organize slowly changing high-level features from continuous sensory inputs. In this paper, we apply this slowness principle to a state-of-the-art representation learning method with the goal of performing data-efficient learning of downstream regression tasks. To this end, we propose the slow variational autoencoder (S-VAE), an extension of the β-VAE which applies a temporal similarity constraint to the latent representations. We empirically compare our method to the β-VAE and the Temporal Difference VAE (TD-VAE), a state-of-the-art method for next-frame prediction in latent space with temporal abstraction. We evaluate the data efficiency of the three methods on downstream tasks using a synthetic 2D ball tracking dataset, a dataset from a reinforcement learning environment, and a dataset generated using the DeepMind Lab environment. In all tasks, the proposed method outperformed the baselines with both dense and, especially, sparse labeled data. The S-VAE achieved similar or better performance than the baselines with 20% to 93% less data.
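The abstract does not spell out the exact objective, but the idea of a β-VAE with a temporal similarity constraint can be sketched as follows: train the encoder and decoder with the usual β-weighted ELBO on pairs of consecutive frames, and add a penalty that keeps the latent codes of neighbouring frames close. The sketch below assumes PyTorch, a Gaussian encoder returning (mu, logvar), and a hypothetical weight gamma for the slowness term; these names and the squared-difference form of the penalty are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of an S-VAE-style training loss: a beta-VAE objective on
# two consecutive frames plus a temporal similarity (slowness) penalty that
# pulls their latent codes together. Names such as gamma are illustrative.
import torch
import torch.nn.functional as F

def s_vae_loss(x_t, x_tp1, encoder, decoder, beta=4.0, gamma=1.0):
    """beta-VAE ELBO for two consecutive frames plus a slowness penalty."""
    total, latent_means = 0.0, []
    for x in (x_t, x_tp1):
        mu, logvar = encoder(x)                                  # q(z|x) parameters
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()     # reparameterization trick
        recon = decoder(z)
        recon_loss = F.mse_loss(recon, x, reduction="sum")       # reconstruction term
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q || N(0, I))
        total = total + recon_loss + beta * kl
        latent_means.append(mu)
    # Temporal similarity constraint: consecutive latent codes should barely move.
    slowness = F.mse_loss(latent_means[0], latent_means[1], reduction="sum")
    return total + gamma * slowness
```

In practice such a loss would be averaged over minibatches of adjacent-frame pairs drawn from the input stream, so that the slowness term only ties together frames that are truly consecutive in time.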

Related research

10/24/2021  Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness
As one of the most popular generative models, Variational Autoencoder (V...

12/06/2021  Feature Disentanglement of Robot Trajectories
Modeling trajectories generated by robot joints is complex and required ...

03/04/2020  q-VAE for Disentangled Representation Learning and Latent Dynamical Systems
This paper proposes a novel variational autoencoder (VAE) derived from T...

11/18/2022  Hub-VAE: Unsupervised Hub-based Regularization of Variational Autoencoders
Exemplar-based methods rely on informative data points or prototypes to ...

05/30/2022  Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning
Recurrent neural networks have a strong inductive bias towards learning ...

03/31/2019  Variational Adversarial Active Learning
Active learning aims to develop label-efficient algorithms by sampling t...

09/12/2019  Generating Data using Monte Carlo Dropout
For many analytical problems the challenge is to handle huge amounts of ...