Improving Generalization for Abstract Reasoning Tasks Using Disentangled Feature Representations

11/12/2018
by Xander Steenbrugge, et al.

In this work we explore the generalization characteristics of unsupervised representation learning by leveraging disentangled VAEs to learn a useful latent space on a set of relational reasoning problems derived from Raven Progressive Matrices. We show that the latent representations, learned by unsupervised training using the right objective function, significantly outperform the same architectures trained with purely supervised learning, especially when it comes to generalization.
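The abstract does not include code, and the exact objective used in the paper is not specified here. As a rough, illustrative sketch: disentangled VAEs are commonly trained with a beta-VAE-style objective, which weights the KL term of the standard VAE loss by a factor beta > 1 to encourage factorized latents. The function and variable names below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Per-sample beta-VAE objective (illustrative sketch):
    squared reconstruction error plus a beta-weighted KL divergence
    between the diagonal-Gaussian posterior N(mu, exp(log_var))
    and the standard-normal prior N(0, I)."""
    recon = np.sum((x - x_recon) ** 2, axis=-1)  # reconstruction term
    # Closed-form KL(N(mu, sigma^2) || N(0, 1)), summed over latent dims
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)
    return recon + beta * kl

# With a perfect reconstruction and a posterior equal to the prior,
# both terms vanish and the loss is zero:
x = np.zeros((2, 8))          # batch of 2 "images" with 8 features
mu = np.zeros((2, 3))         # 3 latent dimensions
log_var = np.zeros((2, 3))    # unit variance
loss = beta_vae_loss(x, x, mu, log_var)
```

Raising `beta` trades reconstruction fidelity for a more factorized latent space, which is the property the paper exploits for downstream relational reasoning.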

Related research:

- 05/29/2019 — Are Disentangled Representations Helpful for Abstract Visual Reasoning?
  A disentangled representation encodes information about the salient fact...

- 07/22/2019 — Semi-Supervised Learning by Disentangling and Self-Ensembling Over Stochastic Latent Space
  The success of deep learning in medical imaging is mostly achieved at th...

- 03/04/2020 — q-VAE for Disentangled Representation Learning and Latent Dynamical Systems
  This paper proposes a novel variational autoencoder (VAE) derived from T...

- 02/11/2021 — Disentangled Representations from Non-Disentangled Models
  Constructing disentangled representations is known to be a difficult tas...

- 04/06/2018 — Hierarchical Disentangled Representations
  Deep latent-variable models learn representations of high-dimensional da...

- 12/04/2018 — A Spectral Regularizer for Unsupervised Disentanglement
  Generative models that learn to associate variations in the output along...

- 08/26/2020 — Orientation-Disentangled Unsupervised Representation Learning for Computational Pathology
  Unsupervised learning enables modeling complex images without the need f...
