A Preliminary Study of Disentanglement With Insights on the Inadequacy of Metrics

11/26/2019
by   Amir H. Abdi, et al.

Disentangled encoding is an important step towards better representation learning. However, despite numerous efforts, there is still no clear winner that captures the independent features of the data in an unsupervised fashion. In this work, we empirically evaluate the performance of six unsupervised disentanglement approaches on the mpi3d toy dataset curated and released for the NeurIPS 2019 Disentanglement Challenge. The methods investigated are Beta-VAE, Factor-VAE, DIP-I-VAE, DIP-II-VAE, Info-VAE, and Beta-TCVAE. The capacities of all models were progressively increased throughout training, and the hyper-parameters were kept fixed across experiments. The methods were evaluated based on five disentanglement metrics, namely DCI, Factor-VAE, IRS, MIG, and SAP-Score. Within the limitations of this study, the Beta-TCVAE approach outperformed its alternatives with respect to the normalized sum of metrics. However, a qualitative study of the encoded latent variables reveals no consistent correlation between the reported metrics and the disentanglement potential of the models.
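All six methods evaluated here are variants of the variational autoencoder, differing mainly in how they regularize the latent posterior. As a minimal sketch of the common starting point, the Beta-VAE objective weights the KL term of the standard ELBO by a factor beta; the function below (hypothetical names, NumPy only, squared-error reconstruction assumed) computes that objective from the encoder's diagonal-Gaussian parameters:

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Beta-VAE objective (sketch): reconstruction error plus a
    beta-weighted KL divergence between the diagonal-Gaussian posterior
    N(mu, exp(log_var)) and the standard-normal prior."""
    # Per-sample squared-error reconstruction term
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian
    kl = 0.5 * np.sum(mu ** 2 + np.exp(log_var) - log_var - 1.0, axis=-1)
    return np.mean(recon + beta * kl)

# With perfect reconstruction and mu = 0, log_var = 0, the posterior
# equals the prior, so the KL term (and the whole loss) vanishes.
x = np.zeros((2, 3))
loss = beta_vae_loss(x, x, np.zeros((2, 4)), np.zeros((2, 4)))
```

Setting beta = 1 recovers the plain VAE; larger beta pressures the latent dimensions towards the factorized prior, which is the mechanism Beta-VAE relies on to encourage disentanglement. The other methods in the study replace or augment this KL term (e.g. with a total-correlation penalty in Beta-TCVAE).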


