DAVA: Disentangling Adversarial Variational Autoencoder

03/02/2023
by   Benjamin Estermann, et al.

Well-disentangled representations offer many advantages for downstream tasks, e.g., increased sample efficiency and better interpretability. However, the quality of disentangled representations is often highly dependent on the choice of dataset-specific hyperparameters, in particular the regularization strength. To address this issue, we introduce DAVA, a novel training procedure for variational auto-encoders that completely alleviates the problem of hyperparameter selection. We compare DAVA to models with optimal hyperparameters: without any hyperparameter tuning, DAVA is competitive on a diverse range of commonly used datasets. Underlying DAVA, we discover a necessary condition for unsupervised disentanglement, which we call PIPE. We demonstrate that PIPE positively predicts the performance of downstream models on abstract reasoning tasks. We also thoroughly investigate correlations with existing supervised and unsupervised metrics. The code is available at https://github.com/besterma/dava.
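The "regularization strength" named in the abstract is the weight on the KL term of the VAE objective (the beta of a beta-VAE). As background only, here is a minimal sketch of that standard objective in PyTorch; it is not the DAVA procedure itself, and the function name `beta_vae_loss` is a hypothetical choice for illustration.

```python
# Minimal sketch of the standard beta-VAE objective (PyTorch), shown only to
# illustrate the dataset-specific regularization strength that DAVA removes.
# This is NOT the DAVA training procedure from the paper.
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """ELBO-style loss with a beta-weighted KL term.

    beta > 1 typically encourages disentanglement, but the right value is
    dataset-specific -- the hyperparameter-selection problem the abstract
    says DAVA alleviates.
    """
    # Reconstruction term (here: summed squared error; other likelihoods work too).
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # Analytic KL divergence between the posterior N(mu, exp(logvar)) and N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```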


Related research

05/23/2023 · Disentangled Variational Autoencoder for Emotion Recognition in Conversations
In Emotion Recognition in Conversations (ERC), the emotions of target ut...

03/22/2022 · Task-guided Disentangled Tuning for Pretrained Language Models
Pretrained language models (PLMs) trained on large-scale unlabeled corpu...

02/08/2021 · DEFT: Distilling Entangled Factors
Disentanglement is a highly desirable property of representation due to ...

07/14/2020 · Failure Modes of Variational Autoencoders and Their Effects on Downstream Tasks
Variational Auto-encoders (VAEs) are deep generative latent variable mod...

02/11/2021 · Disentangled Representations from Non-Disentangled Models
Constructing disentangled representations is known to be a difficult tas...

08/03/2023 · Unsupervised Multiplex Graph Learning with Complementary and Consistent Information
Unsupervised multiplex graph learning (UMGL) has been shown to achieve s...

05/17/2022 · How do Variational Autoencoders Learn? Insights from Representational Similarity
The ability of Variational Autoencoders (VAEs) to learn disentangled rep...
