Encouraging Disentangled and Convex Representation with Controllable Interpolation Regularization

12/06/2021
by Yunhao Ge, et al.

We focus on controllable disentangled representation learning (C-Dis-RL), where users control the partition of the disentangled latent space so that dataset attributes (concepts) are factorized for downstream tasks. Two general problems remain under-explored in current methods: (1) they lack comprehensive disentanglement constraints, in particular the minimization of mutual information between different attributes across the latent and observation domains; and (2) they lack convexity constraints on the disentangled latent space, which is important for meaningfully manipulating specific attributes in downstream tasks. To encourage comprehensive C-Dis-RL and convexity simultaneously, we propose a simple yet efficient method, Controllable Interpolation Regularization (CIR), which creates a positive feedback loop in which disentanglement and convexity reinforce each other. Specifically, we perform controlled interpolation in the latent space during training and 'reuse' the encoder to form a 'perfect disentanglement' regularization. As a result, (a) the disentanglement loss implicitly enlarges the 'understandable' region of the latent distribution, encouraging convexity, and (b) convexity in turn makes disentanglement more robust and precise. CIR is a general module: we combine it with three different algorithms, ELEGANT, I2I-Dis, and GZS-Net, to demonstrate its compatibility and effectiveness. Qualitative and quantitative experiments show that CIR improves C-Dis-RL and latent convexity, which in turn improves the downstream tasks of controllable image synthesis, cross-modality image translation, and zero-shot synthesis. Further experiments demonstrate that CIR also benefits other downstream tasks, such as new attribute-value mining, data augmentation, and eliminating bias for fairness.
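To make the mechanism concrete, here is a minimal PyTorch-style sketch of the CIR idea as the abstract describes it: interpolate only the latent partition assigned to one attribute, decode, and 'reuse' the same encoder to check that the re-encoded code matches the interpolated one. All names (`cir_loss`, `attr_slice`) and the choice of an MSE penalty are illustrative assumptions, not the paper's exact formulation, which may use different partitioning and loss terms.

```python
import torch
import torch.nn.functional as F

def cir_loss(encoder, decoder, x1, x2, attr_slice, alpha=None):
    """Illustrative Controllable Interpolation Regularization (CIR) term.

    Assumes flat latent vectors of shape (B, D) partitioned by attribute,
    with `attr_slice` selecting the dims of the attribute being controlled.
    """
    z1, z2 = encoder(x1), encoder(x2)
    if alpha is None:
        # Random interpolation coefficient per sample, broadcast over dims.
        alpha = torch.rand(z1.size(0), 1, device=z1.device)

    # Controlled interpolation: mix only the chosen attribute partition,
    # keeping every other latent dimension from z1.
    z_mix = z1.clone()
    z_mix[:, attr_slice] = (alpha * z1[:, attr_slice]
                            + (1 - alpha) * z2[:, attr_slice])

    # Decode the interpolated code, then reuse the encoder on the result.
    x_mix = decoder(z_mix)
    z_back = encoder(x_mix)

    # 'Perfect disentanglement' style regularization: the re-encoded code
    # should match the code we decoded from, on the interpolated partition
    # and, crucially, on all untouched partitions.
    return F.mse_loss(z_back, z_mix)
```

In training, such a term would typically be added to the host model's existing objective with a weighting coefficient, e.g. `loss = base_loss + lam * cir_loss(enc, dec, x1, x2, slice(0, 16))`; because the interpolated codes are required to decode and re-encode consistently, the penalty pushes points between training codes into the 'understandable' region, which is how the convexity encouragement arises.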
