Towards Composable Distributions of Latent Space Augmentations

03/06/2023
by Omead Pooladzandi, et al.

We propose a composable framework for latent space image augmentation that allows multiple augmentations to be combined easily. Image augmentation is an effective technique for improving performance on a wide variety of image classification and generation tasks. Our framework is based on the Variational Autoencoder (VAE) architecture and performs augmentation via linear transformations within the latent space itself. We explore losses and the geometry of the augmentation latent space that encourage the transformations to be composable and involutory (self-inverse), so that augmentations can be readily combined or inverted. Finally, we show that these properties hold more strongly for certain pairs of augmentations, and that the latent space can be transferred to other sets of augmentations to adjust performance, effectively constraining the VAE's bottleneck to preserve the variance of the specific augmentations and image features we care about. We demonstrate the effectiveness of our approach with initial results on the MNIST dataset, compared against both a standard VAE and a Conditional VAE. This latent augmentation method affords much greater control over, and geometric interpretability of, the latent space, making it a valuable tool for researchers and practitioners in the field.
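To make the mechanism concrete, here is a minimal PyTorch sketch of the idea as the abstract describes it: a VAE whose latent code can be transformed by learned linear maps, one per augmentation, with regularizers that push the maps to be involutory (self-inverse) and to commute (composable). All names, network sizes, and the exact loss forms (`LatentAugmentVAE`, `structure_losses`, the squared-error penalties) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class LatentAugmentVAE(nn.Module):
    """Sketch of a VAE with learned linear latent-space augmentations."""

    def __init__(self, latent_dim: int = 16, n_augs: int = 2):
        super().__init__()
        # Small MLP encoder/decoder sized for 28x28 MNIST images.
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784), nn.Sigmoid()
        )
        # One learnable linear map per augmentation, initialized to identity
        # (a hypothetical parameterization; the paper's exact form may differ).
        self.aug_maps = nn.ParameterList(
            [nn.Parameter(torch.eye(latent_dim)) for _ in range(n_augs)]
        )

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x, aug_idx=None):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        if aug_idx is not None:
            # Augmentation happens in latent space: a single matrix multiply.
            z = z @ self.aug_maps[aug_idx]
        return self.decoder(z), mu, logvar


def structure_losses(aug_maps):
    """Hypothetical penalties for the two geometric properties in the abstract."""
    eye = torch.eye(aug_maps[0].shape[0], device=aug_maps[0].device)
    # Involutory: T @ T should be close to I, so each map is its own inverse.
    involution = sum(((T @ T) - eye).pow(2).sum() for T in aug_maps)
    # Composable: maps should commute, T_i @ T_j close to T_j @ T_i.
    maps = list(aug_maps)
    commutation = sum(
        ((a @ b) - (b @ a)).pow(2).sum()
        for i, a in enumerate(maps)
        for b in maps[i + 1:]
    )
    return involution, commutation
```

In training, these penalties would presumably be added to the usual reconstruction and KL terms, with paired data (an image and its pixel-space augmented version) supervising each latent map to reproduce the corresponding augmentation after decoding.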


