AAVAE: Augmentation-Augmented Variational Autoencoders

07/26/2021
by William Falcon, et al.

Recent methods for self-supervised learning can be grouped into two paradigms: contrastive and non-contrastive approaches. Their success can largely be attributed to data augmentation pipelines which generate multiple views of a single input that preserve the underlying semantics. In this work, we introduce augmentation-augmented variational autoencoders (AAVAE), a third approach to self-supervised learning based on autoencoding. We derive AAVAE starting from the conventional variational autoencoder (VAE), by replacing the KL divergence regularization, which is agnostic to the input domain, with data augmentations that explicitly encourage the internal representations to encode domain-specific invariances and equivariances. We empirically evaluate the proposed AAVAE on image classification, similar to how recent contrastive and non-contrastive learning algorithms have been evaluated. Our experiments confirm the effectiveness of data augmentation as a replacement for KL divergence regularization. The AAVAE outperforms the VAE by 30 to 40 percentage points, and its results are largely comparable to the state-of-the-art for self-supervised learning.
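The core change the abstract describes can be sketched in a few lines: a standard VAE minimizes reconstruction error plus a KL divergence penalty on the latent posterior, while the AAVAE drops the KL term and instead encodes an *augmented* view of the input while still reconstructing the original. The sketch below is illustrative only, not the authors' implementation; the linear encoder/decoder and the Gaussian-noise `augment` function are stand-ins (the paper uses image augmentations such as crops and flips).

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, rng):
    # Hypothetical augmentation: additive Gaussian noise. For images, the
    # paper's pipeline would use crops, flips, color jitter, etc.
    return x + 0.1 * rng.normal(size=x.shape)

# Tiny linear encoder/decoder, just to make the objective concrete.
d_in, d_z = 8, 3
W_enc = rng.normal(size=(d_z, d_in)) * 0.1
W_dec = rng.normal(size=(d_in, d_z)) * 0.1

def aavae_loss(x, rng):
    x_aug = augment(x, rng)           # augmented view goes into the encoder
    z = W_enc @ x_aug                 # encoding; note: no KL regularizer here
    x_hat = W_dec @ z                 # decoder output
    return np.mean((x_hat - x) ** 2)  # reconstruct the ORIGINAL input x

x = rng.normal(size=d_in)
loss = aavae_loss(x, rng)
print(float(loss))
```

Because the encoder only ever sees augmented views but is scored on reconstructing the clean input, the latent code is pushed toward augmentation-invariant features, which is the regularizing role the KL term played in the original VAE.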

