AVT: Unsupervised Learning of Transformation Equivariant Representations by Autoencoding Variational Transformations

03/23/2019
by Guo-Jun Qi, et al.

The learning of Transformation-Equivariant Representations (TERs), introduced by Hinton et al. (hinton2011transforming), has been proposed as a principle for revealing visual structures under various transformations. It includes the celebrated Convolutional Neural Networks (CNNs) as a special case that is equivariant only to translations. In contrast, we seek to train TERs for a generic class of transformations, and to train them in an unsupervised fashion. To this end, we present a novel principled method of Autoencoding Variational Transformations (AVT), in contrast with the conventional approach of autoencoding data. Formally, given transformed images, the AVT trains the networks by maximizing the mutual information between the transformations and the representations. This ensures that the resultant TERs of individual images capture the intrinsic information about their visual structures that equivaries under various transformations. Technically, we show that the resultant optimization problem can be solved efficiently by maximizing a variational lower bound on the mutual information. This variational approach introduces a transformation decoder to approximate the intractable posterior of transformations, resulting in an autoencoding architecture with a pair of networks: a representation encoder and a transformation decoder. Experiments demonstrate that the proposed AVT model sets a new record for performance on unsupervised tasks, greatly closing the gap to supervised models.
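The abstract describes training an encoder and a transformation decoder by maximizing a variational lower bound on the mutual information between transformations and representations. The toy sketch below illustrates that objective under strong simplifying assumptions of my own (a linear encoder, a scalar scaling "transformation", and a Gaussian transformation decoder); it is not the authors' implementation, and all function and variable names are hypothetical.

```python
# Toy illustration of the AVT objective (an assumption-laden sketch, not the paper's code):
# an encoder maps an image x and its transformed version t(x) to representations,
# a transformation decoder predicts the transformation parameter from the pair,
# and the variational lower bound on I(t; z) is the decoder's log-likelihood
# of the true transformation parameter.
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Toy linear encoder producing a representation z = W x."""
    return W @ x

def transform(x, theta):
    """Toy 'transformation': scale the image vector by theta."""
    return theta * x

def avt_lower_bound(x, theta, W, V, log_sigma=0.0):
    """Gaussian log q(theta | z, z_t): the variational lower bound for one sample."""
    z = encoder(x, W)                      # representation of the original image
    z_t = encoder(transform(x, theta), W)  # representation of the transformed image
    mu = V @ np.concatenate([z, z_t])      # decoder's mean estimate of theta
    sigma = np.exp(log_sigma)
    # log-likelihood of the true transformation parameter under the decoder
    return float(-0.5 * ((theta - mu[0]) / sigma) ** 2
                 - np.log(sigma) - 0.5 * np.log(2.0 * np.pi))

x = rng.normal(size=8)        # a toy "image" as a flat vector
theta = 1.5                   # a sampled transformation parameter
W = rng.normal(size=(4, 8))   # encoder weights (would be learned in practice)
V = rng.normal(size=(1, 8))   # decoder weights (would be learned in practice)

print(avt_lower_bound(x, theta, W, V))
```

In the actual model both networks are deep and the bound is maximized over many sampled transformations per image; the point of the sketch is only the shape of the objective, i.e. that the encoder is trained through the decoder's ability to recover the transformation.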


Related research:

- Learning Generalized Transformation Equivariant Representations via Autoencoding Transformations (06/19/2019)
- GraphTER: Unsupervised Learning of Graph Transformation Equivariant Representations via Auto-Encoding Node-wise Transformations (11/19/2019)
- AET vs. AED: Unsupervised Representation Learning by Auto-Encoding Transformations rather than Data (01/14/2019)
- Graph Representation Learning via Graphical Mutual Information Maximization (02/04/2020)
- Feature Lenses: Plug-and-play Neural Modules for Transformation-Invariant Visual Representations (04/12/2020)
- Inverse Learning of Symmetry Transformations (02/07/2020)
- Rigidity Preserving Image Transformations and Equivariance in Perspective (01/31/2022)
