Eccentric Regularization: Minimizing Hyperspherical Energy without explicit projection

04/23/2021
by Xuefeng Li, et al.

Several regularization methods have recently been introduced that force the latent activations of an autoencoder or deep neural network to conform to either a Gaussian or hyperspherical distribution, or to minimize the implicit rank of the distribution in latent space. In the present work, we introduce a novel regularizing loss function that simulates a pairwise repulsive force between items and an attractive force of each item toward the origin. We show that minimizing this loss function in isolation yields a hyperspherical distribution. Moreover, when the loss is used as a regularizing term, its scaling factor can be adjusted to allow greater flexibility and tolerance of eccentricity, so that the latent variables can be stratified according to their relative importance while still promoting diversity. We apply this method of Eccentric Regularization to an autoencoder and demonstrate its effectiveness in image generation, representation learning and downstream classification tasks.
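
To make the abstract concrete, below is a minimal sketch of such an attractive-repulsive latent regularizer in PyTorch. It is not the authors' implementation: the squared-norm attraction toward the origin, the inverse-distance pairwise repulsion, and the `scale` parameter are assumptions standing in for the paper's exact potentials and scaling factor.

```python
import torch

def eccentric_regularization_loss(z: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    """Hypothetical attractive-repulsive regularizer over a batch of latent codes.

    z:     (batch_size, latent_dim) latent activations.
    scale: relative weight of the attraction toward the origin; smaller values
           tolerate more eccentricity in the latent distribution (assumed knob).
    """
    # Attractive term: pull every latent code toward the origin.
    attraction = z.pow(2).sum(dim=1).mean()

    # Repulsive term: inverse-distance energy between distinct codes in the batch
    # (one plausible pairwise potential; the paper may use a different one).
    dists = torch.cdist(z, z, p=2)
    off_diag = ~torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    repulsion = (1.0 / (dists[off_diag] + 1e-8)).mean()

    return scale * attraction + repulsion
```

In an autoencoder, a term of this kind would be added to the reconstruction loss, with `scale` playing the role of the eccentricity-tolerance knob described above.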

Related research

02/12/2019
Density Estimation and Incremental Learning of Latent Vector for Generative Autoencoders
In this paper, we treat the image generation task using the autoencoder,...

09/04/2023
Are We Using Autoencoders in a Wrong Way?
Autoencoders are certainly among the most studied and used Deep Learning...

06/27/2019
Tuning-Free Disentanglement via Projection
In representation learning and non-linear dimension reduction, there is ...

02/13/2022
A Group-Equivariant Autoencoder for Identifying Spontaneously Broken Symmetries in the Ising Model
We introduce the group-equivariant autoencoder (GE-autoencoder) – a nove...

10/19/2021
Momentum Contrastive Autoencoder: Using Contrastive Learning for Latent Space Distribution Matching in WAE
Wasserstein autoencoder (WAE) shows that matching two distributions is e...

12/18/2018
Clustering-Oriented Representation Learning with Attractive-Repulsive Loss
The standard loss function used to train neural network classifiers, cat...

04/16/2021
Autoencoder-Based Unequal Error Protection Codes
We present a novel autoencoder-based approach for designing codes that p...