Reducing Redundancy in the Bottleneck Representation of the Autoencoders

02/09/2022
by Firas Laakom, et al.

Autoencoders (AEs) are a type of unsupervised neural network that can be used to solve various tasks, e.g., dimensionality reduction, image compression, and image denoising. An AE has two goals: (i) compress the original input to a low-dimensional space at the bottleneck of the network topology using an encoder, and (ii) reconstruct the input from the bottleneck representation using a decoder. The encoder and decoder are optimized jointly by minimizing a distortion-based loss, which implicitly forces the model to keep only those variations of the input data that are required for reconstruction and to reduce redundancies. In this paper, we propose a scheme to explicitly penalize feature redundancies in the bottleneck representation. To this end, we introduce an additional loss term, based on the pairwise correlation of the neurons, which complements the standard reconstruction loss and forces the encoder to learn a more diverse and richer representation of the input. We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST. The experimental results show that the proposed loss consistently leads to superior performance compared to the standard AE loss.
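The core idea lends itself to a short sketch. Below is a minimal PyTorch illustration of an autoencoder trained with a reconstruction loss plus a pairwise-correlation penalty on the bottleneck neurons. The network sizes, the exact penalty formulation (mean squared off-diagonal entry of the batch correlation matrix), and the trade-off weight lam are illustrative assumptions, not necessarily the paper's choices.

```python
# Sketch of the approach described above: an AE loss augmented with a
# penalty on pairwise correlations between bottleneck neurons. Details
# (architecture, lam, penalty form) are assumptions for illustration.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, in_dim=784, bottleneck_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, bottleneck_dim))
        self.decoder = nn.Sequential(nn.Linear(bottleneck_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)            # bottleneck representation
        return self.decoder(z), z      # reconstruction and code

def correlation_penalty(z, eps=1e-8):
    """Mean squared off-diagonal entry of the correlation matrix of the
    bottleneck activations z (shape: batch_size x bottleneck_dim)."""
    z = z - z.mean(dim=0, keepdim=True)           # center each neuron over the batch
    z = z / (z.std(dim=0, keepdim=True) + eps)    # scale each neuron to unit variance
    corr = (z.T @ z) / z.shape[0]                 # pairwise correlation matrix
    off_diag = corr - torch.diag(torch.diag(corr))
    d = corr.shape[0]
    return (off_diag ** 2).sum() / (d * (d - 1))  # average over off-diagonal pairs

# One training step: reconstruction loss plus the redundancy penalty.
model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1                     # assumed trade-off weight
x = torch.rand(64, 784)       # stand-in batch, e.g. flattened MNIST images
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x) + lam * correlation_penalty(z)
opt.zero_grad()
loss.backward()
opt.step()
```

With lam set to zero this reduces to the standard AE objective; increasing it trades reconstruction fidelity for lower redundancy among the bottleneck neurons.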


Related research

09/04/2023 · Are We Using Autoencoders in a Wrong Way?
Autoencoders are certainly among the most studied and used Deep Learning...

08/15/2022 · On a Mechanism Framework of Autoencoders
This paper proposes a theoretical framework on the mechanism of autoenco...

03/15/2021 · Data Discovery Using Lossless Compression-Based Sparse Representation
Sparse representation has been widely used in data compression, signal a...

09/11/2023 · Data efficiency, dimensionality reduction, and the generalized symmetric information bottleneck
The Symmetric Information Bottleneck (SIB), an extension of the more fam...

04/28/2022 · Representative period selection for power system planning using autoencoder-based dimensionality reduction
Power sector capacity expansion models (CEMs) that are used for studying...

05/06/2020 · Stochastic Bottleneck: Rateless Auto-Encoder for Flexible Dimensionality Reduction
We propose a new concept of rateless auto-encoders (RL-AEs) that enable ...

02/15/2021 · Scalable Vector Gaussian Information Bottleneck
In the context of statistical learning, the Information Bottleneck metho...
