Walking the Tightrope: An Investigation of the Convolutional Autoencoder Bottleneck

11/18/2019
by Ilja Manakov, et al.

In this paper, we present an in-depth investigation of the convolutional autoencoder (CAE) bottleneck. Autoencoders (AEs), and especially their convolutional variants, play a vital role in the current deep learning toolbox. Researchers and practitioners employ CAEs for a variety of tasks, ranging from outlier detection and compression to transfer and representation learning. Despite their widespread adoption, we have limited insight into how the bottleneck shape impacts the emergent properties of the CAE. We demonstrate that increasing the height and width of the bottleneck drastically improves generalization, which in turn leads to better performance of the latent codes in downstream transfer learning tasks. The number of channels in the bottleneck, by contrast, is of secondary importance. Furthermore, we show empirically that, contrary to popular belief, CAEs do not learn to copy their input, even when the bottleneck has the same number of neurons as there are pixels in the input; copying does not occur even after training the CAE for 1,000 epochs on a tiny (≈ 600 images) dataset. We believe that the findings in this paper are directly applicable and will lead to improvements in models that rely on CAEs.
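To make the two shape parameters concrete, below is a minimal PyTorch sketch (not the authors' implementation; the knob names `n_down` and `bottleneck_ch` are hypothetical) of a CAE in which the bottleneck's height and width are set by the number of stride-2 stages, while its channel count is set independently by a 1x1 convolution:

```python
# Minimal sketch (assumed architecture, not the paper's code): a CAE whose
# bottleneck shape is controlled by two hypothetical knobs, `n_down` (number
# of stride-2 stages, which sets bottleneck height/width) and `bottleneck_ch`
# (number of bottleneck channels). Per the paper's finding, varying `n_down`
# should matter far more for generalization than varying `bottleneck_ch`.
import torch
import torch.nn as nn


class CAE(nn.Module):
    def __init__(self, in_ch=3, base_ch=32, bottleneck_ch=64, n_down=3):
        super().__init__()
        enc, ch = [], in_ch
        for i in range(n_down):
            out = base_ch * 2 ** i
            enc += [nn.Conv2d(ch, out, 4, stride=2, padding=1),
                    nn.ReLU(inplace=True)]
            ch = out
        # A 1x1 conv fixes the bottleneck channel count without touching
        # its spatial extent, so the two knobs stay independent.
        enc.append(nn.Conv2d(ch, bottleneck_ch, 1))
        self.encoder = nn.Sequential(*enc)

        dec = [nn.Conv2d(bottleneck_ch, ch, 1), nn.ReLU(inplace=True)]
        for i in reversed(range(n_down)):
            out = in_ch if i == 0 else base_ch * 2 ** (i - 1)
            dec.append(nn.ConvTranspose2d(ch, out, 4, stride=2, padding=1))
            if i != 0:
                dec.append(nn.ReLU(inplace=True))
            ch = out
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        z = self.encoder(x)  # (B, bottleneck_ch, H / 2**n_down, W / 2**n_down)
        return self.decoder(z), z


# For 128x128 inputs, n_down=3 yields a 16x16 bottleneck; n_down=5 would
# yield a 4x4 bottleneck with the same number of channels.
model = CAE(in_ch=3, bottleneck_ch=64, n_down=3)
x = torch.randn(1, 3, 128, 128)
recon, z = model(x)
assert recon.shape == x.shape and z.shape == (1, 64, 16, 16)
```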

Related research

Training Invertible Neural Networks as Autoencoders (03/20/2023)
Autoencoders are able to learn useful data representations in an unsuper...

ReXNet: Diminishing Representational Bottleneck on Convolutional Neural Network (07/02/2020)
This paper addresses representational bottleneck in a network and propos...

Image Compression: Sparse Coding vs. Bottleneck Autoencoders (10/26/2017)
Bottleneck autoencoders have been actively researched as a solution to i...

Effects of Convolutional Autoencoder Bottleneck Width on StarGAN-based Singing Technique Conversion (08/19/2023)
Singing technique conversion (STC) refers to the task of converting from...

Metric Embedding Autoencoders for Unsupervised Cross-Dataset Transfer Learning (07/18/2018)
Cross-dataset transfer learning is an important problem in person re-ide...

Error mitigation of entangled states using brainbox quantum autoencoders (03/02/2023)
Current quantum hardware is subject to various sources of noise that lim...

Do autoencoders need a bottleneck for anomaly detection? (02/25/2022)
A common belief in designing deep autoencoders (AEs), a type of unsuperv...
