
Walking the Tightrope: An Investigation of the Convolutional Autoencoder Bottleneck

11/18/2019 · by Ilja Manakov, et al. (Siemens AG, Universität München)

In this paper, we present an in-depth investigation of the convolutional autoencoder (CAE) bottleneck. Autoencoders (AE), and especially their convolutional variants, play a vital role in the current deep learning toolbox. Researchers and practitioners employ CAEs for a variety of tasks, ranging from outlier detection and compression to transfer and representation learning. Despite their widespread adoption, we have limited insight into how the bottleneck shape impacts the emergent properties of the CAE. We demonstrate that increased height and width of the bottleneck drastically improve generalization, which in turn leads to better performance of the latent codes in downstream transfer learning tasks. The number of channels in the bottleneck, on the other hand, is of secondary importance. Furthermore, we show empirically that, contrary to popular belief, CAEs do not learn to copy their input, even when the bottleneck has the same number of neurons as there are pixels in the input; copying does not occur even after training the CAE for 1,000 epochs on a tiny (≈ 600 images) dataset. We believe that the findings in this paper are directly applicable and will lead to improvements in models that rely on CAEs.
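To make the notion of bottleneck shape concrete: in a CAE the latent code is a tensor of shape channels × height × width, so the total number of bottleneck neurons is the product of the three. Below is a minimal sketch of such a CAE, assuming PyTorch; the layer counts, widths, and parameter names (`n_down`, `bottleneck_channels`) are illustrative, not the authors' exact architecture. For a fixed input resolution, `n_down` (the number of stride-2 layers) sets the bottleneck height and width, while `bottleneck_channels` sets its depth — the two factors the abstract contrasts.

```python
# Minimal sketch of a convolutional autoencoder with a configurable
# bottleneck. Illustrative only; not the paper's exact architecture.
import torch
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self, in_channels=3, bottleneck_channels=16, n_down=3):
        super().__init__()
        enc, ch = [], in_channels
        for i in range(n_down):
            # Each stride-2 conv halves the spatial size; the last one
            # outputs the bottleneck's channel count.
            out = bottleneck_channels if i == n_down - 1 else 32 * 2**i
            enc += [nn.Conv2d(ch, out, 4, stride=2, padding=1), nn.ReLU()]
            ch = out
        self.encoder = nn.Sequential(*enc[:-1])  # no ReLU on the latent code

        dec = []
        for i in reversed(range(n_down)):
            # Mirror the encoder with stride-2 transposed convs.
            out = in_channels if i == 0 else 32 * 2**(i - 1)
            dec += [nn.ConvTranspose2d(ch, out, 4, stride=2, padding=1), nn.ReLU()]
            ch = out
        self.decoder = nn.Sequential(*dec[:-1])  # linear output layer

    def forward(self, x):
        return self.decoder(self.encoder(x))

# With n_down=3, a 64x64 input yields an 8x8 bottleneck; n_down=4 would
# yield 4x4 at the same channel count.
model = CAE(in_channels=3, bottleneck_channels=16, n_down=3)
x = torch.randn(1, 3, 64, 64)
z = model.encoder(x)   # torch.Size([1, 16, 8, 8]) -> 1024 bottleneck neurons
recon = model(x)       # torch.Size([1, 3, 64, 64])
```

Under this parameterization, decreasing `n_down` enlarges the bottleneck's height and width while leaving its channel count fixed — the manipulation the abstract reports as most consequential for generalization.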

