On the Compressive Power of Boolean Threshold Autoencoders

04/21/2020
by Avraham A. Melkman, et al.

An autoencoder is a layered neural network whose structure can be viewed as consisting of an encoder, which compresses an input vector of dimension D to a vector of low dimension d, and a decoder, which transforms the low-dimensional vector back to the original input vector (or one that is very similar). In this paper we explore the compressive power of autoencoders that are Boolean threshold networks by studying the numbers of nodes and layers required to ensure that each vector in a given set of distinct input binary vectors is transformed back to its original. We show that for any set of n distinct vectors there exists a seven-layer autoencoder with the smallest possible middle layer (i.e., its size is logarithmic in n), but that there is a set of n vectors for which there is no three-layer autoencoder with a middle layer of the same size. In addition we present a kind of trade-off: if a considerably larger middle layer is permissible, then a five-layer autoencoder does exist. We also study encoding by itself. The results we obtain suggest that it is the decoding that constitutes the bottleneck of autoencoding. For example, there always is a three-layer Boolean threshold encoder that compresses n vectors into a dimension that is reduced to twice the logarithm of n.
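To make the setting concrete, here is a minimal Python sketch of a Boolean threshold autoencoder that memorizes a given set of n distinct binary vectors. It is not the paper's construction: this naive version uses a hidden layer of n exact-match threshold gates on each side (five layers total), whereas the paper shows how to achieve a logarithmic middle layer with far fewer hidden nodes. All function names here are illustrative.

```python
from math import ceil, log2

def tlu(x, w, theta):
    """Boolean threshold unit: fires iff the weighted input sum reaches theta."""
    return int(sum(wi * xi for wi, xi in zip(w, x)) >= theta)

def exact_match_gate(v):
    """Weights and threshold for a gate that fires only on the binary vector v."""
    w = [1 if b else -1 for b in v]
    return w, sum(v)  # the sum reaches sum(v) only when x == v

def autoencode(vectors, x):
    """Five-layer threshold autoencoder: input -> n gates -> d-bit code -> n gates -> output."""
    n = len(vectors)
    d = max(1, ceil(log2(n)))  # smallest possible middle-layer size
    # Encoder: h[i] fires iff x == vectors[i]; code bit j is an OR over the h[i]
    # whose index i has bit j set, so the code is the binary index of x.
    h = [tlu(x, *exact_match_gate(v)) for v in vectors]
    code = [tlu(h, [(i >> j) & 1 for i in range(n)], 1) for j in range(d)]
    # Decoder: g[i] fires iff the code equals binary(i); output bit k is an OR
    # over the g[i] whose stored vector has a 1 in position k.
    g = [tlu(code, *exact_match_gate([(i >> j) & 1 for j in range(d)]))
         for i in range(n)]
    out = [tlu(g, [v[k] for v in vectors], 1) for k in range(len(x))]
    return code, out
```

For example, with `vectors = [[1,0,1],[0,1,1],[1,1,0]]` each stored vector is compressed to a 2-bit code and reconstructed exactly. The construction illustrates why decoding is the harder direction: the encoder only needs to distinguish the n stored vectors, while the decoder must reproduce every input bit from the d-bit code.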


