Binary autoencoder with random binary weights

04/30/2020
by Viacheslav Osaulenko, et al.

This paper presents an analysis of an autoencoder with binary activations {0, 1} and random binary weights {0, 1}. This setup places the model at the intersection of several fields: neuroscience, information theory, sparse coding, and machine learning. It is shown that sparse activation of the hidden layer arises naturally in order to preserve information between layers. Furthermore, with a large enough hidden layer, zero reconstruction error can be achieved for any input simply by varying the thresholds of the neurons. The model preserves the similarity of inputs at the hidden layer, and this similarity preservation is maximal for dense hidden-layer activation. An analysis of the mutual information between layers shows that the difference between sparse and dense representations is related to a memory-computation trade-off. The model resembles the olfactory system of the fruit fly, and the theoretical results presented here offer useful insights toward understanding more complex neural networks.
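The model described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the input size, hidden size, connection probability, and threshold values below are arbitrary assumptions chosen for demonstration. It shows the two ingredients the abstract names: a fixed random binary weight matrix, and per-layer thresholds that control how sparse the binary hidden code is.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 50, 400   # hidden layer larger than input (assumed sizes)
p_w = 0.1                  # connection probability (assumed value)

# Fixed random binary weight matrix {0, 1}; it is not learned.
W = (rng.random((n_hidden, n_in)) < p_w).astype(int)

def encode(x, theta):
    """Binary hidden code: a neuron fires if its summed input reaches theta."""
    return (W @ x >= theta).astype(int)

def decode(h, theta_out):
    """Binary reconstruction by thresholding the back-projected hidden code."""
    return (W.T @ h >= theta_out).astype(int)

# A sparse binary input vector.
x = (rng.random(n_in) < 0.2).astype(int)

# Raising the hidden threshold makes the hidden code sparser.
for theta in (1, 2, 4):
    h = encode(x, theta)
    print(f"theta={theta}: hidden sparsity={h.mean():.3f}")
```

Sweeping `theta` makes the trade-off discussed in the abstract concrete: low thresholds give a dense code (better similarity preservation), high thresholds give a sparse one (information-preserving and cheaper to store).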


