Neural Random Projection: From the Initial Task To the Input Similarity Problem

10/09/2020
by Alan Savushkin, et al.

In this paper, we propose a novel approach to implicit data representation for evaluating the similarity of input data using a trained neural network. In contrast to the previous approach, which uses gradients for the representation, we utilize only the outputs of the last hidden layer and do not require a backward pass. The proposed technique explicitly takes the initial task into account and significantly reduces both the size of the vector representation and the computation time. The key point is minimizing information loss between layers. In general, a neural network discards information that is not related to the problem, which makes the last hidden layer representation useless for the input similarity task. In this work, we consider two main causes of information loss: correlation between neurons and an insufficient size of the last hidden layer. To reduce the correlation between neurons, we use orthogonal weight initialization for each layer and modify the loss function to ensure the orthogonality of the weights during training. Moreover, we show that activation functions can potentially increase correlation; to address this, we apply a modified Batch-Normalization with Dropout. Using orthogonal weight matrices allows us to treat such neural networks as an application of the Random Projection method and to obtain a lower bound estimate for the size of the last hidden layer. We perform experiments on MNIST and a physical examination dataset. In both experiments, we first split the set of labels into two disjoint subsets, train a neural network on the resulting binary classification problem, and then use this model to measure similarity between inputs and to define hidden classes. Our experimental results show that the proposed approach achieves competitive results on the input similarity task while reducing both the computation time and the size of the input representation.
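As a rough illustration of two of the ingredients mentioned in the abstract (orthogonal weight initialization and an extra loss term that keeps the weights near-orthogonal during training), the sketch below assumes PyTorch. The names `OrthoMLP` and `orthogonality_penalty` and the regularization weight `lam` are illustrative, not taken from the paper, and the authors' exact loss modification and modified Batch-Normalization may differ.

```python
import torch
import torch.nn as nn

class OrthoMLP(nn.Module):
    """Toy two-layer network whose weight matrices are orthogonally initialized."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, out_dim)
        # Orthogonal weight initialization for each layer
        nn.init.orthogonal_(self.hidden.weight)
        nn.init.orthogonal_(self.out.weight)

    def forward(self, x):
        h = torch.relu(self.hidden(x))  # last hidden layer representation
        return self.out(h), h


def orthogonality_penalty(weight):
    """Soft constraint ||W W^T - I||_F^2 pushing the rows of W towards orthogonality."""
    gram = weight @ weight.t()
    eye = torch.eye(gram.size(0), device=weight.device)
    return ((gram - eye) ** 2).sum()


# Training-step sketch: task loss plus the orthogonality regularizer.
model = OrthoMLP(in_dim=784, hidden_dim=128, out_dim=2)
criterion = nn.CrossEntropyLoss()
lam = 1e-3  # illustrative regularization weight
x, y = torch.randn(32, 784), torch.randint(0, 2, (32,))
logits, h = model(x)
loss = criterion(logits, y) + lam * (
    orthogonality_penalty(model.hidden.weight)
    + orthogonality_penalty(model.out.weight)
)

# After training, input similarity can be read directly off the hidden
# representations, e.g. as cosine similarity between rows of h
# (no backward pass is needed).
sim = torch.nn.functional.cosine_similarity(h[0:1], h[1:2])
```

The penalty term reflects the idea stated in the abstract: keeping the weight matrices close to orthogonal lets the trained layers behave like a Random Projection, which is what makes the last hidden layer usable as a distance-preserving representation of the input.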

