Dimensionality Reduction and Reconstruction using Mirroring Neural Networks and Object Recognition based on Reduced Dimension Characteristic Vector

12/06/2007
by Dasika Ratna Deepthi, et al.

In this paper, we present a Mirroring Neural Network architecture that performs non-linear dimensionality reduction and object recognition using a reduced low-dimensional characteristic vector. In addition to dimensionality reduction, the network also reconstructs (mirrors) the original high-dimensional input vector from the reduced low-dimensional data. The Mirroring Neural Network architecture has more processing elements (adalines) in its outer layers and the fewest in its central layer, giving the configuration a converging-diverging shape. Since the network can reconstruct the original image from the output of the innermost layer (which therefore contains all the information about the input pattern), these outputs can be used as object signatures for classifying patterns. The network is trained to minimize the discrepancy between the actual output and the input by back-propagating the mean squared error from the output layer to the input layer. After successful training, the network can reduce the dimension of input vectors and mirror the patterns fed to it. The Mirroring Neural Network architecture gave very good results on various test patterns.
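The converging-diverging architecture trained to reproduce its own input is, in modern terms, an autoencoder. The sketch below illustrates the idea in PyTorch: a reducing half maps the input to a low-dimensional characteristic vector, a mirroring half reconstructs the input from it, and the mean squared error between output and input is back-propagated. The layer sizes (64 → 32 → 8 → 32 → 64), the tanh activation, and the Adam optimiser are illustrative assumptions, not the settings used in the paper.

```python
# Minimal sketch of a converging-diverging ("mirroring") network trained to
# reproduce its input. All hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn


class MirroringNetwork(nn.Module):
    def __init__(self, input_dim=64, reduced_dim=8):
        super().__init__()
        # Converging half: input -> central (reduced-dimension) layer.
        self.reduce = nn.Sequential(
            nn.Linear(input_dim, 32), nn.Tanh(),
            nn.Linear(32, reduced_dim), nn.Tanh(),
        )
        # Diverging half: central layer -> reconstructed input (the "mirror").
        self.mirror = nn.Sequential(
            nn.Linear(reduced_dim, 32), nn.Tanh(),
            nn.Linear(32, input_dim),
        )

    def forward(self, x):
        code = self.reduce(x)           # low-dimensional characteristic vector
        return self.mirror(code), code  # reconstruction and signature


def train(model, data, epochs=100, lr=1e-3):
    """Back-propagate the mean squared error between output and input."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        recon, _ = model(data)
        loss = loss_fn(recon, data)     # discrepancy between mirror and input
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


# Example: reduce 64-dimensional patterns to 8-dimensional signatures.
x = torch.rand(256, 64)
net = train(MirroringNetwork(), x)
_, signatures = net(x)                  # signatures can feed a classifier
```

Once trained, the outputs of the central layer serve as the object signatures mentioned in the abstract, which a separate classifier can then operate on.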

