Stacking-Based Deep Neural Network: Deep Analytic Network for Pattern Classification

11/17/2018
by Cheng Yaw Low, et al.

A stacking-based deep neural network (S-DNN) aggregates a plurality of basic learning modules, one after another, to synthesize an alternative to the deep neural network (DNN) for pattern classification. In contrast to DNNs trained end to end by backpropagation (BP), each S-DNN layer, i.e., a self-learnable module, is trained deterministically and independently, without BP intervention. In this paper, a ridge-regression-based S-DNN, dubbed the deep analytic network (DAN), along with its kernelized variant (K-DAN), is devised for multilayer feature re-learning from pre-extracted baseline features and structured features. Our theoretical formulation demonstrates that DAN/K-DAN re-learn by perturbing the intra- and inter-class variations, in addition to diminishing the prediction errors. We scrutinize DAN/K-DAN performance for pattern classification on datasets from varying domains: faces, handwritten digits, and generic objects, among others. Unlike typical BP-optimized DNNs, which are trained on gigantic datasets with GPUs, DAN/K-DAN are trainable using only a CPU, even on small-scale training sets. Our experimental results show that DAN/K-DAN outperform existing S-DNNs as well as BP-trained DNNs, including the multilayer perceptron and the deep belief network, without data augmentation.

Related research

03/04/2017
Stacking-based Deep Neural Network: Deep Analytic Network on Convolutional Spectral Histogram Features
Stacking-based deep neural network (S-DNN), in general, denotes a deep n...

04/02/2018
Improving Massive MIMO Belief Propagation Detector with Deep Neural Network
In this paper, deep neural network (DNN) is utilized to improve the beli...

02/14/2022
Analytic Learning of Convolutional Neural Network For Pattern Recognition
Training convolutional neural networks (CNNs) with back-propagation (BP)...

01/08/2022
PocketNN: Integer-only Training and Inference of Neural Networks via Direct Feedback Alignment and Pocket Activations in Pure C++
Standard deep learning algorithms are implemented using floating-point r...

01/30/2019
On Correlation of Features Extracted by Deep Neural Networks
Redundancy in deep neural network (DNN) models has always been one of th...

11/30/2021
Leveraging The Topological Consistencies of Learning in Deep Neural Networks
Recently, methods have been developed to accurately predict the testing ...

01/05/2021
Understanding the Ability of Deep Neural Networks to Count Connected Components in Images
Humans can count very fast by subitizing, but slow substantially as the ...
