Learning Convolutional Neural Networks using Hybrid Orthogonal Projection and Estimation

06/20/2016
by Hengyue Pan, et al.

Convolutional neural networks (CNNs) have yielded excellent performance in a variety of computer vision tasks, typically adopting a similar structure consisting of convolutional layers, pooling layers, and fully connected layers. In this paper, we propose to apply a novel method, namely Hybrid Orthogonal Projection and Estimation (HOPE), to CNNs in order to introduce orthogonality into the CNN structure. The HOPE model can be viewed as a hybrid model that combines feature extraction by orthogonal linear projection with mixture models. It is an effective model for extracting useful information from the original high-dimensional feature vectors while filtering out irrelevant noise. In this work, we present three different ways to apply HOPE models to CNNs: HOPE-Input, single-HOPE-Block, and multi-HOPE-Blocks. In HOPE-Input CNNs, a HOPE layer is placed directly after the input to de-correlate the high-dimensional input feature vectors. Alternatively, in single-HOPE-Block and multi-HOPE-Blocks CNNs, HOPE layers replace one or more blocks of the CNN, where each block may include several convolutional layers and one pooling layer. Experimental results on both the CIFAR-10 and CIFAR-100 data sets show that the orthogonality constraints imposed by the HOPE layers can significantly improve the performance of CNNs on these image classification tasks: we achieve one of the best results reported without image augmentation, and a top-5 result with image augmentation.
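To make the orthogonality constraint concrete, here is a minimal NumPy sketch of the kind of penalty a HOPE-style layer adds to the training loss: the sum of absolute cosine similarities between all pairs of distinct rows of the projection matrix, which is zero exactly when the projection vectors are mutually orthogonal. The function name and the exact normalization are illustrative assumptions; the paper's precise formulation and weighting may differ.

```python
import numpy as np

def hope_orthogonal_penalty(W):
    """Sum of absolute cosine similarities between distinct rows of W.
    Zero iff the rows (projection vectors) are mutually orthogonal.
    Illustrative sketch, not the paper's exact loss term."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)  # unit-normalize each row
    G = Wn @ Wn.T                                      # cosine-similarity Gram matrix
    return np.abs(G).sum() - G.shape[0]                # subtract the all-ones diagonal

# A random projection matrix is far from orthogonal...
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 32))
print(hope_orthogonal_penalty(W))   # noticeably greater than zero

# ...while an orthonormal basis (e.g. from a QR decomposition)
# drives the penalty to (numerically) zero.
Q, _ = np.linalg.qr(rng.standard_normal((32, 8)))
print(hope_orthogonal_penalty(Q.T))
```

In training, a term like this would be added to the classification loss with a tunable weight, pushing the learned projection toward an orthogonal (de-correlating) transform without hard-constraining it.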

Related research:

- Hybrid Orthogonal Projection and Estimation (HOPE): A New Framework to Probe and Learn Neural Networks (02/03/2015)
- Kernelized dense layers for facial expression recognition (09/22/2020)
- Factorized Bilinear Models for Image Recognition (11/17/2016)
- Fully-Convolutional Intensive Feature Flow Neural Network for Text Recognition (12/13/2019)
- Harmonic Networks: Integrating Spectral Information into CNNs (12/07/2018)
- Second-order Convolutional Neural Networks (03/20/2017)
- N-fold Superposition: Improving Neural Networks by Reducing the Noise in Feature Maps (04/23/2018)
