A HMAX with LLC for visual recognition

02/10/2015
by   Kean Hong Lau, et al.

Today's high-performance deep artificial neural networks (ANNs) rely heavily on parameter optimization, which is sequential in nature; even with a powerful GPU, training them for challenging tasks can take weeks [22]. HMAX [17] demonstrated that a simple, high-performing network can be obtained without heavy optimization. In this paper, we improve on the best existing HMAX neural network [12] in terms of structural simplicity and performance. Our design replaces L1-minimization sparse coding (SC) with locality-constrained linear coding (LLC) [20], which has lower computational demands. We also restore the simple orientation filter bank to the front layer of the network, replacing PCA. Our system outperforms the existing architecture, reaching 79.0% accuracy on the Caltech-101 [7] dataset, which is state-of-the-art for ANNs (without transfer learning). From our empirical data, the main contributors to the system's performance are the introduction of partial signal whitening, a spot detector, and a spatial pyramid matching (SPM) [14] layer.
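For context, the approximated LLC encoder of Wang et al. [20] admits a very compact implementation: each descriptor is reconstructed from its k nearest codewords under a sum-to-one constraint, which has a closed-form least-squares solution. The following NumPy sketch illustrates that step; the function name and the parameter defaults (llc_code, k, beta) are ours, and it is a minimal illustration under those assumptions, not the authors' implementation.

import numpy as np

def llc_code(x, B, k=5, beta=1e-4):
    """Approximated locality-constrained linear coding (after Wang et al. [20]).

    x : (d,) local descriptor.
    B : (M, d) codebook of M codewords.
    Returns an (M,) code with at most k non-zero entries.
    """
    # 1. Locality: keep only the k codewords nearest to x.
    idx = np.argsort(np.sum((B - x) ** 2, axis=1))[:k]

    # 2. Least squares over the local basis with a sum-to-one constraint:
    #    min ||x - c^T B[idx]||^2  s.t.  sum(c) = 1
    z = B[idx] - x                       # shift local codewords to the origin
    C = z @ z.T                          # (k, k) local covariance
    C += beta * np.trace(C) * np.eye(k)  # regularize for numerical stability
    c = np.linalg.solve(C, np.ones(k))
    c /= c.sum()                         # enforce the sum-to-one constraint

    code = np.zeros(B.shape[0])
    code[idx] = c
    return code

Because each descriptor only requires solving a small k-by-k linear system, this coding step is far cheaper than iterative L1 minimization, which is the computational advantage the abstract refers to.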


