Let's keep it simple: Using simple architectures to outperform deeper and more complex architectures

08/22/2016
by Seyyed Hossein Hasanpour et al.

Major winning Convolutional Neural Networks (CNNs), such as AlexNet, VGGNet, ResNet, and GoogLeNet, contain tens to hundreds of millions of parameters, which impose considerable computation and memory overhead and limit their practicality for training and deployment. Lightweight architectures proposed to address this issue, on the other hand, mainly suffer from low accuracy. These inefficiencies mostly stem from ad hoc design procedures. We propose a simple architecture, called SimpleNet, based on a set of design principles, and we empirically show that SimpleNet provides a good trade-off between computation/memory efficiency and accuracy. Our simple 13-layer architecture outperforms most deeper and more complex architectures to date, such as VGGNet, ResNet, and GoogLeNet, on several well-known benchmarks while having 2 to 25 times fewer parameters and operations. This makes it well suited for embedded systems or systems with computational and memory limitations. We achieved state-of-the-art results on standard datasets such as CIFAR-10, outperforming several heavier architectures (including AlexNet on ImageNet), and very good results on datasets such as CIFAR-100, MNIST, and SVHN. Our experiments show that SimpleNet is more efficient in terms of computation and memory overhead than the state of the art. Models are made available at: https://github.com/Coderx7/SimpleNet
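The abstract describes SimpleNet only at a high level: a plain 13-layer convolutional network with no residual or inception-style modules. The PyTorch sketch below illustrates what such a plain, homogeneous architecture looks like. It is an assumption-laden approximation for illustration only; the channel widths, pooling positions, and classifier head are hypothetical and do not reproduce the exact SimpleNet configuration released at https://github.com/Coderx7/SimpleNet.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # 3x3 convolution -> BatchNorm -> ReLU: the single repeated building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class PlainNet13(nn.Module):
    # Hypothetical channel schedule with 13 convolutional layers;
    # "M" marks a 2x2 max-pooling step. Not the published SimpleNet widths.
    CFG = [64, 128, 128, 128, "M", 128, 128, 256, "M", 256, 256, "M", 512, 256, 256, 256]

    def __init__(self, num_classes=10):
        super().__init__()
        layers, in_ch = [], 3
        for v in self.CFG:
            if v == "M":
                layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
            else:
                layers.append(conv_block(in_ch, v))
                in_ch = v
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(in_ch, num_classes)

    def forward(self, x):
        x = self.features(x)
        # Global average pooling keeps the classifier small compared to large FC layers.
        x = torch.flatten(nn.functional.adaptive_avg_pool2d(x, 1), 1)
        return self.head(x)


if __name__ == "__main__":
    # Quick sanity check on CIFAR-sized inputs, and a rough parameter count.
    model = PlainNet13(num_classes=10)
    out = model(torch.randn(2, 3, 32, 32))
    n_params = sum(p.numel() for p in model.parameters())
    print(out.shape, f"{n_params / 1e6:.1f}M parameters")
```

The point of the sketch is the design style the paper advocates: a uniform stack of small 3x3 convolutions with occasional pooling, rather than residual branches or inception modules, keeps the parameter and operation counts modest while remaining easy to train.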
