Let's Keep It Simple: Using Simple Architectures to Outperform Deeper and More Complex Architectures

08/22/2016
by Seyyed Hossein Hasanpour, et al.

Major winning Convolutional Neural Networks (CNNs), such as AlexNet, VGGNet, ResNet, and GoogleNet, contain tens to hundreds of millions of parameters, which impose considerable computation and memory overhead. This limits their practical use in settings where training time, optimization cost, or memory is constrained. In contrast, the lightweight architectures proposed to address this issue mainly suffer from low accuracy. These inefficiencies mostly stem from ad hoc design procedures. We propose a simple architecture, called SimpleNet, based on a set of design principles, and we empirically show that SimpleNet provides a good trade-off between computation/memory efficiency and accuracy. Our simple 13-layer architecture outperforms most of the deeper and more complex architectures to date, such as VGGNet, ResNet, and GoogleNet, on several well-known benchmarks, while having 2 to 25 times fewer parameters and operations. This makes it very handy for embedded systems or systems with computational and memory limitations. We achieved state-of-the-art results on standard datasets such as CIFAR10, outperforming several heavier architectures (including, but not limited to, AlexNet on ImageNet), and very good results on datasets such as CIFAR100, MNIST, and SVHN. In our experiments, we show that SimpleNet is more efficient in terms of computation and memory overhead than the state of the art. Models are made available at: https://github.com/Coderx7/SimpleNet
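To make the described design concrete, below is a minimal sketch of a SimpleNet-style network in PyTorch. The 13-convolutional-layer count and the homogeneous conv/BatchNorm/ReLU building block follow the abstract's description; the specific channel widths, pooling positions, 1x1-convolution placement, and classifier head are illustrative assumptions, not the exact configuration from the paper or the linked repository.

```python
# Hypothetical sketch of a SimpleNet-style 13-layer CNN.
# Layer widths, pooling positions, and the 1x1-conv placement are
# illustrative assumptions; see https://github.com/Coderx7/SimpleNet
# for the authors' actual models.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, kernel_size=3):
    # Each "layer" is conv -> batch norm -> ReLU: a single homogeneous
    # building block, in line with keeping the architecture simple.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size,
                  padding=kernel_size // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class SimpleNetSketch(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 64),                     # conv 1
            conv_block(64, 128),                   # conv 2
            conv_block(128, 128),                  # conv 3
            conv_block(128, 128),                  # conv 4
            nn.MaxPool2d(2),
            conv_block(128, 128),                  # conv 5
            conv_block(128, 128),                  # conv 6
            conv_block(128, 256),                  # conv 7
            nn.MaxPool2d(2),
            conv_block(256, 256),                  # conv 8
            conv_block(256, 256),                  # conv 9
            nn.MaxPool2d(2),
            conv_block(256, 512),                  # conv 10
            conv_block(512, 2048, kernel_size=1),  # conv 11 (1x1)
            conv_block(2048, 256, kernel_size=1),  # conv 12 (1x1)
            nn.MaxPool2d(2),
            conv_block(256, 256),                  # conv 13
        )
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = x.mean(dim=(2, 3))  # global average pooling
        return self.classifier(x)


# Usage with a CIFAR10-sized input:
model = SimpleNetSketch(num_classes=10)
out = model(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 10])
```

The design choice illustrated here is the one the abstract argues for: a plain stack of small convolutions with no branches, residual connections, or inception modules, keeping the parameter count far below that of VGGNet- or GoogleNet-class models.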
