QuickNet: Maximizing Efficiency and Efficacy in Deep Architectures

01/09/2017
by Tapabrata Ghosh, et al.

We present QuickNet, a network architecture that is both faster and significantly more accurate than other fast deep architectures such as SqueezeNet. Furthermore, it uses fewer parameters than previous networks, making it more memory efficient. We achieve this by making two major modifications to the reference Darknet model (Redmon et al., 2015): 1) the use of depthwise separable convolutions and 2) the use of parametric rectified linear units. We observe that parametric rectified linear units are computationally equivalent to leaky rectified linear units at test time, and that separable convolutions can be interpreted as a compressed Inception network (Chollet, 2016). Using these observations, we derive a network architecture, which we call QuickNet, that is both faster and more accurate than previous models. Our architecture provides at least four major advantages: (1) a smaller model size, which is more practical on memory-constrained systems; (2) a significantly faster network, which is more practical on computationally constrained systems; (3) an accuracy of 95.7 percent on the CIFAR-10 dataset, which outperforms all but one published result so far (we note that the two approaches are orthogonal and can be combined); and (4) orthogonality to previous model compression approaches, allowing further speed gains to be realized.
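To make the two modifications concrete, the sketch below pairs a depthwise separable convolution with a PReLU activation. It is written in PyTorch as an assumption (the abstract does not name a framework), and the class name SeparableConvBlock, the channel counts, and the layer ordering are illustrative choices rather than QuickNet's exact configuration.

```python
import torch
import torch.nn as nn

class SeparableConvBlock(nn.Module):
    """Depthwise separable convolution followed by PReLU.

    A minimal sketch of the kind of block described in the abstract;
    hyperparameters and layer ordering are assumptions, not taken
    from the QuickNet paper.
    """
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # Depthwise 3x3 convolution: one filter per input channel.
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        # Pointwise 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        # PReLU: learns its negative slope during training, but at test
        # time costs the same as a leaky ReLU with a fixed slope.
        self.act = nn.PReLU(num_parameters=out_channels)

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

# Example usage: downsample a 64-channel feature map to 128 channels.
block = SeparableConvBlock(64, 128, stride=2)
y = block(torch.randn(1, 64, 32, 32))  # -> shape (1, 128, 16, 16)
```

The parameter savings follow from a quick count: a standard 3x3 convolution mapping 256 channels to 256 channels needs 3 x 3 x 256 x 256 (about 590K) weights, while the separable version needs 3 x 3 x 256 + 256 x 256 (about 68K), roughly a 9x reduction, which is where the smaller model size and much of the speedup come from.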

Related research

- Xception: Deep Learning with Depthwise Separable Convolutions (10/07/2016): We present an interpretation of Inception modules in convolutional neura...
- Towards a New Interpretation of Separable Convolutions (01/16/2017): In recent times, the use of separable convolutions in deep convolutional...
- Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) (11/23/2015): We introduce the "exponential linear unit" (ELU) which speeds up learnin...
- Depthwise Separable Convolutions Allow for Fast and Memory-Efficient Spectral Normalization (02/12/2021): An increasing number of models require the control of the spectral norm...
- CondenseNet: An Efficient DenseNet using Learned Group Convolutions (11/25/2017): Deep neural networks are increasingly used on mobile devices, where comp...
- Separable Convolutions for Optimizing 3D Stereo Networks (08/23/2021): Deep learning based 3D stereo networks give superior performance compare...
