-
clcNet: Improving the Efficiency of Convolutional Neural Network using Channel Local Convolutions
Depthwise convolution and grouped convolution have been successfully appl...
-
Dynamic Convolution: Attention over Convolution Kernels
Light-weight convolutional neural networks (CNNs) suffer performance deg...
-
LPRNet: Lightweight Deep Network by Low-rank Pointwise Residual Convolution
Deep learning has become popular in recent years primarily due to the po...
-
The Enhanced Hybrid MobileNet
Although complicated and deep neural network models can achieve high acc...
-
Learning Robust Deep Face Representation
With the development of convolution neural network, more and more resear...
-
WaveletNet: Logarithmic Scale Efficient Convolutional Neural Networks for Edge Devices
We present a logarithmic-scale efficient convolutional neural network ar...
-
Backward Reduction of CNN Models with Information Flow Analysis
This paper proposes backward reduction, an algorithm that explores the c...
MicroNet: Towards Image Recognition with Extremely Low FLOPs
In this paper, we present MicroNet, an efficient convolutional neural network with extremely low computational cost (e.g., 6 MFLOPs on ImageNet classification). Such a low-cost network is highly desirable on edge devices, yet usually suffers a significant performance degradation. We handle the extremely low FLOPs with two design principles: (a) avoiding the reduction of network width by lowering node connectivity, and (b) compensating for the reduction of network depth by introducing more complex non-linearity per layer. First, we propose Micro-Factorized convolution, which factorizes both pointwise and depthwise convolutions into low-rank matrices for a good tradeoff between the number of channels and input/output connectivity. Second, we propose a new activation function, named Dynamic Shift-Max, which improves non-linearity by maxing out multiple dynamic fusions between an input feature map and its circular channel shift. The fusions are dynamic, as their parameters adapt to the input. Building upon Micro-Factorized convolution and Dynamic Shift-Max, a family of MicroNets achieves a significant performance gain over the state of the art in the low-FLOP regime. For instance, MicroNet-M1 achieves 61.1% top-1 accuracy with 12 MFLOPs, outperforming MobileNetV3 by 11.3%.
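The two ideas in the abstract can be illustrated with a minimal NumPy sketch. Both functions below are simplified assumptions for illustration, not the paper's exact formulation: `low_rank_pointwise` shows only the low-rank view of Micro-Factorized convolution (the group structure and channel permutation are omitted), and `dynamic_shift_max` takes its fusion coefficients as an argument, whereas the paper predicts them dynamically from the input with a small attention branch.

```python
import numpy as np

def low_rank_pointwise(x, P, Q):
    """Low-rank view of Micro-Factorized pointwise convolution (sketch).

    A C_out x C_in pointwise kernel W is replaced by P (C_out x R) and
    Q (R x C_in) with rank R << min(C_in, C_out), cutting multiplies per
    pixel from C_out * C_in down to R * (C_in + C_out).
    x: (C_in, N) array of channels x flattened spatial positions.
    """
    return P @ (Q @ x)

def dynamic_shift_max(x, a):
    """Simplified Dynamic Shift-Max (illustrative sketch).

    x: (C, H, W) feature map.
    a: (K, J) fusion coefficients (assumed given; the paper computes
       them from x, which is what makes the fusions "dynamic").

    Each of the K candidate fusions is a weighted sum of J circular
    channel shifts of x; the output takes the element-wise max over
    the K fusions.
    """
    C = x.shape[0]
    K, J = a.shape
    group = C // J  # shift the channel axis one group at a time
    fusions = []
    for k in range(K):
        fused = np.zeros_like(x, dtype=float)
        for j in range(J):
            # circular shift of the channel axis by j groups
            fused += a[k, j] * np.roll(x, shift=j * group, axis=0)
        fusions.append(fused)
    return np.maximum.reduce(fusions)  # element-wise max over fusions
```

Note that with K=2, J=1 and coefficients `[[1.0], [0.0]]`, `dynamic_shift_max` reduces to max(x, 0), i.e. ReLU, which shows how the activation strictly generalizes a static non-linearity.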