AsymmNet: Towards ultralight convolution neural networks using asymmetrical bottlenecks

04/15/2021
by   Haojin Yang, et al.

Deep convolutional neural networks (CNNs) have achieved astonishing results in a wide variety of applications. However, deploying these models on mobile or embedded devices is difficult due to limited memory and computation resources. Recently, the inverted residual block has become the dominant solution for the architecture design of compact CNNs. In this work, we comprehensively investigate existing design concepts and rethink the functional characteristics of the two pointwise convolutions in the inverted residual block. We propose a novel design called the asymmetrical bottleneck. Precisely, we adjust the dimension of the first pointwise convolution, enrich the information flow by feature reuse, and migrate the saved computation to the second pointwise convolution. By doing so, we can further improve accuracy without increasing the computational overhead. Asymmetrical bottlenecks can be adopted as a drop-in replacement for existing CNN blocks; we can thus create AsymmNet by simply stacking these blocks with appropriate depth and width settings. Extensive experiments demonstrate that our proposed block design is more beneficial than the original inverted residual bottleneck for mobile networks, and especially useful for ultralight CNNs within the regime of <220M MAdds. Code is available at https://github.com/Spark001/AsymmNet
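The computation-migration idea in the abstract (shrink the first pointwise convolution, reuse the block input via concatenation, and spend the savings on the second pointwise convolution) can be sketched as a back-of-the-envelope MAdds comparison. This is a minimal illustrative sketch, not the paper's exact block: the reduction ratio `r`, the 3x3 depthwise kernel, and the dimension splits are assumptions chosen for clarity.

```python
def madds_pointwise(h, w, c_in, c_out):
    # 1x1 convolution over an h x w feature map: one multiply-add
    # per (spatial position, input channel, output channel) triple.
    return h * w * c_in * c_out

def madds_depthwise(h, w, c, k=3):
    # k x k depthwise convolution: each channel is filtered independently.
    return h * w * c * k * k

def inverted_residual_madds(h, w, c, t):
    """MAdds of a standard inverted residual (expansion factor t)."""
    expand = madds_pointwise(h, w, c, t * c)      # 1x1 expand: c -> t*c
    dw = madds_depthwise(h, w, t * c)             # 3x3 depthwise at t*c channels
    project = madds_pointwise(h, w, t * c, c)     # 1x1 project: t*c -> c
    return expand + dw + project

def asymmetric_bottleneck_madds(h, w, c, t, r=2):
    """Illustrative asymmetric variant; r is an assumed reduction ratio."""
    narrow = t * c // r
    first = madds_pointwise(h, w, c, narrow)      # shrunken first 1x1: c -> t*c/r
    mid = c + narrow                              # feature reuse: concat input with
    dw = madds_depthwise(h, w, mid)               # first-conv output before depthwise
    second = madds_pointwise(h, w, mid, c)        # second 1x1 sees the wider concat
    return first + dw + second

if __name__ == "__main__":
    # Example: 14x14 feature map, 64 channels, expansion factor 6.
    print(inverted_residual_madds(14, 14, 64, 6))
    print(asymmetric_bottleneck_madds(14, 14, 64, 6))
```

Under these assumptions the asymmetric variant costs fewer MAdds than the symmetric inverted residual at the same expansion factor, which is the headroom the paper describes reinvesting into the second pointwise convolution.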

