Dynamic Slimmable Network

03/24/2021
by Changlin Li, et al.

Current dynamic networks and dynamic pruning methods have shown promising capability in reducing theoretical computational complexity. However, dynamic sparse patterns on convolutional filters fail to achieve actual acceleration in real-world implementations due to the extra burden of indexing, weight-copying, or zero-masking. Here, we explore a dynamic network slimming regime, named Dynamic Slimmable Network (DS-Net), which achieves good hardware efficiency by dynamically adjusting the number of filters at test time with respect to different inputs, while keeping filters stored statically and contiguously in hardware to avoid that extra burden. DS-Net gains the ability of dynamic inference from the proposed double-headed dynamic gate, which comprises an attention head and a slimming head that predictively adjust the network width with negligible extra computation cost. To ensure the generality of each candidate architecture and the fairness of the gate, we propose a disentangled two-stage training scheme inspired by one-shot NAS. In the first stage, a novel training technique for weight-sharing networks, In-place Ensemble Bootstrapping, improves supernet training efficacy. In the second stage, Sandwich Gate Sparsification assists gate training by identifying easy and hard samples online. Extensive experiments demonstrate that DS-Net consistently outperforms its static counterparts as well as state-of-the-art static and dynamic model compression methods by a large margin (up to 5.9%), achieving 2-4x computation reduction and 1.62x real-world acceleration over ResNet-50 and MobileNet with minimal accuracy drops on ImageNet. Code release: https://github.com/changlin31/DS-Net .
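To make the core idea concrete, below is a minimal PyTorch sketch of dynamic weight slicing with a double-headed gate: the slimming head picks how many of the first filters to run, the attention head reweights channels, and the convolution only slices a contiguous prefix of its stored weights, so no indexing, weight-copying, or zero-masking is needed. The class names and layer sizes here (SliceableConv2d, DoubleHeadedGate, DynamicBlock) are illustrative assumptions, not the API of the official DS-Net repository.

```python
# Minimal sketch (PyTorch) of dynamic weight slicing with a double-headed gate.
# Names and hyperparameters are illustrative, not the official DS-Net implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SliceableConv2d(nn.Conv2d):
    """Conv layer whose output channels can be sliced contiguously at run time.

    Only the first `out_ch` filters are used, so weights stay stored statically
    and contiguously -- no indexing, weight-copying, or zero-masking.
    """

    def forward(self, x, out_ch):
        weight = self.weight[:out_ch, : x.size(1)]            # contiguous slice of filters
        bias = self.bias[:out_ch] if self.bias is not None else None
        return F.conv2d(x, weight, bias, self.stride, self.padding)


class DoubleHeadedGate(nn.Module):
    """Predicts a width choice (slimming head) and a channel reweighting (attention head)."""

    def __init__(self, in_ch, num_widths, reduction=16):
        super().__init__()
        hidden = max(in_ch // reduction, 8)
        self.fc = nn.Linear(in_ch, hidden)
        self.slimming_head = nn.Linear(hidden, num_widths)    # picks a width index
        self.attention_head = nn.Linear(hidden, in_ch)        # SE-style channel attention

    def forward(self, x):
        z = F.relu(self.fc(x.mean(dim=(2, 3))))               # global average pooling
        return self.slimming_head(z), torch.sigmoid(self.attention_head(z))


class DynamicBlock(nn.Module):
    """One dynamic layer: the gate decides how many filters of the conv to run."""

    def __init__(self, in_ch, max_out_ch, widths=(0.25, 0.5, 0.75, 1.0)):
        super().__init__()
        self.widths = widths
        self.conv = SliceableConv2d(in_ch, max_out_ch, kernel_size=3, padding=1)
        self.gate = DoubleHeadedGate(in_ch, num_widths=len(widths))

    def forward(self, x):
        width_logits, attention = self.gate(x)
        x = x * attention.unsqueeze(-1).unsqueeze(-1)          # apply channel attention
        # For brevity the whole batch shares one width choice here; DS-Net routes
        # per sample, which needs grouped execution in practice.
        idx = width_logits.mean(dim=0).argmax().item()
        out_ch = int(self.widths[idx] * self.conv.out_channels)
        return F.relu(self.conv(x, out_ch))


if __name__ == "__main__":
    block = DynamicBlock(in_ch=32, max_out_ch=64)
    y = block(torch.randn(2, 32, 56, 56))
    print(y.shape)  # channel count depends on the gate's width choice
```

Training such a block follows the paper's disentangled two-stage scheme: first train the weight-sharing supernet over all width choices (with In-place Ensemble Bootstrapping), then fix the weights and train only the gate (with Sandwich Gate Sparsification).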


