DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers

09/21/2021
by   Changlin Li, et al.

Dynamic networks have shown promising capability in reducing theoretical computation complexity by adapting their architectures to the input during inference. However, their practical runtime usually lags behind the theoretical acceleration due to inefficient sparsity. Here, we explore a hardware-efficient dynamic inference regime, named dynamic weight slicing, which adaptively slices a part of the network parameters for inputs of diverse difficulty levels, while keeping parameters stored statically and contiguously in hardware to avoid the extra burden of sparse computation. Based on this scheme, we present dynamic slimmable network (DS-Net) and dynamic slice-able network (DS-Net++) by input-dependently adjusting filter numbers of CNNs and multiple dimensions in both CNNs and transformers, respectively. To ensure sub-network generality and routing fairness, we propose a disentangled two-stage optimization scheme with training techniques such as in-place bootstrapping (IB), multi-view consistency (MvCo) and sandwich gate sparsification (SGS) to train the supernet and the gate separately. Extensive experiments on 4 datasets and 3 different network architectures demonstrate that our method consistently outperforms state-of-the-art static and dynamic model compression methods by a large margin (up to 6.6%), and achieves 2-4x computation reduction and 1.62x real-world acceleration over MobileNet, ResNet-50 and Vision Transformer, with minimal accuracy drops (0.1-0.3%).
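To make the core idea concrete, below is a minimal sketch of dynamic weight slicing in PyTorch: a convolution whose effective width is chosen per input by slicing the first k filters of a densely stored weight tensor, so computation shrinks without any sparse indexing. The class and function names (SliceableConv2d, ChannelGate) and the slice ratios are illustrative assumptions, not the paper's actual API or gating design.

```python
# Hedged sketch of dynamic weight slicing (not the official DS-Net++ code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SliceableConv2d(nn.Module):
    """Conv layer whose output width is chosen at run time by slicing the
    first k filters of a densely stored weight tensor (contiguous memory,
    no sparse computation)."""
    def __init__(self, in_ch, max_out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(max_out_ch, in_ch, kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(max_out_ch))
        self.stride, self.padding = stride, padding

    def forward(self, x, k):
        # Slice a contiguous block of filters; standard dense conv kernels
        # run on the sliced weights without extra overhead.
        w, b = self.weight[:k], self.bias[:k]
        return F.conv2d(x, w, b, stride=self.stride, padding=self.padding)

class ChannelGate(nn.Module):
    """Tiny input-dependent router that picks one of several slice ratios."""
    def __init__(self, in_ch, ratios=(0.25, 0.5, 0.75, 1.0)):
        super().__init__()
        self.ratios = ratios
        self.fc = nn.Linear(in_ch, len(ratios))

    def forward(self, x):
        # Global-average-pool the input features, then pick a width index.
        logits = self.fc(x.mean(dim=(2, 3)))
        return logits.argmax(dim=1)

if __name__ == "__main__":
    conv = SliceableConv2d(in_ch=16, max_out_ch=64, kernel_size=3, padding=1)
    gate = ChannelGate(in_ch=16)
    x = torch.randn(1, 16, 32, 32)
    idx = gate(x)[0].item()                 # routing decision for this input
    k = max(1, int(gate.ratios[idx] * 64))  # number of filters to keep
    y = conv(x, k)
    print(y.shape)                          # narrower output for "easy" inputs
```

In the actual method, the gate and the weight-sliced supernet are trained in two disentangled stages (with IB, MvCo and SGS); the sketch above only illustrates why slicing contiguous filters keeps inference hardware-friendly.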


