Practical Network Acceleration with Tiny Sets: Hypothesis, Theory, and Algorithm

03/02/2023
by Guo-Hua Wang, et al.

Due to data privacy concerns, accelerating networks with tiny training sets has become a critical need in practice. Previous methods achieved promising empirical results through filter-level pruning. In this paper, we study this problem theoretically and propose an effective algorithm that aligns well with our theoretical results. First, we propose the finetune convexity hypothesis to explain why recent few-shot compression algorithms do not suffer from overfitting. Based on this hypothesis, we establish a theory that explains these methods for the first time. Compared with naively finetuning a pruned network, feature mimicking is proved to achieve a lower variance of parameters and hence enjoys easier optimization. Building on these theoretical conclusions, we argue that dropping blocks is a fundamentally superior few-shot compression scheme, offering both a more convex optimization problem and a higher acceleration ratio. To choose which blocks to drop, we propose a new metric, recoverability, which effectively measures the difficulty of recovering the compressed network. Finally, we propose an algorithm named PRACTISE to accelerate networks using only tiny training sets. PRACTISE outperforms previous methods by a significant margin: for a 22% latency reduction, it surpasses them by on average 7 percentage points of top-1 accuracy on ImageNet-1k. It also works well in data-free and out-of-domain data settings. Our code is at https://github.com/DoctorKey/Practise.
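The following is a minimal sketch (in PyTorch) of the two ingredients the abstract highlights: dropping a residual block and recovering the compressed network by feature mimicking on a tiny training set. It is not the authors' PRACTISE implementation; the chosen block (layer3[3]), the trainable scope, the optimizer settings, and the placeholder tiny set are all illustrative assumptions, and block selection in the paper is driven by the recoverability metric rather than a fixed index.

# Minimal sketch: block dropping + feature mimicking on a tiny set (assumptions noted above).
import copy
import torch
import torch.nn as nn
from torchvision.models import resnet34

teacher = resnet34(weights=None).eval()     # original (uncompressed) network
student = copy.deepcopy(teacher)            # network to be accelerated

# Drop one residual block (replace it with identity) to reduce latency.
# Which block to drop is what the paper's recoverability metric decides;
# layer3[3] is only an example here.
student.layer3[3] = nn.Identity()

# Feature mimicking: train only the layers around the dropped block so that the
# student's pooled features match the teacher's on a tiny set of images.
for p in student.parameters():
    p.requires_grad = False
for p in student.layer3.parameters():
    p.requires_grad = True

def features(model, x):
    # Forward pass up to the global-average-pooled feature vector.
    x = model.conv1(x); x = model.bn1(x); x = model.relu(x); x = model.maxpool(x)
    x = model.layer1(x); x = model.layer2(x); x = model.layer3(x); x = model.layer4(x)
    return torch.flatten(model.avgpool(x), 1)

optimizer = torch.optim.SGD(
    [p for p in student.parameters() if p.requires_grad], lr=0.01, momentum=0.9)
mse = nn.MSELoss()

# Placeholder tiny set: in the few-shot setting this would be a handful of real images.
tiny_set = [torch.randn(8, 3, 224, 224) for _ in range(10)]

student.train()
for epoch in range(20):
    for images in tiny_set:
        with torch.no_grad():
            target = features(teacher, images)          # teacher features as targets
        loss = mse(features(student, images), target)   # mimic teacher features
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

In this sketch, mimicking intermediate/pooled features (rather than finetuning on labels) is what the paper argues yields lower parameter variance and an easier optimization problem when only a tiny training set is available.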

