
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression

07/20/2017
by   Jian-Hao Luo, et al.
Nanjing University
Shanghai Jiao Tong University

We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both the training and inference stages. We focus on filter-level pruning, i.e., a whole filter is discarded if it is less important. Our method does not change the original network structure, so it is fully supported by any off-the-shelf deep learning library. We formally establish filter pruning as an optimization problem, and reveal that filters should be pruned based on statistics computed from the next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which advances the state of the art. We also report the performance of ThiNet on the ILSVRC-12 benchmark. ThiNet achieves a 3.31× FLOPs reduction and 16.63× compression on VGG-16, with only a 0.52% top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can still remove more than half of the parameters and FLOPs, at the cost of roughly a 1% top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model of only 5.05MB, preserving AlexNet-level accuracy while showing much stronger generalization ability.
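The core idea above, selecting filters using statistics from the next layer, can be cast as a greedy search: repeatedly remove the channel whose removal least increases the reconstruction error of the next layer's sampled outputs. The sketch below illustrates this under simplifying assumptions (the function name, the dense `(samples, channels)` input layout, and the stopping rule are illustrative, not the authors' released code):

```python
import numpy as np

def thinet_select(x, keep_ratio):
    """Greedy channel selection in the spirit of ThiNet.

    x: (m, C) array; row i holds the per-channel contributions whose sum
       reconstructs one sampled activation of the *next* layer.
    keep_ratio: fraction of input channels to keep.
    Returns the sorted indices of channels to KEEP.
    """
    m, C = x.shape
    n_keep = max(1, int(C * keep_ratio))
    removed, remaining = [], list(range(C))
    while C - len(removed) > n_keep:
        # Pick the channel whose removal adds the least squared
        # reconstruction error over all sampled activations.
        best_c, best_err = None, float("inf")
        for c in remaining:
            err = float(np.sum(x[:, removed + [c]].sum(axis=1) ** 2))
            if err < best_err:
                best_c, best_err = c, err
        removed.append(best_c)
        remaining.remove(best_c)
    return sorted(remaining)
```

In practice the kept channels' weights would then be fine-tuned (e.g., by least squares and retraining) to recover accuracy; this sketch only covers the selection step.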



Code Repositories

ThiNet

caffe model of ICCV'17 paper - ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression https://arxiv.org/abs/1707.06342

