2PFPCE: Two-Phase Filter Pruning Based on Conditional Entropy

09/06/2018
by Chuhan Min, et al.

Deep Convolutional Neural Networks (CNNs) offer remarkable performance on classification and regression tasks in many high-dimensional problems and have been widely utilized in real-world cognitive applications. However, the high computational cost of CNNs greatly hinders their deployment in resource-constrained applications, real-time systems, and edge computing platforms. To overcome this challenge, we propose a novel filter-pruning framework, two-phase filter pruning based on conditional entropy, namely 2PFPCE, to compress CNN models and reduce inference time with marginal performance degradation. In our proposed method, we formulate the filter pruning process as an optimization problem and propose a novel filter selection criterion measured by conditional entropy. Based on the assumption that the representation of neurons should be evenly distributed, we also develop a maximum-entropy filter freezing technique that reduces overfitting. We compare two filter pruning strategies: global and layer-wise. Our experimental results show that combining these two strategies achieves a higher neural network compression ratio than applying either of them alone under the same accuracy-drop threshold. Two-phase pruning, that is, combining both global and layer-wise strategies, achieves a 10X FLOPs reduction and a 46% inference-time reduction on VGG-16, with 2% accuracy degradation.
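To make the selection criterion concrete, below is a minimal Python sketch that ranks filters by a discretized plug-in estimate of the conditional entropy H(Y | A_j) between class labels Y and each filter's mean activation A_j on a held-out batch. The function names, the 16-bin discretization, and the use of per-filter mean activations are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def conditional_entropy(act, labels, n_bins=16):
    """Plug-in estimate of H(Y | A): discretize a filter's mean
    activation `act` into `n_bins` bins, then average the label
    entropy within each bin, weighted by the bin probability.
    A simplified stand-in for the paper's criterion."""
    edges = np.histogram_bin_edges(act, bins=n_bins)
    bins = np.digitize(act, edges)
    h = 0.0
    for b in np.unique(bins):
        mask = bins == b
        p_bin = mask.mean()                       # P(A in bin b)
        _, counts = np.unique(labels[mask], return_counts=True)
        p_y = counts / counts.sum()               # P(Y | A in bin b)
        h -= p_bin * (p_y * np.log2(p_y)).sum()   # accumulate H(Y | A)
    return h

def rank_filters_for_pruning(acts, labels):
    """acts: (n_samples, n_filters) per-filter mean activations on a
    held-out batch; labels: (n_samples,) integer class labels.
    Returns filter indices sorted by decreasing H(Y | A_j): filters
    whose activations say the least about the labels come first."""
    scores = np.array([conditional_entropy(acts[:, j], labels)
                       for j in range(acts.shape[1])])
    return np.argsort(scores)[::-1]

# Toy usage on random data: 512 samples, 64 filters, 10 classes.
rng = np.random.default_rng(0)
acts = rng.normal(size=(512, 64))
labels = rng.integers(0, 10, size=512)
prune_first = rank_filters_for_pruning(acts, labels)[:8]
```

Under the two-phase scheme described above, a global phase would apply this ranking across the concatenated filters of all layers at once, while a layer-wise phase would rank within each layer under a per-layer pruning budget.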

