LeGR: Filter Pruning via Learned Global Ranking

04/28/2019
by Ting-Wu Chin, et al.

Filter pruning has been shown to be effective for learning resource-constrained convolutional neural networks (CNNs). However, prior methods for resource-constrained filter pruning have limitations that hinder their effectiveness and efficiency. When searching for constraint-satisfying CNNs, prior methods either alter the optimization objective or adopt local search algorithms with heuristic parameterization, both of which are sub-optimal, especially in the low-resource regime. From the efficiency perspective, prior methods are often costly when searching for constraint-satisfying CNNs. In this work, we propose learned global ranking, dubbed LeGR, which improves upon prior art in both of these dimensions. Inspired by theoretical analysis, LeGR is parameterized to learn layer-wise affine transformations over the filter norms, which together induce a learned global ranking of filters. With a global ranking, resource-constrained filter pruning at various constraint levels can be done efficiently. We conduct extensive empirical analyses to demonstrate the effectiveness of the proposed algorithm with ResNet and MobileNetV2 networks on the CIFAR-10, CIFAR-100, Bird-200, and ImageNet datasets. Code is publicly available at https://github.com/cmu-enyac/LeGR.
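The core idea can be sketched as follows: score each filter by a per-layer affine transform of its norm, sort all filters globally, and prune from the bottom until the resource constraint is met. This is a minimal illustrative sketch, not the authors' implementation; the function names and the hand-picked `alphas`/`kappas` values are assumptions (in LeGR, these affine coefficients are learned, e.g. via search), and the resource constraint is simplified to a raw filter count.

```python
def global_ranking(filter_norms, alphas, kappas):
    """Score each filter as alpha_l * norm + kappa_l, then rank globally.

    filter_norms: list (one entry per layer) of lists of filter norms.
    alphas, kappas: per-layer affine coefficients (learned in LeGR;
                    hard-coded here for illustration).
    Returns (layer, filter_index) pairs sorted least- to most-important.
    """
    scored = []
    for layer, norms in enumerate(filter_norms):
        for idx, norm in enumerate(norms):
            scored.append((alphas[layer] * norm + kappas[layer], layer, idx))
    scored.sort()  # ascending: least important filters come first
    return [(layer, idx) for _, layer, idx in scored]


def prune_to_constraint(filter_norms, alphas, kappas, num_to_prune):
    """Drop the globally lowest-ranked filters to meet a resource budget."""
    ranking = global_ranking(filter_norms, alphas, kappas)
    return set(ranking[:num_to_prune])


# Toy example: two layers with three filters each.
norms = [[0.9, 0.1, 0.5], [0.2, 0.8, 0.3]]
alphas = [1.0, 2.0]   # assumed per-layer scale
kappas = [0.0, 0.1]   # assumed per-layer shift
pruned = prune_to_constraint(norms, alphas, kappas, num_to_prune=2)
```

Because the ranking is global, tightening the constraint only means taking a larger prefix of the same sorted list, which is what makes pruning at multiple constraint levels cheap once the affine coefficients are learned.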


