CPOT: Channel Pruning via Optimal Transport

05/21/2020
by Yucong Shen, et al.

Recent advances in deep neural networks (DNNs) have led to a tremendous growth in the number of network parameters, making the deployment of DNNs on platforms with limited resources extremely difficult. Various pruning methods have therefore been developed to compress deep network architectures and accelerate inference. Most existing channel pruning methods discard the less important filters according to well-designed filter ranking criteria. However, due to the limited interpretability of deep learning models, designing an appropriate ranking criterion that reliably distinguishes redundant filters is difficult. To address this challenge, we propose a new technique, Channel Pruning via Optimal Transport (CPOT). Specifically, we locate the Wasserstein barycenter for the channels of each layer in the deep model, i.e., the mean of a set of probability distributions under the optimal transport metric, and then prune the redundant information located by these barycenters. Finally, we empirically demonstrate that, on classification tasks, CPOT outperforms state-of-the-art methods in pruning ResNet-20, ResNet-32, ResNet-56, and ResNet-110. Furthermore, we show that CPOT effectively compresses StarGAN models in the more challenging setting of image-to-image translation.
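The core step described in the abstract, computing a layer-wise Wasserstein barycenter of per-filter weight distributions and comparing each filter against it, can be sketched with the POT (Python Optimal Transport) library. The snippet below is only an illustration of that idea, not the authors' CPOT implementation: the histogram representation of filters, the entropic regularization, and the rule of treating filters closest to the barycenter as redundant are assumptions made here for concreteness.

    # Minimal sketch of barycenter-based channel ranking (assumes: pip install pot).
    # NOT the authors' CPOT code; it only illustrates comparing per-filter weight
    # distributions to their layer-wise Wasserstein barycenter.
    import numpy as np
    import ot  # Python Optimal Transport

    def rank_filters_by_barycenter(conv_weight, n_bins=32, reg=1e-2):
        """conv_weight: array of shape (out_channels, in_channels, k, k)."""
        out_channels = conv_weight.shape[0]
        flat = conv_weight.reshape(out_channels, -1)

        # Represent each filter as a histogram over a shared support.
        lo, hi = flat.min(), flat.max()
        support = np.linspace(lo, hi, n_bins)
        hists = np.stack(
            [np.histogram(f, bins=n_bins, range=(lo, hi))[0] + 1e-8 for f in flat],
            axis=1,
        )                                      # shape (n_bins, out_channels)
        hists /= hists.sum(axis=0, keepdims=True)

        # Ground cost between histogram bins.
        M = ot.dist(support.reshape(-1, 1), support.reshape(-1, 1))
        M /= M.max()

        # Entropic Wasserstein barycenter of the layer's filter distributions.
        bary = ot.bregman.barycenter(hists, M, reg)

        # Distance of each filter's distribution to the barycenter; filters
        # closest to the barycenter are treated as redundant in this sketch
        # (an assumption, not necessarily the exact CPOT criterion).
        dists = np.array(
            [ot.sinkhorn2(hists[:, i], bary, M, reg) for i in range(out_channels)]
        )
        return np.argsort(dists)  # ascending: candidates to prune first

A pruning loop would then drop the first p percent of the returned indices for each convolutional layer and fine-tune the network, which is the usual practice after criterion-based channel pruning.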


