Toward domain generalized pruning by scoring out-of-distribution importance

10/25/2022
by Rizhao Cai, et al.

Filter pruning has been widely used to compress convolutional neural networks and reduce computation costs at the deployment stage. Recent studies have shown that filter pruning can achieve lossless compression of deep neural networks, removing redundant filters (kernels) without sacrificing accuracy. However, such evaluations assume that the training and testing data come from similar environmental conditions (independent and identically distributed), and how filter pruning affects cross-domain generalization (out-of-distribution) performance has been largely ignored. We conduct extensive empirical experiments and reveal that although intra-domain performance can be maintained after filter pruning, cross-domain performance degrades to a large extent. Since scoring a filter's importance is one of the central problems in pruning, we design an importance-scoring estimator that uses the variance of domain-level risks to account for the pruning risk on unseen distributions, thereby retaining filters that generalize better across domains. Experiments show that, under the same pruning ratio, our method achieves significantly better cross-domain generalization than the baseline filter pruning method. As a first attempt, our work sheds light on the joint problem of domain generalization and filter pruning.
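To make the idea concrete, the following is a minimal PyTorch sketch of a variance-penalized filter-importance score. The Taylor-style base criterion, the function names, and the trade-off weight `lam` are illustrative assumptions on my part; the paper's exact estimator may differ.

```python
import torch
import torch.nn as nn

def per_domain_filter_scores(model, loader, criterion, device="cpu"):
    """Per-filter first-order Taylor importance |grad * weight|, accumulated
    over one domain's data. This base criterion is a common choice in the
    pruning literature and an assumption here, not necessarily the paper's."""
    scores = {name: torch.zeros(m.out_channels)
              for name, m in model.named_modules() if isinstance(m, nn.Conv2d)}
    model.to(device).eval()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        criterion(model(x), y).backward()
        for name, m in model.named_modules():
            if isinstance(m, nn.Conv2d) and m.weight.grad is not None:
                # Collapse (out_ch, in_ch, kH, kW) to one scalar per filter.
                s = (m.weight.grad * m.weight).abs().sum(dim=(1, 2, 3))
                scores[name] += s.detach().cpu()
    return scores

def domain_generalized_scores(model, domain_loaders, criterion, lam=1.0):
    """Combine per-domain scores as mean minus lam * variance across domains,
    so filters whose importance is unstable across domains rank lower.
    `lam` is a hypothetical trade-off weight, not a value from the paper."""
    per_domain = [per_domain_filter_scores(model, dl, criterion)
                  for dl in domain_loaders]
    combined = {}
    for name in per_domain[0]:
        stacked = torch.stack([d[name] for d in per_domain])  # (num_domains, out_ch)
        combined[name] = stacked.mean(dim=0) - lam * stacked.var(dim=0)
    return combined
```

Under this sketch, filters with the lowest combined score would be pruned first: penalizing cross-domain variance biases the pruner toward filters whose usefulness is stable, which is the intuition behind scoring out-of-distribution importance.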


Related research

04/08/2019 · Meta Filter Pruning to Accelerate Deep Convolutional Neural Networks
Existing methods usually utilize pre-defined criterions, such as p-norm,...

06/25/2019 · COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning
Neural network compression empowers the effective yet unwieldy deep conv...

06/19/2017 · An Entropy-based Pruning Method for CNN Compression
This paper aims to simultaneously accelerate and compress off-the-shelf ...

06/22/2023 · Pruning for Better Domain Generalizability
In this paper, we investigate whether we could use pruning as a reliable...

12/19/2022 · Exploring Optimal Substructure for Out-of-distribution Generalization via Feature-targeted Model Pruning
Recent studies show that even highly biased dense networks contain an un...

11/18/2019 · Provable Filter Pruning for Efficient Neural Networks
We present a provable, sampling-based approach for generating compact Co...

03/02/2023 · Average of Pruning: Improving Performance and Stability of Out-of-Distribution Detection
Detecting Out-of-distribution (OOD) inputs have been a critical issue fo...
