X-Pruner: eXplainable Pruning for Vision Transformers

03/08/2023
by Lu Yu, et al.

Recently, vision transformer models have become prominent for a range of tasks. These models, however, usually incur intensive computational costs, making them impractical to deploy on edge platforms. Recent studies have proposed pruning transformers under a variety of criteria, such as magnitude-based, gradient-based, and mask-based ones. However, previous works rely heavily on hand-crafted rules and may involve time-consuming retraining or searching. As a result, measuring weight importance automatically and efficiently remains an open problem. To solve this problem, we propose a novel explainable pruning framework dubbed X-Pruner, which builds explainability into the pruning criterion. Inspired by model explanation techniques, we assign each prunable unit an explainability-aware mask that measures the unit's contribution to predicting every class and is fully differentiable. Then, to preserve the most informative units, we rank all units by the absolute sum of their explainability-aware masks and use this ranking to prune enough units to meet the target resource constraint. To verify and evaluate our method, we apply X-Pruner to representative transformer models, including DeiT and the Swin Transformer. Comprehensive experimental results demonstrate that X-Pruner outperforms state-of-the-art black-box methods, achieving significantly reduced computational costs with only slight performance degradation.
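To make the ranking-and-pruning step concrete, here is a minimal sketch in PyTorch, assuming the learned explainability-aware masks are already available as a [num_units, num_classes] tensor. The function name rank_and_prune, the unit_costs tensor, and the target_cost budget are illustrative assumptions, not part of the paper's actual implementation.

```python
import torch

def rank_and_prune(masks: torch.Tensor, unit_costs: torch.Tensor,
                   target_cost: float) -> torch.Tensor:
    """Rank prunable units by the absolute sum of their explainability-aware
    masks over all classes, then keep the most important units while the
    kept units' total cost fits under `target_cost`.

    masks:      [num_units, num_classes] learned mask values (assumed given)
    unit_costs: [num_units] cost of each unit (e.g., its FLOPs share)
    returns:    boolean keep-mask of shape [num_units]
    """
    # A unit's importance = absolute sum of its per-class mask values.
    importance = masks.abs().sum(dim=1)

    # Visit units from most to least important.
    order = torch.argsort(importance, descending=True)

    keep = torch.zeros(masks.shape[0], dtype=torch.bool)
    spent = 0.0
    for idx in order:
        cost = float(unit_costs[idx])
        if spent + cost <= target_cost:
            keep[idx] = True
            spent += cost
    return keep


# Toy usage: 8 attention heads, 10 classes, keep under 60% of total cost.
torch.manual_seed(0)
masks = torch.randn(8, 10)   # stand-in for the learned masks
costs = torch.ones(8) / 8    # equal cost per head, purely illustrative
keep = rank_and_prune(masks, costs, target_cost=0.6)
print("kept units:", keep.nonzero(as_tuple=True)[0].tolist())
```

Greedily admitting units in importance order until the budget is exhausted is one simple way to realize "prune enough units to meet the target resource constraint"; the paper may enforce the constraint differently.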
