CP-ViT: Cascade Vision Transformer Pruning via Progressive Sparsity Prediction

03/09/2022
by Zhuoran Song, et al.

Vision transformers (ViTs) achieve competitive accuracy on a variety of computer vision tasks, but their computational cost impedes deployment on resource-limited mobile devices. We explore sparsity in ViTs and observe that a subset of informative patches and heads suffices for accurate image recognition. In this paper, we propose a cascade pruning framework, CP-ViT, which predicts sparsity in ViT models progressively and dynamically to reduce computational redundancy while minimizing accuracy loss. Specifically, we define a cumulative score that preserves the informative patches and heads across the layers of the ViT model for better accuracy. We also propose a dynamic pruning-ratio adjustment technique based on the layer-aware attention range. CP-ViT generalizes well for practical deployment: it applies to a wide range of ViT models and achieves superior accuracy with or without fine-tuning. Extensive experiments on ImageNet, CIFAR-10, and CIFAR-100 with various pre-trained models demonstrate the effectiveness and efficiency of CP-ViT. By progressively pruning 50% of the patches, CP-ViT reduces FLOPs by over 40% while keeping the accuracy loss within 1%.
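The abstract does not give the exact formulation, so as a rough illustration of the progressive scoring idea, the following PyTorch sketch prunes patch tokens at a single layer using a cumulative attention score. The helper prune_patches, the use of [CLS]-to-patch attention as the per-layer signal, and the top-k selection are assumptions made for illustration only; CP-ViT's actual cumulative score and its layer-aware pruning-ratio adjustment are defined in the paper.

    import torch

    def prune_patches(attn, cum_score, keep_ratio):
        """Keep the patch tokens with the highest cumulative score (illustrative sketch).

        attn:       (heads, tokens, tokens) attention weights of one layer;
                    token 0 is assumed to be the [CLS] token.
        cum_score:  (tokens,) score accumulated over the earlier layers.
        keep_ratio: fraction of patch tokens to keep at this layer.
        """
        # Assumption: use the attention [CLS] pays to each patch, averaged
        # over heads, as this layer's informativeness signal.
        layer_score = attn.mean(dim=0)[0, 1:]                 # (tokens - 1,)
        cum_score = cum_score.clone()
        cum_score[1:] += layer_score                          # accumulate across layers

        n_keep = max(1, int(keep_ratio * (attn.shape[-1] - 1)))
        top = torch.topk(cum_score[1:], n_keep).indices + 1   # offset past [CLS]
        keep = torch.cat([torch.tensor([0]), top.sort().values])
        return keep, cum_score[keep]

    # Usage on random weights: 12 heads, 196 patches + [CLS], keep 50% of patches.
    attn = torch.softmax(torch.randn(12, 197, 197), dim=-1)
    keep, score = prune_patches(attn, torch.zeros(197), keep_ratio=0.5)

In a full cascade, a caller would index the token sequence with keep before running the next layer and would vary keep_ratio from layer to layer, which is where the abstract's layer-aware attention range would enter.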
