Knowledge Distillation in Vision Transformers: A Critical Review

02/04/2023
by   Gousia Habib, et al.

In Natural Language Processing (NLP), Transformers have revolutionized the field with their attention-based encoder-decoder model. Recently, pioneering works have employed Transformer-like architectures in Computer Vision (CV), reporting outstanding performance on tasks such as image classification, object detection, and semantic segmentation. Vision Transformers (ViTs) have demonstrated impressive performance improvements over Convolutional Neural Networks (CNNs) owing to their competitive modelling capabilities. However, these architectures demand massive computational resources, which makes them difficult to deploy in resource-constrained applications. Many solutions have been developed to address this issue, such as compressive transformers and compression functions including dilated convolution, min-max pooling, and 1D convolution. Model compression has recently attracted considerable research attention as a potential remedy. A number of model compression methods have been proposed in the literature, such as weight quantization, weight multiplexing, pruning, and Knowledge Distillation (KD). However, weight quantization, pruning, and weight multiplexing typically involve complex compression pipelines. KD, by contrast, has been found to be a simple and highly effective model compression technique that allows a relatively small model to perform a task almost as accurately as a complex model. This paper discusses various KD-based approaches for effective compression of ViT models, elucidates the role KD plays in reducing the computational and memory requirements of these models, and presents the various challenges faced by ViTs that are yet to be resolved.
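To make the distillation idea concrete, here is a minimal sketch of the classic soft-label KD objective (Hinton et al.'s formulation): the student is trained on a blend of the hard-label cross-entropy and the KL divergence from the teacher's temperature-softened distribution. This is an illustrative pure-Python implementation, not code from the paper; the function names, the temperature `T=4.0`, and the mixing weight `alpha=0.3` are assumptions chosen for the example.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T yields a softer distribution
    # that exposes the teacher's "dark knowledge" about non-target classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.3):
    """Sketch of the soft-label KD loss: hard-label cross-entropy blended
    with KL(teacher_T || student_T) at temperature T (hyperparameters are
    illustrative, not taken from the reviewed paper)."""
    # Hard-label term: standard cross-entropy at temperature 1.
    p_student = softmax(student_logits)
    hard_loss = -math.log(p_student[true_label])

    # Soft-label term: KL divergence between softened distributions.
    p_teacher_T = softmax(teacher_logits, T)
    p_student_T = softmax(student_logits, T)
    soft_loss = sum(pt * math.log(pt / ps)
                    for pt, ps in zip(p_teacher_T, p_student_T))

    # The T^2 factor keeps the soft term's gradient magnitude
    # comparable to the hard term as T grows.
    return alpha * hard_loss + (1 - alpha) * (T ** 2) * soft_loss
```

In a training loop, `kd_loss` would replace the plain cross-entropy objective for the compact student ViT, with the large teacher's logits computed in inference mode; only the student's parameters are updated.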


