Analyzing Compression Techniques for Computer Vision

05/14/2023
by Maniratnam Mandal, et al.

Compressing deep networks is highly desirable for practical computer vision applications. Several techniques have been explored in the literature, and research has been done on finding efficient strategies for combining them. In this project, we explore three basic compression techniques for small-scale recognition tasks: knowledge distillation, pruning, and quantization. Beyond the individual methods, we also test the efficacy of combining them sequentially. We analyze them on the MNIST and CIFAR-10 datasets and present the results along with a few observations inferred from them.
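To make the three techniques and their sequential combination concrete, below is a minimal PyTorch sketch of a distill-then-prune-then-quantize pipeline. It is illustrative only, not the paper's actual implementation: `teacher`, `student`, and `loader` are hypothetical placeholders, and the temperature, mixing weight, and sparsity values are arbitrary defaults rather than the authors' settings.

```python
# Sketch: sequential compression (distill -> prune -> quantize).
# `teacher`, `student`, `loader` are assumed placeholders, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune


def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target KL term (weighted by alpha) plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to compensate for the softened logits
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard


def compress(teacher, student, loader, epochs=10, sparsity=0.5):
    opt = torch.optim.Adam(student.parameters())
    teacher.eval()
    # 1) Knowledge distillation: train the student on the teacher's logits.
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                t_logits = teacher(x)
            loss = distillation_loss(student(x), t_logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    # 2) Magnitude (L1) pruning: zero the smallest weights in each linear layer.
    for m in student.modules():
        if isinstance(m, nn.Linear):
            prune.l1_unstructured(m, name="weight", amount=sparsity)
            prune.remove(m, "weight")  # bake the sparsity into the weights
    # 3) Dynamic quantization: store linear-layer weights as int8.
    return torch.quantization.quantize_dynamic(student, {nn.Linear}, dtype=torch.qint8)
```

The ordering shown here (distillation first, quantization last) is one natural sequence, since pruning and quantization are typically applied to an already-trained network; other orderings are possible and their relative merits are exactly what combined-compression experiments probe.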


