
An Overview of Neural Network Compression

by James O'Neill, et al.

Overparameterized networks trained to convergence have shown impressive performance in domains such as computer vision and natural language processing. Pushing the state of the art on salient tasks within these domains has meant that these models become ever larger and more difficult for machine learning practitioners to use, given their increasing memory and storage requirements, not to mention their larger carbon footprint. Thus, in recent years there has been a resurgence of interest in model compression techniques, particularly for deep convolutional neural networks and self-attention based networks such as the Transformer. Hence, this paper provides a timely overview of both older and current compression techniques for deep neural networks, including pruning, quantization, tensor decomposition, knowledge distillation and combinations thereof. We assume a basic familiarity with deep learning architectures, namely Recurrent Neural Networks <cit.>, Convolutional Neural Networks <cit.> and Self-Attention based networks <cit.>. Most of the papers discussed were proposed in the context of at least one of these DNN architectures.
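To make one of the surveyed techniques concrete: unstructured magnitude pruning compresses a network by zeroing the weights with the smallest absolute values, on the assumption that they contribute least to the output. The sketch below is a minimal NumPy illustration of that idea, not an implementation from the paper; the function name and sparsity parameter are ours.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the `sparsity` fraction of
    smallest-magnitude entries set to zero (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, sparsity=0.5)  # half the entries zeroed
```

In practice the resulting sparse tensor is stored in a compressed format (or the network is fine-tuned after pruning to recover accuracy), which is where the memory and storage savings discussed in the survey come from.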
