An Overview of Neural Network Compression

06/05/2020
by James O'Neill, et al.

Overparameterized networks trained to convergence have shown impressive performance in domains such as computer vision and natural language processing. Pushing the state of the art on salient tasks within these domains has meant ever-larger models that are increasingly difficult for machine learning practitioners to use, given their growing memory and storage requirements, not to mention their larger carbon footprint. In recent years there has therefore been a resurgence in model compression techniques, particularly for deep convolutional neural networks and self-attention based networks such as the Transformer. This paper provides a timely overview of both older and current compression techniques for deep neural networks, including pruning, quantization, tensor decomposition, knowledge distillation, and combinations thereof. We assume a basic familiarity with deep learning architectures, namely Recurrent Neural Networks <cit.>, Convolutional Neural Networks <cit.>, and Self-Attention based networks <cit.>. Most of the papers discussed were proposed in the context of at least one of these DNN architectures.
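
To make one of the surveyed techniques concrete, below is a minimal sketch of unstructured magnitude pruning, the simplest of the pruning approaches covered in the paper. The `magnitude_prune` helper, the NumPy implementation, and the 90% sparsity target are illustrative assumptions for this overview page, not code taken from the paper itself.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning).

    `sparsity` is the fraction of entries to set to zero, e.g. 0.9 keeps ~10%.
    """
    k = int(np.round(sparsity * weights.size))
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value; everything at or below it is pruned.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Illustrative usage: prune 90% of a random weight matrix.
w = np.random.randn(256, 256).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.9)
print(f"Nonzero fraction after pruning: {np.count_nonzero(w_sparse) / w_sparse.size:.3f}")
```

In practice, as the survey discusses, pruning is usually interleaved with further fine-tuning so the remaining weights can recover the accuracy lost when connections are removed.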
