Architecture Compression

02/08/2019
by Anubhav Ashok, et al.

In this paper we propose a novel approach to model compression termed Architecture Compression. Instead of operating on the weight or filter space of the network like classical model compression methods, our approach operates on the architecture space. A 1-D CNN encoder-decoder is trained to learn a mapping from discrete architecture space to a continuous embedding and back. Additionally, this embedding is jointly trained to regress accuracy and parameter count in order to incorporate information about the architecture's effectiveness on the dataset. During the compression phase, we first encode the network and then perform gradient descent in continuous space to optimize a compression objective function that maximizes accuracy and minimizes parameter count. The final continuous feature is then mapped to a discrete architecture using the decoder. We demonstrate the merits of this approach on visual recognition tasks such as CIFAR-10, CIFAR-100, Fashion-MNIST and SVHN and achieve a greater than 20x compression on CIFAR-10.
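The compression phase described above — optimize a continuous embedding by gradient descent against an objective that trades off predicted accuracy and predicted parameter count — can be illustrated with a minimal sketch. The accuracy and parameter-count regressors here are hypothetical fixed quadratics standing in for the paper's jointly trained predictors, and the embedding is a plain NumPy vector rather than the output of the 1-D CNN encoder; only the gradient-descent loop mirrors the described procedure.

```python
import numpy as np

# Hypothetical stand-ins for the learned regressors (assumptions, not
# the paper's trained networks): accuracy peaks at z = 1, parameter
# count grows with the norm of the embedding.
def predicted_accuracy(z):
    return -np.sum((z - 1.0) ** 2)

def predicted_param_count(z):
    return np.sum(z ** 2)

def compress_embedding(z0, lam=0.5, lr=0.1, steps=200):
    """Gradient ascent on J(z) = accuracy(z) - lam * param_count(z),
    i.e. maximize accuracy while penalizing parameter count."""
    z = z0.copy()
    for _ in range(steps):
        # Analytic gradients of the two toy quadratics above.
        grad_acc = -2.0 * (z - 1.0)
        grad_params = 2.0 * z
        z += lr * (grad_acc - lam * grad_params)
    return z

# The optimized embedding would then be passed to the decoder to
# recover a discrete architecture (not modeled in this sketch).
z_final = compress_embedding(np.zeros(4))
```

With this toy objective the fixed point is z = 1 / (1 + lam), so a larger compression weight `lam` pulls the embedding toward smaller predicted parameter counts at the cost of predicted accuracy, which is the trade-off the objective function encodes.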
