Domain-adaptive deep network compression

09/04/2017
by Marc Masana, et al.

Deep Neural Networks trained on large datasets can be easily transferred to new domains with far fewer labeled examples by a process called fine-tuning. This has the advantage that representations learned in the large source domain can be exploited on smaller target domains. However, networks designed to be optimal for the source task are often prohibitively large for the target task. In this work we address the compression of networks after domain transfer. We focus on compression algorithms based on low-rank matrix decomposition. Existing methods base compression solely on learned network weights and ignore the statistics of network activations. We show that domain transfer leads to large shifts in network activations and that it is desirable to take this into account when compressing. We demonstrate that considering activation statistics when compressing weights leads to a rank-constrained regression problem with a closed-form solution. Because our method takes the target domain into account, it can remove redundancy in the weights more effectively. Experiments show that our Domain Adaptive Low Rank (DALR) method significantly outperforms existing low-rank compression techniques. With our approach, the fc6 layer of VGG19 can be compressed more than 4x as much as with truncated SVD alone, with only a minor loss in accuracy or none at all. When applied to domain-transferred networks, it allows for compression down to only 5-20% of the original parameters with only a minor drop in performance.
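The contrast the abstract draws, between compressing the weights alone (truncated SVD) and compressing them under the target domain's activation statistics (a rank-constrained regression with a closed-form solution), can be sketched in a few lines of numpy. The shapes, data, and the reduced-rank-regression closed form used below are illustrative assumptions, not the paper's exact derivation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a fully connected layer (think fc6):
# W maps d_in inputs to d_out outputs; the rows of X are n activation
# vectors sampled from the *target* domain.
d_in, d_out, n, k = 64, 32, 200, 8
W = rng.standard_normal((d_in, d_out))
# Scale the input dimensions unevenly to mimic shifted activation statistics.
X = rng.standard_normal((n, d_in)) * np.linspace(0.1, 3.0, d_in)

# Baseline: truncated SVD of the weights alone (ignores activations).
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_svd = (U[:, :k] * s[:k]) @ Vt[:k, :]

# Activation-aware compression: minimise ||X W - X Wc||_F s.t. rank(Wc) <= k.
# With X of full column rank, the classic reduced-rank-regression closed form
# is Wc = W V_k V_k^T, where V_k holds the top-k right singular vectors of the
# fitted responses X W.
_, _, Vt_y = np.linalg.svd(X @ W, full_matrices=False)
Vk = Vt_y[:k, :].T
W_dalr = W @ Vk @ Vk.T

# Both factorisations replace one d_in x d_out layer by two layers of total
# size k*(d_in + d_out); e.g. here W @ Vk (64x8) followed by Vk.T (8x32).
err_svd = np.linalg.norm(X @ W - X @ W_svd)
err_dalr = np.linalg.norm(X @ W - X @ W_dalr)
# The activation-aware solution optimises exactly this objective, so its
# reconstruction error on the target activations can never be worse.
print(err_dalr <= err_svd)  # → True
```

The key point mirrored here is that the rank budget is spent where the target-domain activations actually have energy, rather than uniformly over the weight spectrum.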


Related research

- 03/24/2019 · One time is not enough: iterative tensor decomposition for neural network compression — The low-rank tensor approximation is very promising for the compression ...
- 04/19/2018 · Low Rank Structure of Learned Representations — A key feature of neural networks, particularly deep convolutional neural...
- 12/07/2021 · Low-rank Tensor Decomposition for Compression of Convolutional Neural Networks Using Funnel Regularization — Tensor decomposition is one of the fundamental techniques for model compr...
- 09/11/2020 · Compressed Deep Networks: Goodbye SVD, Hello Robust Low-Rank Approximation — A common technique for compressing a neural network is to compute the k-...
- 09/03/2020 · Compression-aware Continual Learning using Singular Value Decomposition — We propose a compression-based continual task learning method that can d...
- 04/23/2018 · Parameter Transfer Unit for Deep Neural Networks — Parameters in deep neural networks which are trained on large-scale data...
- 11/07/2017 · Compression-aware Training of Deep Networks — In recent years, great progress has been made in a variety of applicatio...
