Filter Grafting for Deep Neural Networks

01/15/2020
by Fanxu Meng, et al.

This paper proposes a new learning paradigm called filter grafting, which aims to improve the representation capability of Deep Neural Networks (DNNs). The motivation is that DNNs contain unimportant (invalid) filters (e.g., filters whose l1 norm is close to 0). These filters limit the potential of DNNs because they contribute little to the network's output. Whereas filter pruning removes these invalid filters for efficiency, filter grafting re-activates them to boost accuracy. The activation is achieved by grafting external information (weights) into the invalid filters. To better perform the grafting process, we develop an entropy-based criterion to measure the information of filters and an adaptive weighting strategy to balance the grafted information among networks. After grafting, the network has far fewer invalid filters than its ungrafted counterpart, empowering the model with greater representation capacity. We also perform extensive experiments on classification and recognition tasks to show the superiority of our method. For example, the grafted MobileNetV2 outperforms the non-grafted MobileNetV2 by about 7 percent on the CIFAR-100 dataset.
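As a rough illustration of the mechanics described in the abstract, the sketch below identifies invalid filters by their l1 norm, estimates a layer's information with a binned-weight entropy, and blends each convolutional layer with a peer network's weights using a sigmoid-shaped adaptive coefficient. Everything here (the function names, the bin count, the constants `a` and `c`, and the exact form of the weighting) is an illustrative assumption rather than the paper's exact recipe.

```python
import torch
import torch.nn as nn

def invalid_filter_mask(weight: torch.Tensor, tau: float = 1e-2) -> torch.Tensor:
    """Mark filters whose l1 norm is close to 0 (treated as 'invalid')."""
    return weight.detach().abs().sum(dim=(1, 2, 3)) < tau

def layer_entropy(weight: torch.Tensor, num_bins: int = 10) -> float:
    """Estimate a layer's information as the entropy of its binned weight values."""
    w = weight.detach().flatten()
    hist = torch.histc(w, bins=num_bins)   # bins span [w.min(), w.max()]
    p = hist / hist.sum()
    p = p[p > 0]                            # drop empty bins to avoid log(0)
    return float(-(p * p.log()).sum())

@torch.no_grad()
def graft_layer(w_self: torch.Tensor, w_other: torch.Tensor,
                a: float = 0.4, c: float = 500.0) -> torch.Tensor:
    """Blend this layer with a peer layer: alpha * w_self + (1 - alpha) * w_other.

    alpha is driven by the entropy gap between the two layers, so a more
    informative peer contributes more; with a = 0.4 the layer always keeps
    at least half of its own weights. The sigmoid form and the constants
    a, c are assumptions made for this illustration.
    """
    gap = layer_entropy(w_self) - layer_entropy(w_other)
    alpha = a * torch.sigmoid(torch.tensor(c * gap)).item() + 0.5
    return alpha * w_self + (1.0 - alpha) * w_other

@torch.no_grad()
def graft_networks(net_self: nn.Module, net_other: nn.Module) -> None:
    """Graft every Conv2d layer of a peer network into this one (e.g. once per epoch)."""
    for m_s, m_o in zip(net_self.modules(), net_other.modules()):
        if isinstance(m_s, nn.Conv2d) and isinstance(m_o, nn.Conv2d):
            m_s.weight.copy_(graft_layer(m_s.weight, m_o.weight))
```

With more than two networks, the same routine can be applied in a ring (each network receives weights from the previous one), which is one way to read the abstract's balancing "among networks"; treat this, too, as a sketch rather than the paper's prescribed procedure.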


Related research

10/19/2020 · Softer Pruning, Incremental Regularization
Network pruning is widely used to compress Deep Neural Networks (DNNs). ...

05/28/2019 · Online Filter Clustering and Pruning for Efficient Convnets
Pruning filters is an effective method for accelerating deep neural netw...

01/30/2020 · How Does BN Increase Collapsed Neural Network Filters?
Improving sparsity of deep neural networks (DNNs) is essential for netwo...

07/16/2020 · Deep Learning Backdoors
Intuitively, a backdoor attack against Deep Neural Networks (DNNs) is to...

02/18/2016 · RandomOut: Using a convolutional gradient norm to rescue convolutional filters
Filters in convolutional neural networks are sensitive to their initiali...

01/10/2018 · Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks
In an effort to understand the meaning of the intermediate representatio...

07/20/2020 · Learning Sparse Filters in Deep Convolutional Neural Networks with a l1/l2 Pseudo-Norm
While deep neural networks (DNNs) have proven to be efficient for numero...
