Network Pruning via Transformable Architecture Search

05/23/2019
by Xuanyi Dong et al.

Network pruning reduces the computation cost of an over-parameterized network without degrading its performance. Prevailing pruning algorithms pre-define the width and depth of the pruned network and then transfer parameters from the unpruned network to the pruned one. To break this structural limitation, we propose to apply neural architecture search to directly search for a network with flexible channel and layer sizes. The number of channels/layers is learned by minimizing the loss of the pruned networks. The feature map of the pruned network is an aggregation of K feature-map fragments (generated by K networks of different sizes), which are sampled according to a probability distribution. The loss can be back-propagated not only to the network weights but also to the parameterized distribution, so the size of the channels/layers is tuned explicitly. Specifically, we apply channel-wise interpolation to keep feature maps with different channel sizes aligned in the aggregation procedure. The size with the maximum probability in each distribution serves as the width and depth of the pruned network, whose parameters are then learned by knowledge transfer, e.g., knowledge distillation, from the original network. Experiments on CIFAR-10, CIFAR-100 and ImageNet demonstrate the effectiveness of our new perspective on network pruning compared with traditional pruning algorithms. Various search and knowledge-transfer approaches are evaluated to show the effectiveness of these two components.
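
The aggregation step described above can be illustrated with a short sketch. The following PyTorch-style snippet is not the authors' implementation; it is a minimal, hedged example of the idea: feature-map fragments with different channel counts are aligned by channel-wise interpolation and combined with weights drawn from a differentiable (Gumbel-softmax) relaxation of the size distribution, so gradients reach both the network weights and the distribution parameters. The function and argument names (e.g., `aggregate_fragments`) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def aggregate_fragments(feature_maps, logits, out_channels):
    """Aggregate K feature-map fragments of different channel sizes.

    feature_maps: list of K tensors, each of shape (N, C_k, H, W) with varying C_k
    logits:       learnable architecture parameters of shape (K,)
    out_channels: target channel size used to align all fragments
    """
    # Differentiable sampling over candidate sizes (Gumbel-softmax relaxation),
    # so the task loss can also update the size distribution `logits`.
    probs = F.gumbel_softmax(logits, tau=1.0, hard=False)

    aligned = []
    for fmap in feature_maps:
        # Channel-wise interpolation: resize the channel dimension so every
        # fragment has `out_channels` channels before aggregation.
        n, c, h, w = fmap.shape
        x = fmap.permute(0, 2, 3, 1).reshape(n * h * w, 1, c)
        x = F.interpolate(x, size=out_channels, mode='linear', align_corners=False)
        aligned.append(x.reshape(n, h, w, out_channels).permute(0, 3, 1, 2))

    # Probability-weighted sum of the aligned fragments.
    return sum(p * f for p, f in zip(probs, aligned))

# Example usage (hypothetical shapes): three candidate widths for one layer.
feats = [torch.randn(2, c, 8, 8) for c in (8, 12, 16)]
logits = torch.zeros(3, requires_grad=True)  # learnable size distribution
out = aggregate_fragments(feats, logits, out_channels=16)  # -> (2, 16, 8, 8)
```

After the search converges, the candidate size with the highest probability in each distribution would be kept as the pruned width/depth, and the resulting network trained with knowledge transfer (e.g., distillation) from the original model, as described in the abstract.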


