Width Transfer: On the (In)variance of Width Optimization

04/24/2021
by Ting-Wu Chin, et al.

Optimizing the channel counts for the different layers of a CNN has shown great promise in improving its test-time efficiency. However, these methods often introduce a large computational overhead (e.g., an additional 2x the FLOPs of standard training), so minimizing this overhead could significantly speed up training. In this work, we propose width transfer, a technique that harnesses the assumption that the optimized widths (or channel counts) are regular across network sizes and depths, so that widths optimized on a smaller, cheaper proxy can be extrapolated to a larger target network. We show that width transfer works well across various width optimization algorithms and networks. Specifically, we achieve up to a 320x reduction in width optimization overhead without compromising top-1 accuracy on ImageNet, making the additional cost of width optimization negligible relative to the initial training. Our findings not only suggest an efficient way to conduct width optimization but also highlight that the widths that lead to better accuracy are invariant to various aspects of the network architecture and training data.
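
To make the regularity assumption concrete, below is a minimal sketch in plain NumPy of how such a transfer could work. The helper name `transfer_widths` and all layer counts are hypothetical illustrations, not the authors' code: it assumes the per-layer width multipliers (optimized width divided by the unoptimized base width) are roughly invariant across sizes and depths, stretches that multiplier pattern over the target network's relative depth, and rescales by the target's base widths.

```python
import numpy as np

def transfer_widths(source_widths, source_base, target_base):
    """Sketch of width transfer under the paper's regularity assumption.

    source_widths: optimized channel counts of the small proxy network.
    source_base:   unoptimized channel counts of the proxy network.
    target_base:   unoptimized channel counts of the (deeper/wider) target.
    Returns transferred channel counts for the target network.
    """
    source_widths = np.asarray(source_widths, dtype=float)
    source_base = np.asarray(source_base, dtype=float)
    target_base = np.asarray(target_base, dtype=float)

    # Per-layer multipliers found by width optimization on the proxy.
    multipliers = source_widths / source_base

    # Stretch the multiplier pattern across relative depth so it covers
    # the (possibly deeper) target network.
    src_depth = np.linspace(0.0, 1.0, num=len(source_base))
    tgt_depth = np.linspace(0.0, 1.0, num=len(target_base))
    stretched = np.interp(tgt_depth, src_depth, multipliers)

    # Apply the transferred multipliers to the target's base widths.
    return np.maximum(1, np.round(stretched * target_base)).astype(int)

# Illustrative example: widths optimized on a 4-layer proxy, transferred
# to a 6-layer target with larger base widths (all numbers made up).
optimized = [24, 40, 96, 160]
proxy_base = [32, 64, 128, 256]
target_base = [64, 128, 256, 256, 512, 512]
print(transfer_widths(optimized, proxy_base, target_base))
```

Because only the cheap proxy is width-optimized, the expensive search never runs on the full network or dataset, which is where the reported reduction in overhead would come from.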

