PareCO: Pareto-aware Channel Optimization for Slimmable Neural Networks

07/23/2020
by   Ting-Wu Chin, et al.

Slimmable neural networks, which provide a flexible trade-off front between prediction error and computational cost (e.g., the number of floating-point operations, or FLOPs) at the same storage cost as a single model, have recently been proposed for resource-constrained settings such as mobile devices. However, current slimmable networks use a single width-multiplier for all layers to arrive at sub-networks with different performance profiles, which neglects the fact that different layers affect the network's prediction accuracy differently and have different FLOP requirements. Hence, a principled approach for choosing width-multipliers across layers could improve the performance of slimmable networks. To allow heterogeneous width-multipliers across layers, we formulate the optimization of slimmable networks through a multi-objective optimization lens, which leads to a novel algorithm for jointly optimizing the shared weights and the width-multipliers of the sub-networks. We perform extensive empirical analysis with 14 network and dataset combinations and find that less over-parameterized networks benefit more from joint channel and weight optimization than extremely over-parameterized networks. Quantitatively, improvements of up to 1.7% and 1% in top-1 accuracy on ImageNet can be attained for MobileNetV2 and MobileNetV3, respectively. Our results highlight the potential of optimizing channel counts for different layers jointly with the weights and demonstrate the power of such techniques for slimmable networks.
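To make the idea of heterogeneous width-multipliers concrete, the sketch below shows a minimal PyTorch toy example in which each slimmable layer receives its own width-multiplier sampled at every training step while the weights are shared across sub-networks. This is only an illustration of the general setup described in the abstract, not the PareCO algorithm itself (in particular, the random width sampling here stands in for whatever Pareto-aware selection the paper proposes), and all names (SlimmableLinear, SlimmableMLP, sample ranges) are invented for the example.

# Illustrative sketch only: per-layer (heterogeneous) width-multipliers in a tiny
# slimmable MLP with shared weights. Not the authors' PareCO implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableLinear(nn.Linear):
    """Linear layer whose number of active output channels is set by a width-multiplier."""
    def forward(self, x, width_mult=1.0):
        out_features = max(1, int(self.out_features * width_mult))
        # Slice the shared weight to the active output channels and to the
        # (possibly reduced) number of input channels produced by the previous layer.
        weight = self.weight[:out_features, :x.shape[1]]
        bias = self.bias[:out_features] if self.bias is not None else None
        return F.linear(x, weight, bias)

class SlimmableMLP(nn.Module):
    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.fc1 = SlimmableLinear(in_dim, hidden)
        self.fc2 = SlimmableLinear(hidden, hidden)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x, width_mults=(1.0, 1.0)):
        # A different width-multiplier per layer (heterogeneous widths).
        h = F.relu(self.fc1(x, width_mults[0]))
        h = F.relu(self.fc2(h, width_mults[1]))
        # Zero-pad back to full width so the fixed-size classifier head applies.
        h = F.pad(h, (0, self.head.in_features - h.shape[1]))
        return self.head(h)

model = SlimmableMLP()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(100):
    x = torch.randn(32, 784)            # stand-in for real training data
    y = torch.randint(0, 10, (32,))
    # Sample a sub-network: one width-multiplier per slimmable layer.
    width_mults = [float(torch.empty(1).uniform_(0.25, 1.0)) for _ in range(2)]
    loss = F.cross_entropy(model(x, width_mults), y)
    # Also train the full-width network so the shared weights stay accurate at 1.0x.
    loss = loss + F.cross_entropy(model(x, (1.0, 1.0)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

The design choice illustrated here is that every sub-network reuses slices of the same weight tensors, so supporting many width configurations adds no storage beyond the full model; the multi-objective aspect of the paper concerns how the per-layer multipliers are chosen, which the random sampling above does not capture.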

