AlphaNet: Improved Training of Supernet with Alpha-Divergence

02/16/2021
by Dilin Wang, et al.

Weight-sharing neural architecture search (NAS) is an effective technique for automating efficient neural architecture design. Weight-sharing NAS builds a supernet that assembles all the candidate architectures as its sub-networks and jointly trains the supernet with the sub-networks. The success of weight-sharing NAS relies heavily on distilling the knowledge of the supernet to the sub-networks. However, we find that the widely used distillation divergence, i.e., KL divergence, may lead to student sub-networks that over-estimate or under-estimate the uncertainty of the teacher supernet, which in turn degrades the sub-networks' performance. In this work, we propose to improve supernet training with the more general alpha-divergence. By adaptively selecting the alpha-divergence, we simultaneously prevent both over-estimation and under-estimation of the teacher model's uncertainty. We apply the proposed alpha-divergence based supernet training to both slimmable neural networks and weight-sharing NAS, and demonstrate significant improvements. Specifically, our discovered model family, AlphaNet, outperforms prior-art models, including BigNAS, Once-for-All networks, FBNetV3, and AttentiveNAS, across a wide range of FLOPs regimes. We achieve an ImageNet top-1 accuracy of 80.0%.
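To make the idea concrete, the sketch below shows one way an alpha-divergence distillation loss between teacher (supernet) and student (sub-network) outputs could look in PyTorch. It is a minimal illustration written for this summary, not the authors' released code: the function names, the choice of alpha values, and the omission of the clipping the paper uses for stability are all assumptions made for brevity.

```python
import torch
import torch.nn.functional as F


def alpha_divergence(p, q, alpha, eps=1e-8):
    """Amari alpha-divergence D_alpha(p || q) between categorical
    distributions, averaged over the batch:

        D_alpha(p || q) = (sum_i p_i^alpha * q_i^(1 - alpha) - 1) / (alpha * (alpha - 1))

    The limits alpha -> 1 and alpha -> 0 recover KL(p || q) and KL(q || p);
    those two values are excluded so the closed form stays well defined.
    """
    ratio = (p.clamp_min(eps) ** alpha) * (q.clamp_min(eps) ** (1.0 - alpha))
    return ((ratio.sum(dim=-1) - 1.0) / (alpha * (alpha - 1.0))).mean()


def adaptive_alpha_kd_loss(teacher_logits, student_logits,
                           alpha_neg=-1.0, alpha_pos=2.0):
    """Illustrative adaptive distillation loss: evaluate the divergence at one
    negative and one positive alpha and back-propagate the larger of the two,
    so neither direction of teacher/student mismatch is ignored. The alpha
    values are placeholders, and the paper's clipping of the importance
    ratios (needed for numerical stability) is omitted here.
    """
    p = F.softmax(teacher_logits.detach(), dim=-1)  # teacher is not updated
    q = F.softmax(student_logits, dim=-1)
    return torch.maximum(alpha_divergence(p, q, alpha_neg),
                         alpha_divergence(p, q, alpha_pos))
```

In a weight-sharing setup, this loss would replace the usual KL term when training sampled sub-networks against the supernet's predictions; taking the maximum of the two alpha terms is what lets a single objective penalize both over- and under-estimation of the teacher's uncertainty.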


Related research

10/16/2020 · How Does Supernet Help in Neural Architecture Search?
With the success of Neural Architecture Search (NAS), weight sharing, as...

11/18/2020 · AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling
Neural architecture search (NAS) has shown great promise designing state...

03/18/2023 · Weight-sharing Supernet for Searching Specialized Acoustic Event Classification Networks Across Device Constraints
Acoustic Event Classification (AEC) has been widely used in devices such...

01/29/2022 · AutoDistil: Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models
Knowledge distillation (KD) methods compress large models into smaller s...

03/29/2022 · Generalizing Few-Shot NAS with Gradient Matching
Efficient performance estimation of architectures drawn from large searc...

06/08/2023 · Mixture-of-Supernets: Improving Weight-Sharing Supernet Training with Architecture-Routed Mixture-of-Experts
Weight-sharing supernet has become a vital component for performance est...

04/17/2020 · Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks
Neural architecture search has attracted wide attentions in both academi...
