K-shot NAS: Learnable Weight-Sharing for NAS with K-shot Supernets

06/11/2021
by Xiu Su, et al.

In one-shot weight sharing for NAS, the weights of each operation (at each layer) are assumed to be identical for all architectures (paths) in the supernet. However, this rules out the possibility of adjusting operation weights to cater to different paths, which limits the reliability of the evaluation results. In this paper, instead of relying on a single supernet, we introduce K-shot supernets and treat their weights for each operation as a dictionary. The operation weight for each path is then represented as a convex combination of the dictionary items with a simplex code, which enables a higher-rank (K>1) approximation of the stand-alone weight matrix. A simplex-net is introduced to produce an architecture-customized code for each path. As a result, all paths can adaptively learn how to share weights in the K-shot supernets and acquire the corresponding weights for better evaluation. The K-shot supernets and the simplex-net can be trained iteratively, and we further extend the search to the channel dimension. Extensive experiments on benchmark datasets validate that K-shot NAS significantly improves the evaluation accuracy of paths and thus yields impressive performance improvements.
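
To make the weight-sharing scheme concrete, the sketch below illustrates the core idea in PyTorch: each operation keeps K weight copies as a dictionary, and a small simplex-net maps an architecture encoding to a softmax (simplex) code that mixes them into a path-specific weight. This is a minimal illustration under our own assumptions; the class names (KShotConv, SimplexNet), the MLP encoder, and all shapes are hypothetical and do not reproduce the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KShotConv(nn.Module):
    """One candidate operation whose weight is a convex combination of
    K supernet copies (the "dictionary" of K weight tensors).
    Hypothetical sketch; names and shapes are illustrative."""

    def __init__(self, k, in_ch, out_ch, kernel_size):
        super().__init__()
        # K copies of this operation's weight: the dictionary items.
        self.weights = nn.Parameter(
            torch.randn(k, out_ch, in_ch, kernel_size, kernel_size) * 0.01
        )
        self.padding = kernel_size // 2

    def forward(self, x, code):
        # `code` is a simplex code: K nonnegative entries summing to 1.
        # The path-specific weight is a convex combination of the K items.
        w = torch.einsum("k,koihw->oihw", code, self.weights)
        return F.conv2d(x, w, padding=self.padding)


class SimplexNet(nn.Module):
    """Maps an architecture encoding to a simplex code via softmax,
    so every sampled path gets its own mixing coefficients."""

    def __init__(self, arch_dim, k, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(arch_dim, hidden), nn.ReLU(), nn.Linear(hidden, k)
        )

    def forward(self, arch_encoding):
        # Softmax guarantees the output lies on the K-simplex.
        return F.softmax(self.mlp(arch_encoding), dim=-1)


# Hypothetical usage: mix K=4 dictionary items for one sampled path.
simplex_net = SimplexNet(arch_dim=8, k=4)
op = KShotConv(k=4, in_ch=16, out_ch=16, kernel_size=3)

arch_encoding = torch.rand(8)       # encoding of the sampled path
code = simplex_net(arch_encoding)   # lies on the 4-simplex, sums to 1
out = op(torch.randn(2, 16, 32, 32), code)
print(out.shape)                    # torch.Size([2, 16, 32, 32])
```

Because the code lies on the simplex, the mixed weight is a convex combination of the K dictionary items, so the effective weights across paths span up to rank K rather than 1; training would then alternate between updating the supernet weights and the simplex-net, in the spirit of the iterative scheme described above.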

Related research

06/13/2022: Improve Ranking Correlation of Super-net through Training Scheme from One-shot NAS to Few-shot NAS
The algorithms of one-shot neural architecture search (NAS) have been wid...

10/06/2019: Improving One-shot NAS by Suppressing the Posterior Fading
There is a growing interest in automated neural architecture search (NAS...

01/28/2020: NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search
One-shot neural architecture search (NAS) has played a crucial role in m...

01/24/2023: RD-NAS: Enhancing One-shot Supernet Ranking Ability via Ranking Distillation from Zero-cost Proxies
Neural architecture search (NAS) has made tremendous progress in the aut...

03/25/2020: GreedyNAS: Towards Fast One-Shot NAS with Greedy Supernet
Training a supernet matters for one-shot neural architecture search (NAS...

11/24/2021: GreedyNASv2: Greedier Search with a Greedy Path Filter
Training a good supernet in one-shot NAS methods is difficult since the ...

02/10/2021: Locally Free Weight Sharing for Network Width Search
Searching for network width is an effective way to slim deep neural netw...