GreedyNASv2: Greedier Search with a Greedy Path Filter

11/24/2021
by Tao Huang, et al.

Training a good supernet in one-shot NAS methods is difficult because the search space is usually extremely large (e.g., 13^21). To enhance the supernet's evaluation ability, one greedy strategy is to sample good paths and let the supernet lean towards them, easing its evaluation burden. In practice, however, the search can still be quite inefficient, since the identification of good paths is not accurate enough and the sampled paths still scatter over the whole search space. In this paper, we leverage an explicit path filter to capture the characteristics of paths and directly filter out the weak ones, so that the search can be implemented on the shrunk space more greedily and efficiently. Concretely, since good paths are far fewer than weak ones in the space, we argue that the label of "weak path" is more confident and reliable than that of "good path" in multi-path sampling. We therefore cast the training of the path filter as a positive and unlabeled (PU) learning problem, and also learn a path embedding as a better path/operation representation to enhance the identification capacity of the filter. With this embedding, we can further shrink the search space by aggregating operations with similar embeddings, making the search more efficient and accurate. Extensive experiments validate the effectiveness of the proposed GreedyNASv2. For example, our GreedyNASv2-L achieves 81.1% Top-1 accuracy on the ImageNet dataset, significantly outperforming the ResNet-50 strong baseline.
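To make the idea concrete, below is a minimal sketch (not the authors' code) of a path filter with learnable operation embeddings trained in a positive-unlabeled fashion, where paths flagged as "weak" by multi-path sampling serve as the labeled class and everything else is unlabeled. The network sizes, the class-prior value, and the use of the non-negative PU risk estimator (Kiryo et al., 2017) are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch only: an operation-embedding path filter trained with a PU-style loss.
# All dimensions, names, and the prior below are illustrative assumptions.
import torch
import torch.nn as nn

NUM_LAYERS, NUM_OPS, EMB_DIM = 21, 13, 16  # e.g., a 13^21 search space


class PathFilter(nn.Module):
    """Scores a path (one op index per layer) as weak (1) vs. not-weak (0)."""

    def __init__(self):
        super().__init__()
        # One learnable embedding per (layer, operation) pair.
        self.op_emb = nn.Embedding(NUM_LAYERS * NUM_OPS, EMB_DIM)
        self.head = nn.Sequential(
            nn.Linear(NUM_LAYERS * EMB_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, paths):
        # paths: (B, NUM_LAYERS) long tensor of op indices per layer.
        offsets = torch.arange(NUM_LAYERS, device=paths.device) * NUM_OPS
        emb = self.op_emb(paths + offsets)            # (B, L, D)
        return self.head(emb.flatten(1)).squeeze(1)   # (B,) logits


def nnpu_loss(logits_pos, logits_unl, prior=0.8):
    """Non-negative PU risk with 'weak' paths as the labeled positives.

    `prior` is the assumed fraction of weak paths in the whole space."""
    bce = nn.functional.binary_cross_entropy_with_logits
    r_pos = bce(logits_pos, torch.ones_like(logits_pos))       # labeled as weak
    r_neg_unl = bce(logits_unl, torch.zeros_like(logits_unl))  # unlabeled as not-weak
    r_neg_pos = bce(logits_pos, torch.zeros_like(logits_pos))  # correction term
    neg_risk = r_neg_unl - prior * r_neg_pos
    return prior * r_pos + torch.clamp(neg_risk, min=0.0)


if __name__ == "__main__":
    f = PathFilter()
    opt = torch.optim.Adam(f.parameters(), lr=1e-3)
    # Dummy batches: paths flagged weak by multi-path sampling vs. unlabeled paths.
    weak = torch.randint(0, NUM_OPS, (32, NUM_LAYERS))
    unlabeled = torch.randint(0, NUM_OPS, (32, NUM_LAYERS))
    loss = nnpu_loss(f(weak), f(unlabeled))
    loss.backward()
    opt.step()
    print(float(loss))
```

Under this sketch, the learned rows of `op_emb` could also be clustered so that operations with similar embeddings are merged, mirroring the search-space shrinking described in the abstract.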


Related research

GreedyNAS: Towards Fast One-Shot NAS with Greedy Supernet (03/25/2020)
Training a supernet matters for one-shot neural architecture search (NAS...

MixPath: A Unified Approach for One-shot Neural Architecture Search (01/16/2020)
The expressiveness of search space is a key concern in neural architectu...

DetOFA: Efficient Training of Once-for-All Networks for Object Detection by Using Pre-trained Supernet and Path Filter (03/23/2023)
We address the challenge of training a large supernet for the object det...

Prioritized Architecture Sampling with Monto-Carlo Tree Search (03/22/2021)
One-shot neural architecture search (NAS) methods significantly reduce t...

K-shot NAS: Learnable Weight-Sharing for NAS with K-shot Supernets (06/11/2021)
In one-shot weight sharing for NAS, the weights of each operation (at ea...

Efficient Differentiable Neural Architecture Search with Meta Kernels (12/10/2019)
The searching procedure of neural architecture search (NAS) is notorious...
