Pi-NAS: Improving Neural Architecture Search by Reducing Supernet Training Consistency Shift

by Jiefeng Peng, et al.

Recently proposed neural architecture search (NAS) methods jointly train billions of architectures in a supernet and estimate their potential accuracy using the network weights detached from the supernet. However, the ranking correlation between the architectures' predicted accuracy and their actual capability is often poor, which creates a dilemma for existing NAS methods. We attribute this ranking correlation problem to the supernet training consistency shift, which comprises feature shift and parameter shift. Feature shift is identified as the dynamic input distribution of a hidden layer caused by random path sampling; this dynamic input distribution affects the loss descent and ultimately the architecture ranking. Parameter shift is identified as contradictory parameter updates, applied in different training steps, to a layer shared by different paths; such rapidly changing parameters cannot preserve architecture ranking. We address both shifts simultaneously with a nontrivial supernet-Pi model, called Pi-NAS. Specifically, our supernet-Pi model employs cross-path learning to reduce the feature consistency shift between different paths. Meanwhile, we adopt a novel nontrivial mean teacher containing negative samples to overcome parameter shift and model collapse. Furthermore, our Pi-NAS runs in an unsupervised manner and can therefore search for more transferable architectures. Extensive experiments on ImageNet and a wide range of downstream tasks (e.g., COCO 2017, ADE20K, and Cityscapes) demonstrate the effectiveness and universality of our Pi-NAS compared to supervised NAS. Code: https://github.com/Ernie1/Pi-NAS.
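The mean-teacher idea mentioned above can be sketched concretely: the teacher's weights track an exponential moving average (EMA) of the student supernet's weights, which damps the rapid, contradictory updates that different sampled paths apply to shared layers. The sketch below is illustrative only and assumes a simple per-parameter EMA; the function name `ema_update` and the momentum value are not from the paper.

```python
# Minimal sketch of an EMA mean-teacher update, as assumed for Pi-NAS-style
# training. The teacher parameter is m * teacher + (1 - m) * student, so the
# teacher changes slowly even when the student's shared weights fluctuate
# between training steps. Names and values here are illustrative.

def ema_update(teacher_params, student_params, momentum=0.99):
    """Return updated teacher parameters as an EMA of the student's."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]

# Example: a teacher smoothly approaching a (fixed) student over a few steps.
teacher = [0.0, 0.0]
student = [1.0, 2.0]
for _ in range(3):
    teacher = ema_update(teacher, student, momentum=0.5)
print(teacher)  # approaches the student's values: [0.875, 1.75]
```

In the paper's setting the teacher additionally serves as the target branch for cross-path learning with negative samples (a contrastive-style objective), which is what prevents the two branches from collapsing to a trivial solution.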




