Generalizing Few-Shot NAS with Gradient Matching

03/29/2022
by Shoukang Hu et al.

Efficient performance estimation of architectures drawn from large search spaces is essential to Neural Architecture Search (NAS). One-Shot methods tackle this challenge by training a single supernet to approximate the performance of every architecture in the search space via weight-sharing, thereby drastically reducing the search cost. However, due to the coupled optimization between child architectures caused by weight-sharing, the One-Shot supernet's performance estimates can be inaccurate, leading to degraded search outcomes. To address this issue, Few-Shot NAS reduces the level of weight-sharing by splitting the One-Shot supernet into multiple separate sub-supernets via exhaustive edge-wise (layer-wise) partitioning. Since not every partition of the supernet is equally important, a more effective splitting criterion is needed. In this work, we propose a gradient matching score (GM) that leverages gradient information at the shared weight to make informed splitting decisions. Intuitively, gradients from different child models indicate whether those children agree on how to update the shared modules, and hence whether they should share the same weight. Compared with exhaustive partitioning, the proposed criterion significantly reduces the branching factor per edge. This allows us to split more edges (layers) within a given budget, which substantially improves performance because NAS search spaces usually contain dozens of edges (layers). Extensive empirical evaluations of the proposed method on a wide range of search spaces (NASBench-201, DARTS, MobileNet Space), datasets (CIFAR-10, CIFAR-100, ImageNet), and search algorithms (DARTS, SNAS, RSPS, ProxylessNAS, OFA) show that it significantly outperforms its Few-Shot counterparts and surpasses previous comparable methods in the accuracy of the derived architectures.
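
As a rough illustration of the splitting criterion described above, the sketch below compares the gradients that different child models send to a shared weight and uses their average pairwise cosine similarity as a matching score; edges whose child models disagree the most would be split first. This is a minimal sketch of the idea only: the function and variable names, the pairwise aggregation, and the splitting heuristic in the comments are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def gradient_matching_score(shared_weight, child_losses):
    """Average pairwise cosine similarity between the gradients that
    different child models produce at a shared weight.
    A low score suggests the children disagree on how to update the
    weight and are therefore candidates for splitting.
    (Illustrative sketch; names and aggregation are assumptions.)"""
    grads = []
    for loss in child_losses:
        # Gradient of each child's loss w.r.t. the shared weight.
        g, = torch.autograd.grad(loss, shared_weight, retain_graph=True)
        grads.append(g.flatten())
    # Average cosine similarity over all pairs of child gradients.
    sims = []
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            sims.append(F.cosine_similarity(grads[i], grads[j], dim=0))
    return torch.stack(sims).mean()

# Hypothetical usage: score every edge of the supernet and split the edge
# whose child models agree least on the shared update direction.
# scores = {e: gradient_matching_score(w_shared[e], child_losses[e]) for e in edges}
# edge_to_split = min(scores, key=scores.get)
```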

Related research

06/11/2020 – Few-shot Neural Architecture Search
To improve the search efficiency for Neural Architecture Search (NAS), O...

03/29/2020 – Disturbance-immune Weight Sharing for Neural Architecture Search
Neural architecture search (NAS) has gained increasing attention in the ...

11/23/2022 – NAS-LID: Efficient Neural Architecture Search with Local Intrinsic Dimension
One-shot neural architecture search (NAS) substantially improves the sea...

02/11/2020 – To Share or Not To Share: A Comprehensive Appraisal of Weight-Sharing
Weight-sharing (WS) has recently emerged as a paradigm to accelerate the...

02/16/2021 – AlphaNet: Improved Training of Supernet with Alpha-Divergence
Weight-sharing neural architecture search (NAS) is an effective techniqu...

10/10/2021 – ZARTS: On Zero-order Optimization for Neural Architecture Search
Differentiable architecture search (DARTS) has been a popular one-shot p...

08/29/2021 – Analyzing and Mitigating Interference in Neural Architecture Search
Weight sharing has become the de facto approach to reduce the training c...
