Generalization Guarantees for Neural Architecture Search with Train-Validation Split

04/29/2021
by   Samet Oymak, et al.

Neural Architecture Search (NAS) is a popular method for automatically designing optimized architectures for high-performance deep learning. In this approach, it is common to use bilevel optimization, where one optimizes the model weights over the training data (lower-level problem) and various hyperparameters, such as the configuration of the architecture, over the validation data (upper-level problem). This paper explores the statistical aspects of such problems with train-validation splits. In practice, the lower-level problem is often overparameterized and can easily achieve zero loss. Thus, a priori it seems impossible to distinguish the right hyperparameters based on training loss alone, which motivates a better understanding of the role of the train-validation split. To this aim, this work establishes the following results. (1) We show that refined properties of the validation loss, such as risk and hyper-gradients, are indicative of those of the true test loss. This reveals that the upper-level problem helps select the most generalizable model and prevent overfitting with a near-minimal validation sample size. Importantly, this is established for continuous search spaces, which are highly relevant for popular differentiable search schemes. (2) We establish generalization bounds for NAS problems with an emphasis on an activation search problem. When optimized with gradient descent, we show that the train-validation procedure returns the best (model, architecture) pair even if all architectures can perfectly fit the training data to achieve zero error. (3) Finally, we highlight rigorous connections between NAS, multiple kernel learning, and low-rank matrix learning. The latter leads to novel algorithmic insights where the solution of the upper-level problem can be accurately learned via efficient spectral methods to achieve near-minimal risk.
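To make the bilevel setup concrete, here is a minimal sketch, in PyTorch, of a train-validation NAS loop on an activation-search problem: model weights are updated on the training loss (lower level) while architecture parameters, softmax mixing weights over candidate activations, are updated on the validation loss (upper level). This is an illustrative sketch under these assumptions, not the paper's algorithm; names such as ToyNet and make_data are hypothetical.

```python
# Hedged sketch: alternating train-validation updates for activation search.
# Not the paper's implementation; ToyNet, make_data, and all hyperparameters
# are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

CANDIDATE_ACTS = [torch.relu, torch.tanh, torch.sigmoid]

class ToyNet(nn.Module):
    def __init__(self, d_in=10, width=64):
        super().__init__()
        self.fc1 = nn.Linear(d_in, width)
        self.fc2 = nn.Linear(width, 1)
        # Architecture parameters: one logit per candidate activation.
        self.alpha = nn.Parameter(torch.zeros(len(CANDIDATE_ACTS)))

    def forward(self, x):
        h = self.fc1(x)
        # Continuous relaxation: convex combination of candidate activations.
        w = F.softmax(self.alpha, dim=0)
        h = sum(wi * act(h) for wi, act in zip(w, CANDIDATE_ACTS))
        return self.fc2(h).squeeze(-1)

def make_data(n, d=10, seed=0):
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, d, generator=g)
    y = torch.tanh(x @ torch.randn(d, generator=g))  # ground truth favors tanh
    return x, y

x_tr, y_tr = make_data(200, seed=0)    # training split (lower level)
x_val, y_val = make_data(50, seed=1)   # validation split (upper level)

net = ToyNet()
w_params = [p for name, p in net.named_parameters() if name != "alpha"]
opt_w = torch.optim.SGD(w_params, lr=1e-2)       # lower-level optimizer
opt_a = torch.optim.Adam([net.alpha], lr=1e-2)   # upper-level optimizer

for step in range(500):
    # Lower level: fit model weights on the training split.
    opt_w.zero_grad()
    F.mse_loss(net(x_tr), y_tr).backward()
    opt_w.step()
    # Upper level: update architecture parameters on the validation split.
    opt_a.zero_grad()
    F.mse_loss(net(x_val), y_val).backward()
    opt_a.step()

print("activation mixing weights:", F.softmax(net.alpha, dim=0).tolist())
```

Even when the overparameterized lower-level problem drives the training loss to zero for every candidate activation, only the validation loss in the upper level can separate the architectures, which is the phenomenon the paper's guarantees formalize.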


