Progressive Neural Architecture Search

by Chenxi Liu, et al.

We propose a method for learning CNN structures that is more efficient than previous approaches: instead of using reinforcement learning (RL) or genetic algorithms (GA), we use a sequential model-based optimization (SMBO) strategy, in which we search for architectures in order of increasing complexity, while simultaneously learning a surrogate function to guide the search, similar to A* search. On the CIFAR-10 dataset, our method finds a CNN structure with the same classification accuracy (3.41% error rate) as the RL method of Zoph et al. (2017), but 2 times faster (in terms of number of models evaluated). It also outperforms the GA method of Liu et al. (2017), which finds a model with worse performance (3.63% error rate) and takes longer to run. Finally, we show that the model we learned on CIFAR also works well at the task of ImageNet classification. In particular, we match the state-of-the-art performance of 82.9% top-1 accuracy.
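The progressive SMBO loop described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the architecture encoding, the `true_score` stand-in for training a candidate model, and the per-operator averaging surrogate are all hypothetical simplifications. What it does show is the core idea — expand architectures one block at a time, rank the expansions with a cheap learned surrogate, train only the top-K survivors, and refit the surrogate on their measured scores.

```python
import random

# Toy "architecture": a tuple of block choices. The real paper searches
# over cell structures; this operator list is an illustrative stand-in.
OPS = ["conv3x3", "conv5x5", "maxpool", "identity"]

def expand(arch):
    """All one-step expansions of an architecture (append one block)."""
    return [arch + (op,) for op in OPS]

def true_score(arch):
    # Stand-in for actually training and validating the candidate
    # (hypothetical deterministic-per-architecture scoring).
    random.seed(hash(arch) % (2**32))
    return sum(OPS.index(op) for op in arch) / (3 * len(arch)) + 0.1 * random.random()

class Surrogate:
    """Tiny surrogate model: predicts score as the mean observed score
    of each operator appearing in the architecture."""
    def __init__(self):
        self.op_sum = {op: 0.0 for op in OPS}
        self.op_cnt = {op: 0 for op in OPS}

    def fit(self, archs, scores):
        for arch, s in zip(archs, scores):
            for op in arch:
                self.op_sum[op] += s
                self.op_cnt[op] += 1

    def predict(self, arch):
        vals = [self.op_sum[op] / self.op_cnt[op] for op in arch if self.op_cnt[op]]
        return sum(vals) / len(vals) if vals else 0.0

def progressive_search(max_blocks=3, beam=4):
    # Complexity level 1: few enough candidates to train them all.
    beam_archs = [(op,) for op in OPS]
    scores = [true_score(a) for a in beam_archs]
    surrogate = Surrogate()
    surrogate.fit(beam_archs, scores)
    for _ in range(max_blocks - 1):
        # Expand every beam member, rank cheaply with the surrogate...
        candidates = [c for a in beam_archs for c in expand(a)]
        candidates.sort(key=surrogate.predict, reverse=True)
        beam_archs = candidates[:beam]                 # ...keep only the top-K,
        scores = [true_score(a) for a in beam_archs]   # train just the survivors,
        surrogate.fit(beam_archs, scores)              # and refine the surrogate.
    best_score, best_arch = max(zip(scores, beam_archs))
    return best_arch, best_score

arch, score = progressive_search()
print(arch, round(score, 3))
```

The efficiency gain comes from the surrogate: at each complexity level only `beam` candidates are "trained" (expensive), while the full expansion is scored by prediction alone (cheap), which is what makes the progressive search faster than RL or GA approaches that evaluate many more models.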


