
Learning Transferable Architectures for Scalable Image Recognition

by Barret Zoph, et al.

Developing neural network image classification models often requires significant architecture engineering. In this paper, we attempt to automate this engineering process by learning the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. Our key contribution is the design of a new search space which enables transferability. In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters. Although the cell is not searched for directly on ImageNet, an architecture constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS -- a reduction of 28% in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of our models exceed those of the state-of-the-art human-designed models. For instance, a smaller network constructed from the best cell also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. On CIFAR-10, an architecture constructed from the best cell achieves a 2.4% error rate, which is state-of-the-art. Finally, the image features learned from image classification can also be transferred to other computer vision problems. On the task of object detection, the learned features used with the Faster-RCNN framework surpass state-of-the-art by 4.0%, achieving 43.1% mAP on the COCO dataset.
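The transfer recipe in the abstract — design one cell on a small dataset, then scale to a larger dataset by stacking more copies of that same cell, each copy with its own parameters — can be sketched in a few lines. The toy cell below (a random channel-mixing matrix plus ReLU, not the actual searched NASNet cell) and all function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_cell(channels):
    # Each stacked copy of the cell gets its OWN parameters (here a toy
    # 1x1-convolution-like channel-mixing matrix followed by ReLU).
    w = rng.standard_normal((channels, channels)) * 0.1
    def cell(x):
        return np.maximum(x @ w, 0.0)
    return cell

def build_network(num_cells, channels):
    # The cell *design* is fixed once; scaling to a larger dataset just
    # means stacking more independently parameterized copies of it.
    return [make_cell(channels) for _ in range(num_cells)]

def forward(cells, x):
    for cell in cells:
        x = cell(x)
    return x

small = build_network(num_cells=3, channels=8)   # e.g. a CIFAR-10-sized stack
large = build_network(num_cells=12, channels=8)  # deeper stack for a larger dataset
x = rng.standard_normal((4, 8))
print(forward(small, x).shape, forward(large, x).shape)  # (4, 8) (4, 8)
```

The point of the sketch is that `build_network` never changes when moving between datasets; only `num_cells` (and, in the real NASNet, the channel width) is re-chosen, which is what makes the searched cell transferable.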




Neural Architecture Search with Reinforcement Learning

Neural networks are powerful and flexible models that work well for many...

Deep CNNs for Peripheral Blood Cell Classification

The application of machine learning techniques to the medical domain is ...

BlockQNN: Efficient Block-wise Neural Network Architecture Generation

Convolutional neural networks have gained a remarkable success in comput...

Deep Learning of Cell Classification using Microscope Images of Intracellular Microtubule Networks

Microtubule networks (MTs) are a component of a cell that may indicate t...

Maximum margin learning of t-SPNs for cell classification with filtered input

An algorithm based on a deep probabilistic architecture referred to as a...

Efficient Image Dataset Classification Difficulty Estimation for Predicting Deep-Learning Accuracy

In the deep-learning community new algorithms are published at an incred...

BNAS v2: Learning Architectures for Binary Networks with Empirical Improvements

Backbone architectures of most binary networks are well-known floating p...

Code Repositories


Keras implementation of NASNet-A
