Regularized Evolution for Image Classifier Architecture Search

by Esteban Real et al.

The effort devoted to hand-crafting image classifiers has motivated the use of architecture search to discover them automatically. Reinforcement learning and evolution have both shown promise for this purpose. This study introduces a regularized version of a popular asynchronous evolutionary algorithm. We rigorously compare it to the non-regularized form and to a highly successful reinforcement learning baseline. Using the same hardware, compute effort and neural network training code, we conduct repeated experiments side-by-side, exploring different datasets, search spaces and scales. We show that regularized evolution consistently produces models with similar or higher accuracy across a variety of contexts, without need for re-tuning parameters. In addition, regularized evolution exhibits considerably better performance than reinforcement learning at early search stages, suggesting it may be the better choice when fewer compute resources are available. This constitutes the first controlled comparison of the two search algorithms in this context. Finally, we present new architectures discovered with regularized evolution that we nickname AmoebaNets. These models set a new state of the art for CIFAR-10 (mean test error = 2.13%) and reach the current state of the art for ImageNet (top-5 accuracy = 96.2%).






Introduction

Until recently, most state-of-the-art image classifier architectures have been manually designed by human experts [27, 44, 20, 24, 23]. To speed up the process, researchers have looked into automated methods [2, 52, 31, 35, 46, 42, 28, 34]. These methods are now collectively known as architecture-search algorithms. A traditional approach is neuro-evolution of topologies [32, 1, 41]. Improved hardware now allows scaling up evolution to produce high-quality image classifiers [35, 46, 29]. Yet, the architectures produced by evolutionary algorithms / genetic programming have not reached the accuracy of those directly designed by human experts. Here we evolve image classifiers that surpass hand-designs.

To do this, we make two additions to the standard evolutionary process. First, we propose a change to the well-established tournament selection evolutionary algorithm [19] that we refer to as aging evolution or regularized evolution. Whereas in tournament selection, the best genotypes (architectures) are kept, we propose to associate each genotype with an age, and bias the tournament selection to choose the younger genotypes. We will show that this change turns out to make a difference. The connection to regularization will be clarified in the Discussion section.

Second, we implement the simplest set of mutations that would allow evolving in the NASNet search space [53]. This search space associates convolutional neural network architectures with small directed graphs in which vertices represent hidden states and labeled edges represent common network operations (such as convolutions or pooling layers). Our mutation rules only alter architectures by randomly reconnecting the origin of edges to different vertices and by randomly relabeling the edges, covering the full search space.

Searching in the NASNet space allows a controlled comparison between evolution and the original method for which it was designed, reinforcement learning (RL). Thus, this paper presents the first comparative case study of architecture-search algorithms for the image classification task. Within this case study, we will demonstrate that evolution can attain similar results with a simpler method, as will be shown in the Discussion section. In particular, we will highlight that in all our experiments evolution searched faster than RL and random search, especially at the earlier stages, which is important when experiments cannot be run for long times due to compute resource limitations.

Despite its simplicity, our approach works well in our benchmark against RL. It also evolved a high-quality model, which we name AmoebaNet-A. This model is competitive with the best image classifiers obtained by any other algorithm today at similar sizes (82.8% top-1 / 96.1% top-5 ImageNet accuracy). When scaled up, it sets a new state-of-the-art accuracy (83.9% top-1 / 96.6% top-5 ImageNet accuracy).

Related Work

Review papers provide informative surveys of earlier [48, 18] and more recent [15] literature on image classifier architecture search, including successful RL studies [52, 2, 53, 28, 51, 6] and evolutionary studies like those mentioned in the Introduction. Other methods have also been applied: cascade-correlation [16], boosting [10], hill-climbing [14], MCTS [33], SMBO [30, 28], random search [4], and grid search [49]. Some methods even forewent the idea of independent architectures [37]. There is much architecture-search work beyond image classification too, but that is outside our scope.

Even though some methods stand out due to their efficiency [42, 34], many approaches use large amounts of resources. Several recent papers reduced the compute cost through progressive-complexity search stages [28], hypernets [5], accuracy prediction [3, 25, 13], warm-starting and ensembling [17], parallelization, reward shaping and early stopping [51] or Net2Net transformations [6]. Most of these methods could in principle be applied to evolution too, but this is beyond the scope of this paper.

A popular approach to evolution has been through generational algorithms, e.g. NEAT [41]. All models in the population must finish training before the next generation is computed. Generational evolution becomes inefficient in a distributed environment where a different machine is used to train each model: machines that train faster models finish earlier and must wait idle until all machines are ready. Real-time algorithms address this issue, e.g. rtNEAT [40] and tournament selection [19]. Unlike the generational algorithms, however, these discard models according to their performance or do not discard them at all, resulting in models that remain alive in the population for a long time—even for the whole experiment. We will present evidence that the finite lifetimes of aging evolution can give better results than direct tournament selection, while retaining its efficiency.

An existing paper [22] uses a concept of age, but in a very different way than we do. In that paper, age is assigned to genes to divide a constant-size population into groups called age-layers. Each layer contains individuals with genes of similar ages. Only after the genes have survived a certain age-gap can they make it to the next layer. The goal is to restrict competition (the newly introduced genes cannot be immediately out-competed by highly-selected older ones). Their algorithm requires the introduction of two additional meta-parameters (the size of the age-gap and the number of age-layers). In contrast, in our algorithm, an age is assigned to the individuals (not the genes) and is only used to track which is the oldest individual in the population. This permits removing the oldest individual at each cycle (keeping a constant population size). Our approach, therefore, is in line with our goal of keeping the method as simple as possible. In particular, our method remains similar to nature (where the young are less likely to die than the very old) and it requires no additional meta-parameters.


Methods

This section contains a readable description of the methods. The Methods Details section gives additional information.

Search Space

All experiments use the NASNet search space [53]. This is a space of image classifiers, all of which have the fixed outer structure indicated in Figure 1 (left): a feed-forward stack of Inception-like modules called cells. Each cell receives a direct input from the previous cell (as depicted) and a skip input from the cell before it (Figure 1, middle). The cells in the stack are of two types: the normal cell and the reduction cell. All normal cells are constrained to have the same architecture, as are reduction cells, but the architecture of the normal cells is independent of that of the reduction cells. Other than this, the only difference between them is that every application of the reduction cell is followed by a stride of 2 that reduces the image size, whereas normal cells preserve the image size. As can be seen in the figure, normal cells are arranged in three stacks of N cells. The goal of the architecture-search process is to discover the architectures of the normal and reduction cells.
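As a concrete (and hypothetical) sketch, the fixed outer structure amounts to a short layer list; cell internals, the stem, and the classification head are elided here:

```python
def outer_structure(N):
    """Sketch of the fixed NASNet-style outer structure for CIFAR-10.

    Normal cells preserve the image size; each reduction cell is applied
    with stride 2, halving the spatial resolution between stacks.
    """
    layers = []
    for stack in range(3):
        layers += ["normal"] * N          # a stack of N normal cells
        if stack < 2:                     # reductions sit between stacks
            layers.append("reduction")
    layers.append("softmax")              # classification head
    return layers

# For N = 2:
# ['normal', 'normal', 'reduction', 'normal', 'normal',
#  'reduction', 'normal', 'normal', 'softmax']
```

Only the contents of the `"normal"` and `"reduction"` entries are searched; the scaffold above stays fixed.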


Figure 1: NASNet Search Space [53]. LEFT: the full outer structure (omitting skip inputs for clarity). MIDDLE: detailed view with the skip inputs. RIGHT: cell example. Dotted line demarcates a pairwise combination.

As depicted in Figure 1 (middle and right), each cell has two input activation tensors and one output. The very first cell takes two copies of the input image. After that, the inputs are the outputs of the previous two cells.

Both normal and reduction cells must conform to the following construction. The two cell input tensors are considered hidden states “0” and “1”. More hidden states are then constructed through pairwise combinations. A pairwise combination is depicted in Figure 1 (right, inside dashed circle). It consists in applying an operation (or op) to an existing hidden state, applying another op to another existing hidden state, and adding the results to produce a new hidden state. Ops belong to a fixed set of common convnet operations such as convolutions and pooling layers. Repeating hidden states or operations within a combination is permitted. In the cell example of Figure 1 (right), the first pairwise combination applies a 3x3 average pool op to hidden state 0 and a 3x3 max pool op to hidden state 1, in order to produce hidden state 2. The next pairwise combination can now choose from hidden states 0, 1, and 2 to produce hidden state 3 (it chose 0 and 1 in Figure 1), and so on. After exactly five pairwise combinations, any hidden states that remain unused (hidden states 5 and 6 in Figure 1) are concatenated to form the output of the cell (hidden state 7).

A given architecture is fully specified by the five pairwise combinations that make up the normal cell and the five that make up the reduction cell. Once the architecture is specified, the model still has two free parameters that can be used to alter its size (and its accuracy): the number of normal cells per stack (N) and the number of output filters of the convolution ops (F). N and F are determined manually.
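Since an architecture is just ten pairwise combinations, it can be held in a small data structure. The sketch below (identifier names are ours; the op list follows the Methods Details section) samples an architecture uniformly at random, as used to initialize the population:

```python
import random

# Op list from the Methods Details section (identifier names are ours).
OPS = ["none", "sep_conv_3x3", "sep_conv_5x5", "sep_conv_7x7",
       "avg_pool_3x3", "max_pool_3x3", "dil_sep_conv_3x3", "conv_1x7_7x1"]

def random_cell(num_combinations=5):
    """Sample one cell uniformly at random.

    Hidden states 0 and 1 are the two cell inputs; pairwise combination
    i consumes two existing hidden states (repeats allowed) and creates
    hidden state i + 2."""
    cell = []
    num_states = 2
    for _ in range(num_combinations):
        pair = [(random.randrange(num_states), random.choice(OPS))
                for _ in range(2)]
        cell.append(pair)
        num_states += 1  # each combination adds one hidden state
    return cell

def random_architecture():
    """An architecture is an independent normal cell and reduction cell."""
    return {"normal": random_cell(), "reduction": random_cell()}
```

Because every choice is uniform and independent, all architectures in the search space are equally likely, which is the property the random-search baseline and population initialization rely on.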

Evolutionary Algorithm

The evolutionary method we used is summarized in Algorithm 1. It keeps a population of P trained models throughout the experiment. The population is initialized with models with random architectures (“while |population| < P” in Algorithm 1). All architectures that conform to the search space described are possible and equally likely.

population ← empty queue                      ▷ The population.
history ← ∅                                   ▷ Will contain all models.
while |population| < P do                     ▷ Initialize population.
    model.arch ← RandomArchitecture()
    model.accuracy ← TrainAndEval(model.arch)
    add model to right of population
    add model to history
end while
while |history| < C do                        ▷ Evolve for C cycles.
    sample ← ∅                                ▷ Parent candidates.
    while |sample| < S do
        candidate ← random element from population
                                              ▷ The element stays in the population.
        add candidate to sample
    end while
    parent ← highest-accuracy model in sample
    child.arch ← Mutate(parent.arch)
    child.accuracy ← TrainAndEval(child.arch)
    add child to right of population
    add child to history
    remove dead from left of population       ▷ Oldest.
    discard dead
end while
return highest-accuracy model in history

Algorithm 1: Aging Evolution

After this, evolution improves the initial population in cycles (“while |history| < C” in Algorithm 1). At each cycle, it samples S random models from the population, each drawn uniformly at random with replacement. The model with the highest validation fitness within this sample is selected as the parent. A new architecture, called the child, is constructed from the parent by the application of a transformation called a mutation. A mutation causes a simple and random modification of the architecture and is described in detail below. Once the child architecture is constructed, it is then trained, evaluated, and added to the population. This process is called tournament selection [19].

It is common in tournament selection to keep the population size fixed at the initial value P. This is often accomplished with an additional step within each cycle: discarding (or killing) the worst model in the random S-sample. We will refer to this approach as non-aging evolution. In contrast, in this paper we prefer a novel approach: killing the oldest model in the population—that is, removing from the population the model that was trained the earliest (“remove dead from left of population” in Algorithm 1). This favors the newer models in the population. We will refer to this approach as aging evolution. In the context of architecture search, aging evolution allows us to explore the search space more, instead of zooming in on good models too early, as non-aging evolution would (see Discussion section for details).
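A minimal single-threaded sketch of the procedure, with `train_and_eval` standing in for actual network training (which in the experiments takes 25 epochs per model):

```python
import collections
import random

def aging_evolution(P, S, C, random_arch, mutate, train_and_eval):
    """Aging evolution: tournament selection in which the oldest model,
    rather than the worst, is removed at each cycle."""
    population = collections.deque()   # oldest model on the left
    history = []                       # every model ever evaluated

    while len(population) < P:         # initialize with random models
        model = {"arch": random_arch()}
        model["accuracy"] = train_and_eval(model["arch"])
        population.append(model)
        history.append(model)

    while len(history) < C:            # evolve for C cycles
        sample = [random.choice(population) for _ in range(S)]
        parent = max(sample, key=lambda m: m["accuracy"])
        child = {"arch": mutate(parent["arch"])}
        child["accuracy"] = train_and_eval(child["arch"])
        population.append(child)
        history.append(child)
        population.popleft()           # kill the oldest, keeping |pop| = P
    return max(history, key=lambda m: m["accuracy"])
```

Replacing `population.popleft()` with removal of the lowest-accuracy model in an S-sample recovers non-aging evolution.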

In practice, this algorithm is parallelized by distributing the main loop (“while |history| < C” in Algorithm 1) over multiple workers. A full implementation can be found online. Intuitively, the mutations can be thought of as providing exploration, while the parent selection provides exploitation. The sample size S controls the aggressiveness of the exploitation: S=1 reduces to a type of random search, and 1 < S ≤ P leads to evolution of varying greediness.
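The distribution over workers can be sketched with threads and a lock: the lock protects only parent sampling and population updates, while the expensive training step runs concurrently. This is a simplified stand-in for the actual distributed setup:

```python
import collections
import random
import threading

def parallel_aging_evolution(P, S, C, random_arch, mutate,
                             train_and_eval, num_workers=4):
    """Distribute the evolution loop over workers sharing one population.

    The lock is held only while sampling a parent and while inserting a
    trained child; the expensive train_and_eval call runs lock-free, so
    workers train different models concurrently. The history may
    overshoot C by at most num_workers - 1 models."""
    lock = threading.Lock()
    population = collections.deque()
    history = []

    for _ in range(P):                 # serial initialization for brevity
        model = {"arch": random_arch()}
        model["accuracy"] = train_and_eval(model["arch"])
        population.append(model)
        history.append(model)

    def worker():
        while True:
            with lock:
                if len(history) >= C:
                    return
                sample = [random.choice(population) for _ in range(S)]
                parent = max(sample, key=lambda m: m["accuracy"])
                arch = mutate(parent["arch"])
            accuracy = train_and_eval(arch)        # slow part, no lock
            with lock:
                population.append({"arch": arch, "accuracy": accuracy})
                history.append(population[-1])
                population.popleft()               # remove the oldest

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return max(history, key=lambda m: m["accuracy"])
```

Because aging evolution never blocks on a generation boundary, a worker that finishes training can immediately sample a new parent, which is what keeps resource utilization high.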

New models are constructed by applying a mutation to existing models, transforming their architectures in random ways. To navigate the NASNet search space described above, we use two main mutations that we call the hidden state mutation and the op mutation. A third mutation, the identity, is also possible. Only one of these mutations is applied in each cycle, choosing between them at random.

Figure 2: Illustration of the two mutation types.

The hidden state mutation consists of first making a random choice of whether to modify the normal cell or the reduction cell. Once a cell is chosen, the mutation picks one of the five pairwise combinations uniformly at random. Once the pairwise combination is picked, one of the two elements of the pair is chosen uniformly at random. The chosen element has one hidden state. This hidden state is now replaced with another hidden state from within the cell, subject to the constraint that no loops are formed (to keep the feed-forward nature of the convnet). Figure 2 (top) shows an example.

The op mutation behaves like the hidden state mutation as far as choosing one of the two cells, one of the five pairwise combinations, and one of the two elements of the pair. Then it differs in that it modifies the op instead of the hidden state. It does this by replacing the existing op with a random choice from a fixed list of ops (see Methods Details). Figure 2 (bottom) shows an example.
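Both mutations, plus the identity, can be sketched as one function over a simple architecture representation (hypothetical structure: each cell is a list of five combinations, each a list of two (input state, op) elements; note the paper fixes the identity mutation's probability at 0.05, whereas this sketch chooses among the three uniformly):

```python
import copy
import random

OPS = ["none", "sep_conv_3x3", "sep_conv_5x5", "sep_conv_7x7",
       "avg_pool_3x3", "max_pool_3x3", "dil_sep_conv_3x3", "conv_1x7_7x1"]

def mutate(arch):
    """Apply one mutation to an architecture.

    `arch` maps "normal"/"reduction" to a list of five pairwise
    combinations; combination i produces hidden state i + 2."""
    arch = copy.deepcopy(arch)
    kind = random.choice(["hidden_state", "op", "identity"])
    if kind == "identity":
        return arch                      # no change
    cell = arch[random.choice(["normal", "reduction"])]
    i = random.randrange(len(cell))      # one of the five combinations
    j = random.randrange(2)              # one of the two pair elements
    state, op = cell[i][j]
    if kind == "hidden_state":
        # Reconnect to any hidden state created before combination i;
        # this keeps the cell feed-forward (no loops possible).
        cell[i][j] = (random.randrange(i + 2), op)
    else:                                # op mutation
        cell[i][j] = (state, random.choice(OPS))
    return arch
```

Restricting the replacement state to indices below i + 2 is what enforces the no-loop constraint: a combination can only consume hidden states that already exist.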

Baseline Algorithms

Our main baseline is the application of RL to the same search space. RL was implemented using the algorithm and code in the baseline study [53]. An LSTM controller outputs the architectures, constructing the pairwise combinations one at a time, and then gets a reward for each architecture by training and evaluating it. More detail can be found in the baseline study. We also compared against random search (RS). In our RS implementation, each model is constructed randomly so that all models in the search space are equally likely, as in the initial population in the evolutionary algorithm. In other words, the models in RS experiments are not constructed by mutating existing models, so as to make new models independent from previous ones.

Experimental Setup

We ran controlled comparisons at scale, ensuring identical conditions for evolution, RL and random search (RS). In particular, all methods used the same computer code for network construction, training and evaluation. Experiments always searched on the CIFAR-10 dataset [26].

As in the baseline study, we first performed architecture search over small models (i.e. small N and F) until 20k models were evaluated. After that, we used the model augmentation trick [53]: we took architectures discovered by the search (e.g. the output of an evolutionary experiment) and turned them into full-size, accurate models. To accomplish this, we enlarged the models by increasing N and F so the resulting model sizes would match the baselines, and we trained the enlarged models for a longer time on the CIFAR-10 or the ImageNet classification datasets [26, 12]. For ImageNet, a stem was added at the input of the model to reduce the image size, as shown in Figure 5 (left). This is the same procedure as in the baseline study. To produce the largest model (see last paragraph of Results section; not included in tables), we increased N and F until we ran out of memory. Actual values of N and F for all models are listed in the Methods Details section.

Methods Details

This section complements the Methods section with the details necessary to reproduce our experiments. Possible ops: none (identity); 3x3, 5x5 and 7x7 separable (sep.) convolutions (convs.); 3x3 average (avg.) pool; 3x3 max pool; 3x3 dilated (dil.) sep. conv.; 1x7 then 7x1 conv. Evolution used P=100 and S=25. We searched on the CIFAR-10 dataset [26] with 5k withheld examples for validation, and on the standard ImageNet dataset [12]: 1.2M 331x331 images and 1k classes; 50k examples withheld for validation; the standard validation set used for testing. During the search phase, each model trained for 25 epochs with N=3/F=24 on 1 GPU. Each experiment ran on 450 K40 GPUs for 20k models (approx. 7 days). F refers to the number of filters of convolutions in the first stack; after each reduction cell, this number is doubled. To optimize evolution, we tried 5 configurations with P/S of 100/2, 100/50, 20/20, 100/25 and 64/16; the best was 100/25. The probability of the identity mutation was fixed at the small, arbitrary value of 0.05 and was not tuned. Other mutation probabilities were uniform, as described in the Methods. To optimize RL, we started with parameters already tuned in the baseline study and further optimized the learning rate over 8 configurations (0.00003, 0.00006, 0.00012, 0.0002, 0.0004, 0.0008, 0.0016, 0.0032); the best was 0.0008. To avoid selection bias, plots do not include optimization runs, as was decided a priori. The best 20 models were selected from each experiment and augmented to N=6/F=32, as in the baseline study: batch size 128, SGD with momentum rate 0.9, L2 weight decay, initial learning rate 0.024 with cosine decay over 600 epochs, ScheduledDropPath to 0.7 probability, and an auxiliary softmax with half the weight of the main softmax. For Table 1, we used N/F of 6/32 and 6/36. For the ImageNet table, N/F were 6/190 and 6/204 with standard training methods [43]: distributed sync SGD with 100 P100 GPUs; RMSProp optimizer with 0.9 decay and ε=0.1; weight decay; 0.1 label smoothing; auxiliary softmax weighted by 0.4; dropout probability 0.5; ScheduledDropPath to 0.7 probability (as in the baseline; note that this trick only contributes ∼0.3% top-1 ImageNet accuracy); initial learning rate 0.001, decaying every 2 epochs by 0.97. The largest model used N=6/F=448. Wherever applicable, we used the same conditions as the baseline study.
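For reference, the cosine learning-rate decay mentioned above follows the standard schedule (a sketch; warm-up and other implementation details of the actual training code may differ):

```python
import math

def cosine_decay_lr(epoch, total_epochs=600, initial_lr=0.024):
    """Cosine decay from initial_lr down to 0 over total_epochs."""
    return 0.5 * initial_lr * (1.0 + math.cos(math.pi * epoch / total_epochs))
```

At the midpoint of training the rate is exactly half the initial value, and it approaches zero smoothly at the end rather than dropping in steps.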


Results

Comparison With RL and RS Baselines

Currently, reinforcement learning (RL) is the predominant method for architecture search. In fact, today’s state-of-the-art image classifiers have been obtained by architecture search with RL [53, 28]. Here we seek to compare our evolutionary approach against their RL algorithm. We performed large-scale side-by-side architecture-search experiments on CIFAR-10. We first optimized the hyper-parameters of the two approaches independently (details in Methods Details section). Then we ran 5 repeats of each of the two algorithms—and also of random search (RS).

Figure 3: Time-course of 5 identical large-scale experiments for each algorithm (evolution, RL, and RS), showing accuracy before augmentation on CIFAR-10. All experiments were stopped when 20k models were evaluated, as done in the baseline study. Note this plot does not show the compute cost of models, which was higher for the RL ones.

Figure 3 shows the model accuracy as the experiments progress, highlighting that evolution yielded more accurate models at the earlier stages, which could become important in a resource-constrained regime where the experiments may have to be stopped early (for example, when 450 GPUs for 7 days is too much). At the later stages, if we allow the experiments to run for the full 20k models (as in the baseline study), evolution produced models with similar accuracy. Both evolution and RL compared favorably against RS. It is important to note that the vertical axis of Figure 3 does not present the compute cost of the models, only their accuracy. Next, we will consider their compute cost as well.

As in the baseline study, the architecture-search experiments above were performed over small models, to be able to train them quicker. We then used the model augmentation trick [53] by which we take an architecture discovered by the search (e.g. the output of an evolutionary experiment) and turn it into a full-size, accurate model, as described in the Methods.

Figure 4: Final augmented models from 5 identical architecture-search experiments for each algorithm, on CIFAR-10. Each marker corresponds to the top models from one experiment.
Figure 5: AmoebaNet-A architecture. The overall model [53] (LEFT) and the AmoebaNet-A normal cell (MIDDLE) and reduction cell (RIGHT).

Figure 4 compares the augmented top models from the three sets of experiments. It shows test accuracy and model compute cost. The latter is measured in FLOPs, by which we mean the total count of operations in the forward pass, so lower is better. Evolved architectures had higher accuracy (and similar FLOPs) than those obtained with RS, and lower FLOPs (and similar accuracy) than those obtained with RL. Number of parameters showed similar behavior to FLOPs. Therefore, evolution occupied the ideal relative position in this graph within the scope of our case study.

So far we have been comparing evolution with our reproduction of the experiments in the baseline study, but it is also informative to compare directly against the results reported by the baseline study. We select our evolved architecture with highest validation accuracy and call it AmoebaNet-A (Figure 5). Table 1 compares its test accuracy with the top model of the baseline study, NASNet-A. Such a comparison is not entirely controlled, as we have no way of ensuring the network training code was identical and that the same number of experiments were done to obtain the final model. The table summarizes the results of training AmoebaNet-A at sizes comparable to a NASNet-A version, showing that AmoebaNet-A is slightly more accurate (when matching model size) or considerably smaller (when matching accuracy). We did not train our model at larger sizes on CIFAR-10. Instead, we moved to ImageNet to do further comparisons in the next section.

Model # Params Test Error (%)
NASNet-A (baseline) 3.3 M
AmoebaNet-A 2.6 M
AmoebaNet-A 3.2 M
Table 1: CIFAR-10 testing set results for AmoebaNet-A, compared to top model reported in the baseline study.

ImageNet Results

Following the accepted standard, we compare our top model’s classification accuracy on the popular ImageNet dataset against other top models from the literature. Again, we use AmoebaNet-A, the model with the highest validation accuracy on CIFAR-10 among our evolution experiments. We highlight that the model was evolved on CIFAR-10 and then transferred to ImageNet, so the evolved architecture cannot have overfit the ImageNet dataset. When re-trained on ImageNet, AmoebaNet-A performs comparably to the baseline for the same number of parameters (Table 2).

Model # Parameters # Multiply-Adds Top-1 / Top-5 Accuracy (%)
Incep-ResNet V2 [43] 55.8M 13.2B 80.4 / 95.3
ResNeXt-101 [47] 83.6M 31.5B 80.9 / 95.6
PolyNet [50] 92.0M 34.7B 81.3 / 95.8
Dual-Path-Net-131 [7] 79.5M 32.0B 81.5 / 95.8
GeNet-2 [46]* 156M 72.1 / 90.4
Block-QNN-B [51]* 75.7 / 92.6
Hierarchical [29]* 64M 79.7 / 94.8
NASNet-A [53] 88.9M 23.8B 82.7 / 96.2
PNASNet-5 [28] 86.1M 25.0B 82.9 / 96.2
AmoebaNet-A* 86.7M 23.1B 82.8 / 96.1
AmoebaNet-A* 469M 104B 83.9 / 96.6
Table 2: ImageNet classification results for AmoebaNet-A compared to hand-designs (top rows) and other automated methods (middle rows). The evolved AmoebaNet-A architecture (bottom rows) reaches the current state of the art (SOTA) at similar model sizes and sets a new SOTA at a larger size. All evolution-based approaches are marked with a *. We omitted Squeeze-and-Excite-Net because it was not benchmarked on the same ImageNet dataset version.

Finally, we focused on AmoebaNet-A exclusively and enlarged it, setting a new state-of-the-art accuracy on ImageNet of 83.9%/96.6% top-1/5 accuracy with 469M parameters. Such high parameter counts may be beneficial in training other models too but we have not managed to do this yet.


Discussion

This section will suggest directions for future work, which we will motivate by speculating about the evolutionary process and by summarizing additional minor results. The details of these minor results have been relegated to the supplements, as they are not necessary to understand or reproduce our main results above.

Scope of results. Some of our findings may be restricted to the search spaces and datasets we used. A natural direction for future work is to extend the controlled comparison to more search spaces, datasets, and tasks, to verify generality, or to more algorithms. Supplement A presents preliminary results, performing evolutionary and RL searches over three search spaces (SP-I: same as in the Results section; SP-II: like SP-I but with more possible ops; SP-III: like SP-II but with more pairwise combinations) and three datasets (gray-scale CIFAR-10, MNIST, and gray-scale ImageNet), at a small-compute scale (on CPU). Evolution reached equal or better accuracy in all cases (Figure 6, top).

Figure 6: TOP: Comparison of the final model accuracy in five different contexts, from left to right: G-CIFAR/SP-I, G-CIFAR/SP-II, G-CIFAR/SP-III, MNIST/SP-I and G-ImageNet/SP-I. Each circle marks the top test accuracy at the end of one experiment. BOTTOM: Search progress of the experiments in the case of G-CIFAR/SP-II (LEFT, best for RL) and G-CIFAR/SP-III (RIGHT, best for evolution).

Algorithm speed. In our comparison study, Figure 3 suggested that both RL and evolution are approaching a common accuracy asymptote. That raises the question of which algorithm gets there faster. The plots indicate that evolution reaches half-maximum accuracy in roughly half the time. We abstain, nevertheless, from further quantifying this effect, since it depends strongly on how speed is measured (the number of models necessary to reach a given accuracy depends on that accuracy; the natural choice of threshold may be too low to be informative; etc.). Algorithm speed may be more important when exploring larger spaces, where reaching the optimum can require more compute than is available. We saw an example of this in the SP-III space, where evolution stood out (Figure 6, bottom-right). Therefore, future work could explore evolving on even larger spaces.

Model speed. The speed of individual models produced is also relevant. Figure 4 demonstrated that evolved models are faster (lower FLOPs). We speculate that asynchronous evolution may be reducing the FLOPs because it is indirectly optimizing for speed even when training for a fixed number of epochs: fast models may do well because they “reproduce” quickly even if they initially lack the higher accuracy of their slower peers. Verifying this speculation could be the subject of future work. As mentioned in the Related Work section, in this work we only considered asynchronous algorithms (as opposed to generational evolutionary methods) to ensure high resource utilization. Future work may explore how asynchronous and generational algorithms compare with regard to model accuracy.

Benefits of aging evolution. Aging evolution seemed advantageous in additional small-compute-scale experiments, shown in Figure 7 and presented in more detail in Supplement B. These were carried out on CPU instead of GPU, and used a gray-scale version of CIFAR-10, to reduce compute requirements. In the supplement, we also show that these results tend to hold when varying the dataset or the search space.

Figure 7: Small-compute-scale comparison between our aging tournament selection variant and the non-aging variant, for different population sizes (P) and sample sizes (S), showing that aging tends to be beneficial (most markers are above the line).

Understanding aging evolution and regularization. We can speculate that aging may help navigate the training noise in evolutionary experiments, as follows. Noisy training means that models may sometimes reach high accuracy just by luck. In non-aging evolution (NAE, i.e. standard tournament selection), such lucky models may remain in the population for a long time—even for the whole experiment. One lucky model, therefore, can produce many children, causing the algorithm to focus on it, reducing exploration. Under aging evolution (AE), on the other hand, all models have a short lifespan, so the population is wholly renewed frequently, leading to more diversity and more exploration. In addition, another effect may be in play, which we describe next. In AE, because models die quickly, the only way an architecture can remain in the population for a long time is by being passed down from parent to child through the generations. Each time an architecture is inherited it must be re-trained. If it produces an inaccurate model when re-trained, that model is not selected by evolution and the architecture disappears from the population. The only way for an architecture to remain in the population for a long time is to re-train well repeatedly. In other words, AE can only improve a population through the inheritance of architectures that re-train well. (In contrast, NAE can improve a population by accumulating architectures/models that were lucky when they trained the first time). That is, AE is forced to pay attention to architectures rather than models. In other words, the addition of aging involves introducing additional information to the evolutionary process: architectures should re-train well. 
This additional information prevents overfitting to the training noise, which makes it a form of regularization in the broader mathematical sense. Regardless of the exact mechanism, in Supplement C we perform experiments to verify the plausibility of the conjecture that aging helps navigate noise. There we construct a toy search space where the only difficulty is a noisy evaluation. If our conjecture is true, AE should be better in that toy space too. We found this to be the case. We leave further verification of the conjecture to future work, noting that theoretical results may prove useful here.
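The conjecture can be illustrated with a toy simulation in the spirit of Supplement C (entirely hypothetical: "architectures" are bit strings, true quality is the fraction of ones, and evaluation adds Gaussian noise):

```python
import random

def run(aging, bits=16, P=16, S=4, cycles=400, noise=0.4):
    """Toy evolution under a noisy fitness evaluation.

    The true quality of a bit-string architecture is its fraction of
    ones; selection only ever sees the noisy evaluation."""
    def evaluate(arch):
        return sum(arch) / bits + random.gauss(0.0, noise)

    pop = []  # list of (architecture, noisy_accuracy), oldest first
    for _ in range(P):
        arch = [random.randrange(2) for _ in range(bits)]
        pop.append((arch, evaluate(arch)))

    for _ in range(cycles):
        sample = [random.choice(pop) for _ in range(S)]
        parent = max(sample, key=lambda m: m[1])
        child = list(parent[0])
        flip = random.randrange(bits)
        child[flip] = 1 - child[flip]              # mutation: flip one bit
        pop.append((child, evaluate(child)))
        if aging:
            pop.pop(0)                             # AE: remove the oldest
        else:
            idx = random.sample(range(len(pop)), S)
            pop.pop(min(idx, key=lambda i: pop[i][1]))  # NAE: worst of S
    return max(sum(a) / bits for a, _ in pop)      # best *true* quality left
```

Comparing `run(aging=True)` against `run(aging=False)` over repeated seeds gives an inexpensive way to probe whether finite lifetimes help under noisy evaluation; we do not claim this toy reproduces the supplement's numbers.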

Simplicity of aging evolution. A desirable feature of evolutionary algorithms is their simplicity. By design, the application of a mutation causes a random change. The process of constructing new architectures, therefore, is entirely random. What makes evolution different from random search is that only the good models are selected to be mutated. This selection tends to improve the population over time. In this sense, evolution is simply “random search plus selection”. In outline, the process can be described briefly: “keep a population of P models and proceed in cycles: at each cycle, copy-mutate the best of S random models and kill the oldest in the population”. Implementation-wise, we believe the methods of this paper are sufficient for a reader to understand evolution. The sophisticated nature of the RL alternative introduces complexity in its implementation: it requires back-propagation and poses challenges to parallelization [36]. Even different implementations of the same algorithm have been shown to produce different results [21]. Finally, evolution is also simple in that it has few meta-parameters, most of which do not need tuning [35]. In our study, we only adjusted 2 meta-parameters and only through a handful of attempts (see Methods Details section). In contrast, note that the RL baseline requires training an agent/controller which is often itself a neural network with many weights (such as an LSTM), and its optimization has more meta-parameters to adjust: learning rate schedule, greediness, batching, replay buffer, etc. (These meta-parameters are all in addition to the weights and training parameters of the image classifiers being searched, which are present in both approaches.) It is possible that through careful tuning, RL could be made to produce even better models than evolution, but such tuning would likely involve running many experiments, making it more costly. Evolution did not require much tuning, as described.
It is also possible that random search would produce equally good models if run for a very long time, which would be very costly.
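The loop described above ("keep a population of N models and proceed in cycles: at each cycle, copy-mutate the best of S random models and kill the oldest") fits in a few lines of Python. This is a minimal sketch, not our open-sourced implementation; `random_model`, `mutate`, and `evaluate` are hypothetical placeholders for the search-space-specific operations.

```python
import collections
import random

def aging_evolution(population_size, sample_size, cycles,
                    random_model, mutate, evaluate):
    """Minimal sketch of aging evolution: random search plus selection,
    with age-based removal. Illustrative only."""
    population = collections.deque()  # oldest individual on the left
    history = []
    # Initialize the population with random models.
    while len(population) < population_size:
        model = random_model()
        population.append((model, evaluate(model)))
        history.append(population[-1])
    # Each cycle: copy-mutate the best of S random models, kill the oldest.
    for _ in range(cycles):
        sample = random.sample(list(population), sample_size)
        parent = max(sample, key=lambda ind: ind[1])  # highest accuracy wins
        child = mutate(parent[0])
        population.append((child, evaluate(child)))
        population.popleft()  # aging: remove the oldest, not the worst
        history.append(population[-1])
    return max(history, key=lambda ind: ind[1])  # best model ever evaluated
```

Replacing the `popleft()` call with removal of the lowest-accuracy individual in the tournament would recover standard (non-aging) tournament selection.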

Interpreting architecture search.

Another important direction for future work is analyzing architecture-search experiments (regardless of the algorithm used) to try to discover new neural network design patterns. Anecdotally, for example, we found that architectures with high output-vertex fan-in (number of edges into the output vertex) tend to be favored in all our experiments. In fact, the models in the final evolved populations have a mean fan-in value that is 3 standard deviations above what would be expected from randomly generated models. We verified this pattern by training various models with different fan-in values, and the results confirm that accuracy increases with fan-in, as had been found in ResNeXt [47]. Discovering broader patterns may require designing search spaces specifically for this purpose.
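The fan-in statistic is straightforward to compute if a cell is represented as a DAG edge list. The representation and function names below are illustrative sketches, not taken from our code.

```python
def output_fanin(edges, output_vertex):
    """Count the edges into the output vertex of a cell's DAG.
    `edges` is a list of (src, dst) vertex pairs."""
    return sum(1 for _, dst in edges if dst == output_vertex)

def fanin_zscore(observed, random_fanins):
    """How many (population) standard deviations the observed fan-in
    lies above the mean fan-in of randomly generated models."""
    n = len(random_fanins)
    mean = sum(random_fanins) / n
    var = sum((x - mean) ** 2 for x in random_fanins) / n
    return (observed - mean) / var ** 0.5
```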

Additional AmoebaNets. Using variants of the evolutionary process described, we obtained three additional models, which we named AmoebaNet-B, AmoebaNet-C, and AmoebaNet-D. We describe these models and the process that led to them in detail in Supplement D, but we summarize them here. AmoebaNet-B was obtained through platform-aware architecture search over a larger version of the NASNet space. AmoebaNet-C is simply a model that showed promise early on in the above experiments by reaching high accuracy with relatively few parameters; we mention it here for completeness, as it has been referenced in other work [11]. AmoebaNet-D was obtained by manually extrapolating the evolutionary process and optimizing the resulting architecture for training speed. It is very efficient: AmoebaNet-D won the Stanford DAWNBench competition for lowest training cost on ImageNet [9].


This paper used an evolutionary algorithm to discover image classifier architectures. Our contributions are the following:

  • We proposed aging evolution, a variant of tournament selection by which genotypes die according to their age, favoring the young. This improved upon standard tournament selection while still allowing for efficiency at scale through asynchronous population updating. We open-sourced the code. We also implemented simple mutations that permit the application of evolution to the popular NASNet search space.

  • We presented the first controlled comparison of algorithms for image classifier architecture search in a case study of evolution, RL and random search. We showed that evolution had somewhat faster search speed and stood out in the regime of scarcer resources / early stopping. Evolution also matched RL in final model quality, employing a simpler method.

  • We evolved AmoebaNet-A (Figure 5), a competitive image classifier. On ImageNet, it is the first evolved model to surpass hand-designs. Matching size, AmoebaNet-A has comparable accuracy to top image classifiers discovered with other architecture-search methods. At large size, it sets a new state-of-the-art accuracy. We open-sourced code and checkpoint.


We wish to thank Megan Kacholia, Vincent Vanhoucke, Xiaoqiang Zheng and especially Jeff Dean for their support and valuable input; Chris Ying for his work helping tune AmoebaNet models and for his help with specialized hardware, Barret Zoph and Vijay Vasudevan for help with the code and experiments used in their paper [53], as well as Jiquan Ngiam, Jacques Pienaar, Arno Eigenwillig, Jianwei Xie, Derek Murray, Gabriel Bender, Golnaz Ghiasi, Saurabh Saxena and Jie Tan for other coding contributions; Jacques Pienaar, Luke Metz, Chris Ying and Andrew Selle for manuscript comments, all the above and Patrick Nguyen, Samy Bengio, Geoffrey Hinton, Risto Miikkulainen, Jeff Clune, Kenneth Stanley, Yifeng Lu, David Dohan, David So, David Ha, Vishy Tirumalashetty, Yoram Singer, and Ruoming Pang for helpful discussions; and the larger Google Brain team.


  • [1] P. J. Angeline, G. M. Saunders, and J. B. Pollack. An evolutionary algorithm that constructs recurrent neural networks. IEEE Transactions on Neural Networks, 1994.
  • [2] B. Baker, O. Gupta, N. Naik, and R. Raskar. Designing neural network architectures using reinforcement learning. In ICLR, 2017.
  • [3] B. Baker, O. Gupta, R. Raskar, and N. Naik. Accelerating neural architecture search using performance prediction. ICLR Workshop, 2017.
  • [4] J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. JMLR, 2012.
  • [5] A. Brock, T. Lim, J. M. Ritchie, and N. Weston. Smash: one-shot model architecture search through hypernetworks. In ICLR, 2018.
  • [6] H. Cai, T. Chen, W. Zhang, Y. Yu, and J. Wang. Efficient architecture search by network transformation. In AAAI, 2018.
  • [7] Y. Chen, J. Li, H. Xiao, X. Jin, S. Yan, and J. Feng. Dual path networks. In NIPS, 2017.
  • [8] D. Ciregan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In CVPR, 2012.
  • [9] C. Coleman, D. Kang, D. Narayanan, L. Nardi, T. Zhao, J. Zhang, P. Bailis, K. Olukotun, C. Re, and M. Zaharia. Analysis of dawnbench, a time-to-accuracy machine learning performance benchmark. arXiv preprint arXiv:1806.01427, 2018.
  • [10] C. Cortes, X. Gonzalvo, V. Kuznetsov, M. Mohri, and S. Yang. Adanet: Adaptive structural learning of artificial neural networks. In ICML, 2017.
  • [11] E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le. Autoaugment: Learning augmentation policies from data. arXiv, 2018.
  • [12] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
  • [13] T. Domhan, J. T. Springenberg, and F. Hutter. Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. In IJCAI, 2017.
  • [14] T. Elsken, J.-H. Metzen, and F. Hutter. Simple and efficient architecture search for convolutional neural networks. ICLR Workshop, 2017.
  • [15] T. Elsken, J. H. Metzen, and F. Hutter. Neural architecture search: A survey. arXiv, 2018.
  • [16] S. E. Fahlman and C. Lebiere. The cascade-correlation learning architecture. In NIPS, 1990.
  • [17] M. Feurer, A. Klein, K. Eggensperger, J. Springenberg, M. Blum, and F. Hutter. Efficient and robust automated machine learning. In NIPS, 2015.
  • [18] D. Floreano, P. Dürr, and C. Mattiussi. Neuroevolution: from architectures to learning. Evolutionary Intelligence, 2008.
  • [19] D. E. Goldberg and K. Deb. A comparative analysis of selection schemes used in genetic algorithms. In FOGA, 1991.
  • [20] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [21] P. Henderson, R. Islam, P. Bachman, J. Pineau, D. Precup, and D. Meger. Deep reinforcement learning that matters. AAAI, 2018.
  • [22] G. S. Hornby. Alps: the age-layered population structure for reducing the problem of premature convergence. In GECCO, 2006.
  • [23] J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. CVPR, 2018.
  • [24] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely connected convolutional networks. In CVPR, 2017.
  • [25] A. Klein, S. Falkner, J. T. Springenberg, and F. Hutter. Learning curve prediction with bayesian neural networks. ICLR, 2017.
  • [26] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master’s thesis, Dept. of Computer Science, U. of Toronto, 2009.
  • [27] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
  • [28] C. Liu, B. Zoph, J. Shlens, W. Hua, L.-J. Li, L. Fei-Fei, A. Yuille, J. Huang, and K. Murphy. Progressive neural architecture search. ECCV, 2018.
  • [29] H. Liu, K. Simonyan, O. Vinyals, C. Fernando, and K. Kavukcuoglu. Hierarchical representations for efficient architecture search. In ICLR, 2018.
  • [30] H. Mendoza, A. Klein, M. Feurer, J. T. Springenberg, and F. Hutter. Towards automatically-tuned neural networks. In Workshop on Automatic Machine Learning, 2016.
  • [31] R. Miikkulainen, J. Liang, E. Meyerson, A. Rawal, D. Fink, O. Francon, B. Raju, A. Navruzyan, N. Duffy, and B. Hodjat. Evolving deep neural networks. arXiv, 2017.
  • [32] G. F. Miller, P. M. Todd, and S. U. Hegde. Designing neural networks using genetic algorithms. In ICGA, 1989.
  • [33] R. Negrinho and G. Gordon. Deeparchitect: Automatically designing and training deep architectures. arXiv, 2017.
  • [34] H. Pham, M. Y. Guan, B. Zoph, Q. V. Le, and J. Dean. Faster discovery of neural architectures by searching for paths in a large model. ICLR Workshop, 2018.
  • [35] E. Real, S. Moore, A. Selle, S. Saxena, Y. L. Suematsu, Q. Le, and A. Kurakin. Large-scale evolution of image classifiers. In ICML, 2017.
  • [36] T. Salimans, J. Ho, X. Chen, and I. Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. arXiv, 2017.
  • [37] S. Saxena and J. Verbeek. Convolutional neural fabrics. In NIPS, 2016.
  • [38] J. P. Simmons, L. D. Nelson, and U. Simonsohn. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 2011.
  • [39] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 2014.
  • [40] K. O. Stanley, B. D. Bryant, and R. Miikkulainen. Real-time neuroevolution in the nero video game. TEVC, 2005.
  • [41] K. O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Evol. Comput., 2002.
  • [42] M. Suganuma, S. Shirakawa, and T. Nagao. A genetic programming approach to designing convolutional neural network architectures. In GECCO, 2017.
  • [43] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In AAAI, 2017.
  • [44] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
  • [45] L. Wan, M. Zeiler, S. Zhang, Y. Le Cun, and R. Fergus. Regularization of neural networks using dropconnect. In ICML, 2013.
  • [46] L. Xie and A. Yuille. Genetic CNN. In ICCV, 2017.
  • [47] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In CVPR, 2017.
  • [48] X. Yao. Evolving artificial neural networks. IEEE, 1999.
  • [49] S. Zagoruyko and N. Komodakis. Wide residual networks. In BMVC, 2016.
  • [50] X. Zhang, Z. Li, C. C. Loy, and D. Lin. Polynet: A pursuit of structural diversity in very deep networks. In CVPR, 2017.
  • [51] Z. Zhong, J. Yan, and C.-L. Liu. Practical network blocks design with q-learning. In AAAI, 2018.
  • [52] B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. In ICLR, 2016.
  • [53] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learning transferable architectures for scalable image recognition. In CVPR, 2018.


In this supplement, we extend the comparison between evolution and reinforcement learning (RL) from the Results Section. Evolutionary algorithms and RL have both been applied recently to architecture search. Yet, comparison is difficult because studies tend to use novel search spaces, preventing direct attribution of the results to the algorithm; for example, the search space may be small rather than the algorithm fast. The picture is blurred further by the use of different training techniques that affect model accuracy [8, 45, 39], different definitions of FLOPs that affect model compute cost, and different hardware platforms that affect algorithm run-time (a Tesla P100 can be twice as fast as a K40, for example). Accounting for all these factors, we compare the two approaches in a variety of image classification contexts. To achieve statistical confidence, we present repeated experiments without sampling bias.


All evolution and RL experiments used the NASNet search space design [53]. Within this design, we define three concrete search spaces that differ in the number of pairwise combinations (C) and in the number of ops allowed (see Methods Section). In order of increasing size, we will refer to them as SP-I (e.g. Figure LABEL:small_space_subfig), SP-II, and SP-III (e.g. Figure LABEL:small_bigspace_subfig). SP-I is the exact variant used in the main text and in the study that we use as our baseline [53]. SP-II increases the allowed ops from 8 to 19 (identity; 1x1 and 3x3 convs.; 3x3, 5x5 and 7x7 sep. convs.; 2x2 and 3x3 avg. pools; 2x2 min pool; 2x2 and 3x3 max pools; 3x3, 5x5 and 7x7 dil. sep. convs.; 1x3 then 3x1 conv.; 1x7 then 7x1 conv.; 3x3 dil. convs. with rates 2, 4 and 6). SP-III allows for larger tree structures within the cells (larger C, same 19 ops).

The evolutionary algorithm is the same as that in the main text. The RL algorithm is the one used in the baseline study. We chose this baseline because, when we began, it had obtained the most accurate results on CIFAR-10, a popular dataset for image classifier architecture search.

We ran evolution and RL experiments for comparison purposes at different compute scales, always ensuring both approaches used identical conditions. In particular, evolution and RL used the same code for network construction, training and evaluation. The experiments in this supplement were performed at a smaller compute scale than in the main text, to reduce resource usage: we used gray-scale versions of popular datasets (e.g. “G-Imagenet” instead of ImageNet), we ran on CPU instead of GPU and trained relatively small models (F=8, see Methods Details in main text) for only 4 epochs. Where unstated, the experiments ran on SP-I and G-CIFAR.


We first optimized the meta-parameters for evolution and for RL by running experiments with each algorithm, repeatedly, under each condition (Figure LABEL:small_metaparams_subfig). We then compared the algorithms in 5 different contexts by swapping the dataset or the search space (Figure LABEL:small_contexts20k_subfig). Evolution was either better than or equal to RL, with statistical significance. The best contexts for evolution and for RL are shown in more detail in Figures LABEL:small_bigcell_subfig and LABEL:small_many_subfig, respectively. They show the progress of 5 repeats of each algorithm. The initial speed of evolution is noticeable, especially in the largest search space (SP-III). Figures LABEL:small_space_subfig and LABEL:small_bigspace_subfig illustrate the top architectures from SP-I and SP-III, respectively. Regardless of context, Figure LABEL:small_contexts5k_subfig indicates that accuracy under evolution increases significantly faster than under RL at the initial stage. This stage was not accelerated by higher RL learning rates.


The main text provides a comparison between algorithms for image classifier architecture search in the context of the SP-I search space on CIFAR-10, at scale. This supplement extends those results, varying the dataset and the search space by running many small experiments, confirming the conclusions of the main text.

Supplement B: Aging and Non-Aging Evolution


In this supplement, we will extend the comparison between aging evolution (AE) and standard tournament selection / non-aging evolution (NAE). As was described in the Methods Section, the evolutionary algorithm used in this paper keeps the population size constant by always removing the oldest model whenever a new one is added; we will refer to this algorithm as AE. A recent paper used a similar method but kept the population size constant by removing the worst model in each tournament [35]; we will refer to that algorithm as NAE. This supplement will show how these two algorithms compare in a variety of contexts.
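The only algorithmic difference between AE and NAE is the removal rule, which a single evolution cycle makes explicit. The sketch below is illustrative (names and the (model, accuracy) representation are ours, not from the released code): selection and mutation are identical in the two variants; only the individual removed to keep the population size constant changes.

```python
import random

def evolve_one_cycle(population, sample_size, mutate, evaluate, aging=True):
    """One cycle of tournament-selection evolution over a list of
    (model, accuracy) pairs, ordered oldest first."""
    tournament = random.sample(range(len(population)), sample_size)
    best = max(tournament, key=lambda i: population[i][1])
    child = mutate(population[best][0])
    population.append((child, evaluate(child)))
    if aging:
        population.pop(0)  # AE: remove the oldest model in the population
    else:
        worst = min(tournament, key=lambda i: population[i][1])
        population.pop(worst)  # NAE: remove the worst model of the tournament
    return population
```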


The search spaces and datasets were the same as in Supplement A.


Figure B-1: A comparison of NAE and AE under 5 different contexts, spanning different datasets and search spaces: G-CIFAR/SP-I, G-CIFAR/SP-II, G-CIFAR/SP-III, MNIST/SP-I and G-ImageNet/SP-I, shown from left to right. For each context, we show the final MTA of a few NAE and a few AE experiments (circles) in adjacent columns. We superpose ±2 SEM error bars, where SEM denotes the standard error of the mean. The first context contains many repeats with identical meta-parameters, and their MTA values appear normally distributed (Shapiro–Wilk test). Under this normality assumption, the error bars represent approximate 95% confidence intervals.
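The error-bar computation is standard; for reference, a mean with a 2·SEM half-width can be computed as below (a generic sketch, not the analysis code used for the figure). Under a normality assumption, mean ± 2·SEM approximates a 95% confidence interval.

```python
def sem_error_bar(values):
    """Return (mean, 2 * SEM), where SEM = sample std / sqrt(n)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((x - mean) ** 2 for x in values) / (n - 1)  # sample variance
    sem = (var / n) ** 0.5
    return mean, 2 * sem
```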

We performed experiments in 5 different search space–dataset contexts. In each context, we ran several repeats of evolutionary search using NAE and AE (Figure B-1). Under 4 of the 5 contexts, AE resulted in statistically significant higher accuracy at the end of the runs, on average. The exception was the G-ImageNet context, where the experiments were extremely short due to the compute demands of training on so much data using only CPUs. Interestingly, in the two contexts with the bigger search spaces (SP-II and SP-III), all AE runs did better than all NAE runs.

Additionally, we performed three experiments comparing AE and NAE at scale, under the same conditions as in the main text. The results, which can be seen in Figure B-2, provide some verification that observations from smaller CPU experiments in the previous paragraph generalize to the large-compute regime.

Figure B-2: A comparison of AE and NAE at scale. These experiments use the same conditions as the main text (including dataset, search space, resources and duration). From top to bottom: an AE experiment with good AE meta-parameters from Supplement A, an analogous NAE experiment, and an NAE experiment with the meta-parameters used in a recent study [35]. These accuracy values are not meaningful in absolute terms, as the models need to be augmented to reach their maximum accuracy (as described in the Methods Section).


The Discussion Section in the main text suggested that AE tends to perform better than NAE across various parameters for one fixed search space–dataset context. Such robustness is desirable for computationally demanding architecture search experiments, where we cannot always afford many runs to optimize the meta-parameters. This supplement extends those results to show that the conclusion holds across various contexts.

Supplement C: Aging Evolution in Toy Search Space


As indicated in the Discussion Section, we suspect that aging may help navigate the noisy evaluation in an evolution experiment. We leave verification of this suspicion to future work, but for motivation we provide here a sanity check for it. We construct a toy search space in which the only difficulty is a noisy evaluation. Within this toy search space, we will see that aging evolution outperforms non-aging evolution.


The toy search space we use here does not involve any neural networks. The goal is to evolve solutions to a very simple, single-optimum, D-dimensional, noisy optimization problem with a signal-to-noise ratio matching that of our neuro-evolution experiments.

The search space used is the set of vertices of a D-dimensional unit cube. A specific vertex is "analogous" to a neural network architecture in a real experiment. A vertex can be represented as the sequence of its coordinates (0s and 1s), i.e. a bit-string. In other words, this bit-string constitutes a simulated architecture. In a real experiment, training and evaluating an architecture yields a noisy accuracy. Likewise, in this toy search space, we assign a noisy simulated accuracy (SA) to each cube vertex. The SA is the fraction of coordinates that are zero, plus a small amount of zero-mean Gaussian noise with a standard deviation chosen to match the observed noise for neural networks. Thus, the goal is to get close to the optimum, the origin. The sample complexity used was 10k. This space is helpful because an experiment completes in milliseconds.
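The toy search space is simple enough to state in code. The sketch below assumes a noise standard deviation of 0.01 purely for illustration; in our experiments the noise level was matched to that observed for neural networks.

```python
import random

def simulated_accuracy(vertex, noise_std=0.01):
    """Noisy SA of a cube vertex (a list of 0s and 1s): the fraction of
    zero coordinates plus zero-mean Gaussian noise. noise_std=0.01 is an
    illustrative assumption."""
    clean = vertex.count(0) / len(vertex)
    return clean + random.gauss(0.0, noise_std)

def mutate(vertex):
    """Flip one random coordinate: the toy analogue of an architecture
    mutation."""
    child = list(vertex)
    i = random.randrange(len(child))
    child[i] = 1 - child[i]
    return child
```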

This optimization problem can be seen as a simplification of the evolutionary search for the minimum of a multi-dimensional integer-valued paraboloid with bounded support, where the mutations treat the values along each coordinate categorically. If we restrict the domain along each direction to the set {0, 1}, we reduce the problem to the unit cube described above. The paraboloid’s value at a cube corner is just the number of coordinates that are not zero. We mention this connection because searching for the minimum of a paraboloid seems like a more natural choice for a trivial problem (“trivial” compared to architecture search). The simpler unit cube version, however, was chosen because it permits faster computation.

We stress that these simulations are not intended to truly mimic architecture search experiments over the space of neural networks. We used them only as a testing ground for techniques that evolve solutions in the presence of noisy evaluations.


We found that optimized NAE and AE perform similarly in low-dimensional problems, which are easier. As the dimensionality (D) increases, AE becomes relatively better than NAE (Figure C-1).

Figure C-1: Results in the toy search space. The graph summarizes thousands of evolutionary search simulations. The vertical axis measures the simulated accuracy (SA) and the horizontal axis the dimensionality (D) of the problem, a measure of its difficulty. For each D, we optimized the meta-parameters for NAE and AE independently. To do this, we carried out 100 simulations for each meta-parameter combination and averaged the outcomes. We plot here the optima found, together with error bars. The graph shows that in this toy search space, AE is never worse and is significantly better for larger D (note the broad range of the vertical axis).


The findings provide circumstantial evidence in favor of our suspicion that aging may help navigate noise (Discussion Section), suggesting that attempting to verify this with more generality may be an interesting direction for future work.

Supplement D: Additional AmoebaNets

Figure D-1: Architectures of overall model and cells. From left to right: outline of the overall model [53] and diagrams for the cell architectures discovered by evolution: AmoebaNet-B, AmoebaNet-C, and AmoebaNet-D. The three normal cells are on the top row and the three reduction cells are on the bottom row. The labeled activations or hidden states correspond to the cell inputs (“0” and “1”) and the cell output (“7”).


In the Discussion Section, we briefly mentioned three additional models, AmoebaNet-B, AmoebaNet-C, and AmoebaNet-D. While all three used the aging evolution algorithm presented in the main text, there were some differences in the experimental setups: AmoebaNet-B was obtained through platform-aware architecture search; AmoebaNet-C was selected with a pareto-optimal criterion; and AmoebaNet-D involved multi-stage search, including manual extrapolation of the evolutionary process. Below we describe each of these models and the methods that produced them.


AmoebaNet-B was evolved by running experiments directly on Google TPUv2 hardware, since this was the target platform for its final evaluation. In the main text, the architecture had been discovered on GPU and only the largest model was evaluated on TPUs; here, in contrast, we performed the full process on TPUs. This platform-aware approach allows the evolutionary search to optimize even hardware-dependent aspects of the final accuracy, such as optimizations carried out by the compiler. The search setup was as in the main text, except that it used the larger SP-II space of Supplement A and trained larger models (F=32) for longer (50 epochs). The selection of the top model was as follows. We picked K=100 models from the experiment. To do this, we binned the models by their number of parameters to cover the range, using B bins, and from each bin took the top K/B models by validation accuracy. We then augmented all models to N=6 and F=32 and selected the one with the top validation accuracy.
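The binned candidate selection can be sketched as follows. The (param_count, valid_accuracy, model) representation and the function name are illustrative, not the actual selection pipeline.

```python
def select_candidates(models, num_bins, k):
    """Bin models by parameter count so the whole range is covered, then
    take the top k/num_bins models per bin by validation accuracy.
    `models` is a list of (param_count, valid_accuracy, model) tuples."""
    lo = min(m[0] for m in models)
    hi = max(m[0] for m in models)
    width = (hi - lo) / num_bins or 1  # guard against a degenerate range
    bins = [[] for _ in range(num_bins)]
    for m in models:
        idx = min(int((m[0] - lo) / width), num_bins - 1)
        bins[idx].append(m)
    per_bin = k // num_bins
    selected = []
    for b in bins:
        b.sort(key=lambda m: m[1], reverse=True)  # best accuracy first
        selected.extend(b[:per_bin])
    return selected
```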


AmoebaNet-C was discovered in the experiments described in the main text (see Methods and Methods Details sections). Instead of selecting the model with the highest validation accuracy at the end of the experiments (as done in the main text), we picked a promising model while the experiments were still ongoing. This was done entirely for expediency, so that we could study a model while waiting for the search to complete. AmoebaNet-C was promising in that it stood out in a pareto-optimal sense: it was a high-accuracy outlier for its relatively small number of parameters. As opposed to all other architectures, AmoebaNet-C was selected based on CIFAR-10 test accuracy, because it was intended only to be benchmarked on ImageNet. This process was less methodical than the one used in the main text, but because the model has been cited in the literature, we include it here for completeness.

AmoebaNet-D was obtained by manually modifying AmoebaNet-B by extrapolating evolution. To do this, we studied the progress of an experiment and identified which mutations were still causing improvements in fitness at the later stages of the process. By inspection, we found these mutations to be: replacing a 3x3 separable (sep.) convolution (conv.) with a 1x7 followed by 7x1 conv. in the normal cell, replacing a 5x5 sep. conv. by a 1x7 followed by 7x1 conv. in the reduction cell, and replacing a 3x3 sep. conv. with 3x3 avg. pool in the reduction cell. Additionally, we reduced the numeric precision from 32-bit to 16-bit floats, and set a learning rate schedule of step-wise decay, reducing by a factor of 0.88 every epoch. We trained for 35 epochs in total. To submit to Stanford DAWNBench (see Outcome section), we used N=2 and F=256.
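The step-wise decay schedule amounts to multiplying the learning rate by 0.88 after every epoch. A minimal sketch follows; the base rate is an illustrative parameter, as its value is not restated here.

```python
def learning_rate(epoch, base_lr, decay=0.88):
    """Step-wise decay used for AmoebaNet-D training: reduce the rate by
    a factor of 0.88 every epoch. base_lr is an illustrative placeholder."""
    return base_lr * decay ** epoch
```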


Figure D-1 presents all three model architectures. We refrain from benchmarking these here. Instead, in the Outcome Section below, we will refer the reader to results presented elsewhere.


In this supplement we have described additional evolutionary experiments that led to three new models. Such experiments were intended mainly to search for better models. Due to the resource-intensive nature of these methods, we forewent ablations and baselines in this supplement. For a more empirically rigorous approach, please refer to the process that produced AmoebaNet-A in the main text.

AmoebaNet-B had set a new state of the art on CIFAR-10 (2.13% test error) in a previous preprint of this paper (version 1, with the same title, on arXiv) after being trained with cutout, but has since been superseded.

AmoebaNet-C had set the previous state-of-the-art top-1 accuracy on ImageNet after being trained with advanced data augmentation techniques in [11].

AmoebaNet-D won the Stanford DAWNBench competition for lowest training cost on ImageNet. The goal of this competition category was to minimize the monetary cost of training a model to 93% top-5 accuracy. AmoebaNet-D costs $49.30 to train. This was 16% better than the second-best model, which was ResNet and which trained on the same hardware. The results were published in [9].