1 Introduction
Most modern neural network models can be thought of as comprising two components: an input component that converts raw (possibly categorical) input data into floating point values, and a representation learning component that combines the outputs of the input component and computes the final output of the model. Designing neural network architectures in an automated, data-driven manner (AutoML) has attracted a lot of research interest since the publication of [21]. However, previous research in this area has primarily focused on automated design of the representation learning component, and little attention has been paid to the input component. This is because most research has been conducted on image understanding problems [15, 22, 19, 12], where the representation learning component is very important to model performance, while the input component is trivial since the image pixels are already in floating point form.
For large scale recommendation problems commonly encountered in industry, the situation is quite different. While the representation learning component is important, the input component plays an even more critical role in the model. This is because many recommendation problems involve categorical features with large cardinality, and the input component assigns embedding vectors to each item of these discrete features. This results in a huge number of embedding parameters in the input component, which dominate both the size and the inductive bias of the model. For example, the YouTube video recommendation model [7] uses a video ID vocabulary of size 1 million, with 256-dimensional embedding vectors for each ID. This means 256 million parameters are used just for the video ID feature, and the number grows quickly as more discrete features are added. In contrast, the representation learning component consists of only three fully connected layers. So the number of model parameters is heavily concentrated in the input component, which naturally has high impact on model performance. In practice, despite their importance, vocabulary and embedding sizes for discrete features are often selected heuristically, by trying out many models with different manually crafted configurations. Since these models are usually large and expensive to train, such an approach is computationally intensive and may yield suboptimal results.
In this paper, we propose Neural Input Search (NIS), a novel approach that automatically finds embedding and vocabulary sizes for each discrete feature in the model's input component. We create a search space consisting of a collection of Embedding Blocks, where each combination of blocks represents a different vocabulary and embedding configuration. The optimal configuration is searched for in a single training run, using a reinforcement learning algorithm similar to ENAS [15]. Moreover, we propose a novel type of embedding, which we call Multisize Embedding (ME). ME allows allocating larger embedding vectors to more common or predictive feature items, and smaller vectors to less common or predictive ones. This is in contrast to the commonly employed approach, which we call Singlesize Embedding (SE), where embeddings of the same size are used for all items in the vocabulary. We argue that SE is an inefficient use of the model's capacity and training data: frequent or highly predictive items need a large embedding dimension to encode their nuanced relations with other items, but training good embeddings of the same size for long-tail items may take too many epochs due to their rarity in the training set, and when training data is limited, large embeddings for rare items can overfit. With ME, given the same model capacity, we can cover more items in the vocabulary, while reducing the training data size and computation cost required to train good embeddings for long-tail items.
We demonstrate the effectiveness of NIS at finding good vocabulary and embedding size configurations for both SEs and MEs through experiments on two common types of recommendation problems, namely retrieval and ranking, using data collected from our company's products. In our experiments, NIS automatically finds configurations that yield relative improvements in Recall@1 and ROC-AUC over well-established, manually crafted baselines, in a single training run.
2 Related Work
Neural Architecture Search (NAS) has been an active research area since [21], which took a Reinforcement Learning approach that requires training thousands of candidate models to convergence. Due to its resource-intensive nature, much subsequent research has focused on developing cheaper NAS methods. One active direction is to design a large model that connects smaller model components, so that different candidate architectures can be expressed by selecting a subset of the components; the optimal set of components (and thus the architecture) is learned in a single training run. For example, ENAS [15] uses a controller to sample the sub-models, and SMASH [3] generates weights for sampled networks using a hypernetwork. DARTS [12] and SNAS [19] take a differentiable approach by representing each connection as a weight, which is optimized with backpropagation. A similar approach, combined with ScheduledDropPath [22] on the weights, is taken in [2] and [5]. Luo et al. [13] take another approach by mapping neural architectures into an embedding space, where the optimal embedding is learned and decoded back into the final architecture. Another research direction is to reduce the size of the search space: [16, 20, 11, 4] propose searching for convolution cells, which are later stacked repeatedly into a deep network. Zoph et al. [22] developed the NASNet architecture and showed that cells learned on smaller datasets can achieve good results even on larger datasets in a transfer learning setting. MnasNet [17] proposed a search space comprised of a hierarchy of convolution cell blocks, where cells in different blocks are searched separately and may thus have different structures.
Almost all previous NAS work has focused on finding the optimal representation learning component for image/video understanding problems. For large scale recommendation problems, great results have also been reported by leveraging advanced representation learning components, such as CNNs [10, 18] and RNNs [1, 8]. However, the input component, although it contains a great portion of the model parameters due to large embeddings, has frequently been designed heuristically across industry, e.g. at YouTube [7], Google Play [6], and Netflix [9]. Our work, to the best of our knowledge, is the first to bring automated neural network design to the input component of large scale recommendation models.
3 Neural Input Search
3.1 Definitions and Notations
We assume that the model input consists of a set of M categorical features F_1, ..., F_M. Each input example can contain any number of values per feature. For each feature F_i, we have a list of its possible values, sorted in decreasing order of frequency of occurrence in the dataset. This list implicitly maps each feature value to an integer: we refer to this list as a vocabulary. An embedding variable E is a trainable matrix. If its shape is v × d, then v is referred to as the vocabulary size and d as the embedding dimension. For any 0 ≤ i < v, we use E[i] to refer to the i-th row of the embedding matrix E, i.e. the embedding vector of the i-th item within the vocabulary. Throughout the paper, we use B to refer to our 'memory budget': the total number of floating point values the embedding matrices of the model can use. A v × d embedding matrix uses v · d values.
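To make the vocabulary definition concrete, the following is a minimal Python sketch (toy data; the helper name `build_vocab` is ours) of how a frequency-sorted vocabulary implicitly maps feature values to integer ids, with head items receiving the smallest ids:

```python
from collections import Counter

def build_vocab(values):
    """Sort feature values by descending frequency of occurrence; the
    position in the sorted list is the integer id of the value, so the
    most frequent (head) items get the smallest ids."""
    counts = Counter(values)
    vocab = [v for v, _ in counts.most_common()]
    return {v: i for i, v in enumerate(vocab)}

# Toy data: "apple" occurs most often, so it gets id 0.
ids = build_vocab(["apple", "pear", "apple", "fig", "apple", "pear"])
print(ids)  # {'apple': 0, 'pear': 1, 'fig': 2}
```

An embedding matrix for this vocabulary with d = 4 would then have shape 3 × 4 and use 12 floats of the memory budget B.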
3.2 Neural Input Search Problems
We start by introducing our first proposed Neural Input Search problem, based on the regular embedding matrix, which we call Singlesize Embedding:
Singlesize Embedding (SE)
A singlesize embedding is a regular embedding matrix with shape v × d, where each of the v items within the vocabulary is represented as a d-dimensional vector. As stated in Section 1, most previous works use SEs to represent discrete features, and the values of v and d for each feature are selected in a heuristic manner, which can be suboptimal. Below we propose a Neural Input Search problem, namely NIS-SE, for automatically finding the optimal SE for each feature; the approach for solving this problem is introduced in Section 3.3.
Problem 1 (NIS-SE)
Find a vocabulary size v_i and embedding dimension d_i for each feature F_i to maximize the objective function value of the resulting neural network, subject to: Σ_{i=1}^{M} v_i · d_i ≤ B.
The problem involves two tradeoffs:

Memory budget between features: More useful features should get a higher budget.

Memory budget between vocabulary size and embedding dimension within each feature.
A large vocabulary for a feature gives us higher coverage, letting us include tail items as input signal. A large embedding dimension improves our predictions for head items, since head items have more training data and larger embeddings can encode more nuanced information. SE makes it difficult to simultaneously obtain high coverage and high quality embeddings within the memory budget. To overcome this difficulty, we introduce a novel type of embedding, namely Multisize Embedding.
Multisize Embedding (ME)
Multisize Embedding allows different items in the vocabulary to have embeddings of different sizes. It lets us use large embeddings for head items and small embeddings for tail items; it makes sense to spend fewer parameters on tail items, as they have less training data. The vocabulary and embedding size for a variable is now given by a Multisize Embedding Spec (MES). An MES is a list of pairs [(v_1, d_1), ..., (v_K, d_K)] such that v_k ≥ 1 for each k and d_1 > d_2 > ... > d_K. This can be interpreted as: the v_1 most frequent items have embedding dimension d_1, the next v_2 most frequent items have embedding dimension d_2, and so on. The total vocabulary size is v = Σ_{k=1}^{K} v_k. When K = 1, an ME is equivalent to an SE.
Instead of having only one embedding matrix as in an SE, we create one embedding matrix E_k of shape v_k × d_k for each 1 ≤ k ≤ K. Moreover, a trainable projection matrix P_k of shape d_k × d_1 is created for each k, which maps a d_k-dimensional embedding to the d_1-dimensional space. This allows downstream reduction operations to be conducted in a common d_1-dimensional space. Define V_0 = 0 and V_k = Σ_{k'=1}^{k} v_{k'} for 1 ≤ k ≤ K to be the cumulative vocabulary size of the first k embedding matrices; then the ME for the i-th item in the vocabulary is defined as

e_i = E_k[i − V_{k−1}] P_k,

where k is chosen such that V_{k−1} ≤ i < V_k, and clearly e_i is d_1-dimensional. We remind the reader that E_k[j] represents the j-th row of the matrix E_k.
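The ME lookup just defined can be sketched in plain Python. This is a toy illustration under assumed values; the names `me_lookup` and `project` and all matrix entries are ours:

```python
def project(vec, P):
    """Row-vector x matrix product: maps a d_k-dim vector to d_1 dims
    via the projection matrix P of shape d_k x d_1."""
    return [sum(v * P[r][c] for r, v in enumerate(vec)) for c in range(len(P[0]))]

def me_lookup(i, mes, E, P):
    """Multisize Embedding lookup: find the block k whose vocabulary range
    [V_{k-1}, V_k) contains item i, then project its d_k-dim row into the
    common d_1-dim space."""
    start = 0
    for k, (v_k, _) in enumerate(mes):
        if i < start + v_k:
            return project(E[k][i - start], P[k])
        start += v_k
    raise IndexError("item id outside total vocabulary")

# MES [(2, 2), (3, 1)]: 2 head items get 2-dim embeddings, 3 tail items 1-dim.
mes = [(2, 2), (3, 1)]
E = [[[1.0, 2.0], [3.0, 4.0]],   # E_1: shape 2 x 2
     [[5.0], [6.0], [7.0]]]      # E_2: shape 3 x 1
P = [[[1.0, 0.0], [0.0, 1.0]],   # P_1: 2 x 2 identity
     [[0.5, 0.5]]]               # P_2: 1 x 2
print(me_lookup(0, mes, E, P))   # [1.0, 2.0]  (head item, identity projection)
print(me_lookup(2, mes, E, P))   # [2.5, 2.5]  (first tail item: 5.0 * [0.5, 0.5])
```

Note that every returned vector is d_1-dimensional regardless of which block the item lives in, which is exactly what makes MEs a drop-in replacement for SEs downstream.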
With an appropriate MES for each feature, ME is able to achieve high coverage of tail items and high quality representations of head items at the same time. However, manually finding the optimal MES for every feature is very hard, necessitating an automated approach for searching for the right MESs. Below we introduce the Neural Input Search problem with Multisize Embedding, namely NIS-ME; the approach for solving it is introduced in Section 3.3.
Problem 2 (NIS-ME)
Find an MES [(v_{i1}, d_{i1}), ..., (v_{iK_i}, d_{iK_i})] for each feature F_i to maximize the objective function value of the resulting neural network, subject to: Σ_{i=1}^{M} Σ_{k=1}^{K_i} v_{ik} · d_{ik} ≤ B.
MEs can be used as a direct replacement for SEs in any model that uses embeddings. Typically, given a set of vocabulary IDs I, each element i ∈ I is mapped to its corresponding SE, followed by one or more reduce operations over these SEs. For example, a commonly used reduction operation is bag-of-words (BOW), where the embeddings are summed or averaged. To see how MEs can directly replace SEs in this case, the ME version of BOW, which we call M-BOW, is given by

e(I) = Σ_{i ∈ I} e_i,

where the MEs are summed. This is illustrated in Figure 1. Note that for the i's whose block indices k are equal, it is more efficient to sum the embeddings before applying the projection matrix P_k.
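The M-BOW reduction, including the sum-before-projection optimization mentioned above, can be sketched as follows (toy values; the name `mbow` is ours). Because projection is linear, summing raw rows within a block and projecting once per block gives the same result as projecting every item separately, at lower cost:

```python
def mbow(ids, mes, E, P):
    """Multisize bag-of-words: group item ids by embedding block, sum the
    raw d_k-dim rows within each block, apply each projection P_k once,
    and sum the projected d_1-dim results."""
    d1 = mes[0][1]            # common output dimension d_1
    out = [0.0] * d1
    start = 0
    for k, (v_k, d_k) in enumerate(mes):
        rows = [E[k][i - start] for i in ids if start <= i < start + v_k]
        if rows:
            summed = [sum(col) for col in zip(*rows)]       # sum before projecting
            proj = [sum(summed[r] * P[k][r][c] for r in range(d_k))
                    for c in range(d1)]
            out = [a + b for a, b in zip(out, proj)]
        start += v_k
    return out

# Same toy MES as before: 2 head items (2-dim), 3 tail items (1-dim).
mes = [(2, 2), (3, 1)]
E = [[[1.0, 2.0], [3.0, 4.0]], [[5.0], [6.0], [7.0]]]
P = [[[1.0, 0.0], [0.0, 1.0]], [[0.5, 0.5]]]
print(mbow([0, 2, 3], mes, E, P))  # [6.5, 7.5]
```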
3.3 Neural Input Search Approach
We now detail our method for solving Problems 1 and 2. As stated in the introduction, most large scale recommendation models are very expensive to train, so it is desirable to solve each of these problems in one training run. To achieve this goal, we leverage a variant of ENAS [15]: we develop a novel search space in the input component of the model, which contains the SEs or MEs we want to search over. A separate controller makes choices to pick an SE or ME for each discrete feature in each step. The selected SEs or MEs are trained together with the rest of the main model (excluding the controller). In addition, we use the feedforward pass of the main model to compute a reward (a combination of accuracy and memory cost, detailed in Section 3.3.2) for the controller's choices, and the reward is used to train the controller variables with the A3C [14] policy gradient method.
3.3.1 Search Space
We now describe the search space, which is a key novel ingredient of our work.
Embedding Blocks
For a given feature with vocabulary size v, we create an S × T grid of matrices Ē_{s,t} with 1 ≤ s ≤ S and 1 ≤ t ≤ T, where the (s, t)-th matrix is of size v̄_s × d̄_t, such that Σ_{s=1}^{S} v̄_s = v and Σ_{t=1}^{T} d̄_t = d. Here d is the maximum allowed embedding size for any item within the vocabulary. We call these matrices Embedding Blocks. This can be thought of as discretizing an embedding matrix of size v × d into S × T sub-matrices. As an example, suppose v = 10M ('M' stands for million) and d = 256; we may discretize the rows into five chunks (1M, 2M, 2M, 2M, 3M) and the columns into four chunks (64, 64, 64, 64), which results in 5 × 4 = 20 Embedding Blocks, as illustrated in Figure 1(a). Moreover, a projection matrix P_t of size d̄_t × d is created for each 1 ≤ t ≤ T, in order to map each d̄_t-dimensional embedding to a common d-dimensional space for downstream reduction operations. Clearly we should have d ≪ v̄_s for all s, so that the projection matrices cost negligible memory. The Embedding Blocks are the building blocks of the search space that allow the controller to sample different SEs or MEs at each training step.
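The grid construction can be sketched in a few lines of Python (zero-initialized nested lists stand in for trainable variables; `make_embedding_blocks` is our name, and the chunk sizes are scaled-down stand-ins for the ones in the example above):

```python
def make_embedding_blocks(row_chunks, col_chunks):
    """Discretize a v x d embedding matrix into an S x T grid of
    zero-initialized Embedding Blocks, where block (s, t) has shape
    row_chunks[s] x col_chunks[t]."""
    return [[[[0.0] * c for _ in range(r)] for c in col_chunks]
            for r in row_chunks]

# Five row chunks and four column chunks, as in the text's example
# (scaled down from millions of rows for illustration).
blocks = make_embedding_blocks([1, 2, 2, 2, 3], [4, 4, 4, 4])
print(len(blocks), len(blocks[0]))              # 5 4  -> 20 Embedding Blocks
print(len(blocks[4][0]), len(blocks[4][0][0]))  # 3 4  -> block (5, 1) is 3 x 4
```

The row chunks sum to the total vocabulary size v and the column chunks sum to the maximum embedding dimension d, so the grid covers the full v × d matrix exactly once.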
Controller Choices
The controller is a neural network that samples different SEs or MEs from softmax probabilities. Its exact behavior depends on whether we are optimizing over SEs or MEs. Below we describe the controller's behavior on one feature, dropping the feature subscript for notational convenience.

SE: To optimize over SEs, at each training step the controller samples one pair (s̄, t̄) from the set {(s, t) : 1 ≤ s ≤ S, 1 ≤ t ≤ T} ∪ {(0, 0)}. For a selected (s̄, t̄), only the Embedding Blocks Ē_{s,t} with s ≤ s̄ and t ≤ t̄ are involved in that particular training step. Therefore, the controller effectively picks an SE, such as the one within the red rectangle in Figure 1(b). The embedding of the i-th item in the vocabulary in this step is calculated as

e_i = Σ_{t=1}^{t̄} Ē_{s,t}[i − V_{s−1}] P_t

for all i < V_{s̄}, where V_0 = 0, V_s = Σ_{s'=1}^{s} v̄_{s'} is the cumulative vocabulary size, and s is such that V_{s−1} ≤ i < V_s. Defining D_t = Σ_{t'=1}^{t} d̄_{t'} to be the cumulative embedding size, it is clear that e_i is equivalent to using a D_{t̄}-dimensional embedding to represent the i-th item, followed by a projection to the d-dimensional space, where the projection matrix is the concatenation of P_1, ..., P_{t̄} along the rows. Any item whose vocabulary id is at least V_{s̄} is considered out-of-vocabulary and is handled specially; a commonly employed approach is to use the zero vector as its embedding. The memory cost (the number of parameters) induced by this choice of SE is therefore C = V_{s̄} × D_{t̄} (the projection matrix cost is ignored, since d ≪ v̄_s for all s).
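The per-item computation for a sampled SE choice can be sketched as follows. This is a toy illustration with assumed block contents (`se_embed` is our name); the grid here has two row chunks and two column chunks of width 1, so d = 2:

```python
def se_embed(i, s_bar, t_bar, blocks, projections, row_chunks):
    """Embedding of item i under the controller's SE choice (s_bar, t_bar):
    sum the projected rows of blocks (s, t) for t <= t_bar, where s is the
    row chunk containing i. Ids >= V_{s_bar} are out-of-vocabulary and get
    the zero vector, as does every item when (0, 0) is selected."""
    V = [0]
    for v in row_chunks:                   # cumulative vocabulary sizes V_s
        V.append(V[-1] + v)
    d = len(projections[0][0])             # common output dimension d
    if s_bar == 0 or i >= V[s_bar]:
        return [0.0] * d
    s = next(j for j in range(len(row_chunks)) if V[j] <= i < V[j + 1])
    e = [0.0] * d
    for t in range(t_bar):                 # sum of projected block rows
        row = blocks[s][t][i - V[s]]
        e = [e[c] + sum(row[r] * projections[t][r][c] for r in range(len(row)))
             for c in range(d)]
    return e

# Toy grid: row chunks (1, 2), column chunks (1, 1); d = 2.
blocks = [[[[1.0]], [[2.0]]],
          [[[3.0], [4.0]], [[5.0], [6.0]]]]
projections = [[[1.0, 0.0]], [[0.0, 1.0]]]
print(se_embed(0, 2, 2, blocks, projections, [1, 2]))  # [1.0, 2.0]
print(se_embed(1, 1, 2, blocks, projections, [1, 2]))  # [0.0, 0.0] (OOV)
```

The memory cost of the choice is then V[s_bar] times the cumulative column width D_{t̄}, matching the formula C = V_{s̄} × D_{t̄} above.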
If the pair (0, 0) is selected in a training step, it is equivalent to removing the feature from the model: the zero embedding is used for all items of this feature within this training step, and the corresponding memory cost is 0. As the controller explores different SEs, it is trained based on the reward induced by each selection, and it eventually converges to the optimal one, as described in Section 3.3.3. If it converges to the pair (0, 0), this feature should be removed.
ME: When optimizing over MEs, instead of making a single choice, the controller makes a sequence of T choices, one for each 1 ≤ t ≤ T. The t-th choice is an s̄_t ∈ {0, 1, ..., S}. If s̄_t > 0, only the Embedding Blocks Ē_{s,t} with s ≤ s̄_t are involved in that particular training step; if s̄_t = 0, the whole d̄_t-dimensional slice of the embedding is removed for all items within the vocabulary. Therefore, the controller picks a custom subset (not just a sub-grid) of Embedding Blocks, which comprises an MES. This is visually illustrated in Figure 1(c), where the first column of blocks is utilized by only the head items, the second column is utilized by all items, the third column is not used by any item, and the last column has the same utilization as the first. As a result, the head items in the vocabulary are allocated larger embeddings than the tail items; in other words, an MES is realized at this training step.
Mathematically, let T_i = {t : s̄_t > 0 and i < V_{s̄_t}}; then the embedding of the i-th item in the vocabulary in this step is calculated as

e_i = Σ_{t ∈ T_i} Ē_{s,t}[i − V_{s−1}] P_t

for all i whose T_i is nonempty, where s is such that V_{s−1} ≤ i < V_s, and e_i is a d-dimensional zero vector if T_i is empty. The calculation of the memory cost is straightforward: C = Σ_{t : s̄_t > 0} V_{s̄_t} × d̄_t.
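The ME case can be sketched with the same toy grid used for the SE case (assumed values; `me_embed` is our name). Each block column t contributes to item i only when s̄_t > 0 and i falls within the first s̄_t row chunks:

```python
def me_embed(i, s_bars, blocks, projections, row_chunks):
    """Embedding of item i under the controller's per-column ME choices
    s_bars: column t contributes iff s_bars[t] > 0 and i < V_{s_bars[t]};
    items covered by no column get the zero vector."""
    V = [0]
    for v in row_chunks:                   # cumulative vocabulary sizes V_s
        V.append(V[-1] + v)
    d = len(projections[0][0])
    e = [0.0] * d
    for t, sb in enumerate(s_bars):
        if sb == 0 or i >= V[sb]:
            continue                       # column removed or item not covered
        s = next(j for j in range(len(row_chunks)) if V[j] <= i < V[j + 1])
        row = blocks[s][t][i - V[s]]
        e = [e[c] + sum(row[r] * projections[t][r][c] for r in range(len(row)))
             for c in range(d)]
    return e

def me_cost(s_bars, projections, row_chunks):
    """Memory cost C = sum over active columns t of V_{s_bars[t]} * d_t."""
    V = [0]
    for v in row_chunks:
        V.append(V[-1] + v)
    return sum(V[sb] * len(projections[t])
               for t, sb in enumerate(s_bars) if sb > 0)

# Toy grid: row chunks (1, 2), column chunks (1, 1); d = 2.
blocks = [[[[1.0]], [[2.0]]],
          [[[3.0], [4.0]], [[5.0], [6.0]]]]
projections = [[[1.0, 0.0]], [[0.0, 1.0]]]
# First column covers all 3 items, second column covers only the head item.
print(me_embed(1, [2, 1], blocks, projections, [1, 2]))  # [3.0, 0.0]
print(me_cost([2, 1], projections, [1, 2]))              # 4
```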
3.3.2 Reward
As the main model is trained with the controller's choices of SEs or MEs, the controller is trained with a reward calculated from feedforward passes of the main model on validation set examples. Our reward can be written as R = O − λ · L_C, where O represents the (potentially non-differentiable) objective that we want to optimize, L_C is the cost-loss, a regularization term that forces the controller to keep the memory cost within our budget, and λ trades off the two terms.
Objective: There are two types of problems commonly encountered in recommendation tasks, namely retrieval problems and ranking problems [7].

Retrieval problems aim at finding the most relevant items out of a potentially very large corpus, given the model's input. The number of retrieved items is usually in the hundreds, while the corpus size N is in the millions. This is usually achieved by a softmax layer with N neurons, and the items with the highest softmax probabilities are used as the results. The objective commonly optimized for is the model's Recall@1. However, since N is large, computing the exact Recall@1 is too expensive to do once per controller training step, so we need a cheap proxy for it. One possibility is the sampled softmax loss; however, we observed that this is not a good proxy for Recall@1: using very large vocabularies with very small embeddings gives the best sampled softmax loss values, but not the best Recall@1. Instead, we approximate Recall@1 with Sampled Recall@1, i.e. we only use the sampled negatives when calculating the recall. Sampled Recall@1 is thus the fraction of times the logit of the true label is higher than the logits of all the sampled negative labels. We observe that Sampled Recall@1 is a good proxy for Recall@1, and we use it as the O term of our reward for retrieval problems. Since Sampled Recall@1 can be calculated for each validation example, given a batch of examples the controller can make a different choice for each example, and each choice gets trained based on its own reward.

Ranking problems aim at finding the best ranking of a set of items. Such problems involve binary labels (e.g. whether the video is watched or not) trained with cross entropy loss. A widely used objective for ranking is the Area Under the Receiver Operating Characteristic Curve (ROC-AUC). However, ROC-AUC can only be computed over a collection of examples. Therefore, given a batch of examples, the controller can make only one choice per collection; each choice applies to all examples in its collection and results in a single reward. The controller thus explores different choices more slowly and potentially converges more slowly in this setting. An alternative is to use the negative cross entropy loss as the objective; since it can be calculated per example, the controller can explore different choices with fewer examples. However, we observe that the controller converges to better results when O is ROC-AUC.

Cost Loss: In Section 3.3.1 we defined a cost term C_i based on the controller's choice for each feature F_i (we dropped the subscript i in Section 3.3.1 to avoid cluttered notation). We compute the total cost C = Σ_{i=1}^{M} C_i, and define the cost-loss L_C as a penalty on the amount by which C exceeds the predefined memory budget B. Note that the cost-loss can be combined with other regularization losses too, e.g. to limit the number of floating point operations used by the model.
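The two reward ingredients can be sketched as follows. The hinge form of the cost-loss and the λ weighting are our assumptions for illustration; the text only requires a penalty that keeps the total cost C within the budget B:

```python
def sampled_recall_at_1(true_logits, neg_logits):
    """Fraction of examples whose true-label logit beats every sampled
    negative logit: a cheap per-example proxy for Recall@1."""
    hits = sum(1 for t, negs in zip(true_logits, neg_logits)
               if all(t > n for n in negs))
    return hits / len(true_logits)

def reward(objective, total_cost, budget, lam=1.0):
    """Controller reward R = O - lam * L_C. The hinge cost-loss
    max(0, C/B - 1) is an assumed form of the budget penalty."""
    cost_loss = max(0.0, total_cost / budget - 1.0)
    return objective - lam * cost_loss

# Example 1 hits (2.0 beats 1.0 and 1.5); example 2 misses (0.5 < 1.0).
print(sampled_recall_at_1([2.0, 0.5], [[1.0, 1.5], [1.0]]))  # 0.5
print(reward(1.0, 90, 100))   # 1.0  (under budget: no penalty)
print(reward(1.0, 150, 100))  # 0.5  (50% over budget: 1.0 - 0.5)
```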
3.3.3 Training
As stated above, the main model is trained in the regular way using training set examples, where sampled softmax loss is used for retrieval problems and cross entropy loss is used for ranking problems. In addition, we use validation set examples to compute rewards (Section 3.3.2), and use the A3C algorithm [14] to train the controller to maximize the reward.
Warm-up Phase: If we start training the controller from the very first step, we get a vicious cycle: the Embedding Blocks not selected by the controller don't get enough training and hence give bad rewards, resulting in them being selected even less in the future. To prevent this, the first several training steps constitute a warm-up phase in which we train all the Embedding Blocks and leave the controller variables fixed. The controller variables are initialized randomly, so the initial controller makes approximately uniformly random choices. The warm-up phase ensures that all Embedding Blocks get some training. After the warm-up phase, we switch to training the main model and the controller in alternating steps using A3C.
Baseline: As part of the A3C algorithm, we use a baseline network to predict the expected reward prior to each controller choice (but using the choices that have already been made). The baseline network has the same structure as the controller network, but has its own variables, which are trained alongside the controller variables using the validation set. We then subtract the baseline from the reward at each step to compute the advantage, which is used to train the controller.
4 Experiments
We conduct experiments on two large scale recommendation problems, one for retrieval and another one for ranking; both are based on real data collected from our company’s products.
Query Suggest Retrieval Problem
This problem is to suggest the next query the user would like to type in one of our company's Search products, given the last query they issued. The most commonly issued queries, numbering in the millions, are used in this experiment; in other words, we want to retrieve a small set of queries that the user would like to type from these millions of queries. The input features to the model include the full query, as well as query unigrams, bigrams and trigrams from the previous query. We used an SE to represent each of the features; the SEs are concatenated and fed into the representation learning component of the model, which contains several fully connected hidden layers with ReLU activations. The output layer is a softmax layer with millions of neurons, each of which is associated with a unique query from the label query vocabulary. For the baseline, we tried different combinations of vocabulary size v and embedding dimension d within the total memory budget B, using a v × d embedding for each feature, and used the best performing model as our baseline.

To study the performance of NIS, we used it to find the optimal SE for each of the features under the same total memory budget B. We also constructed a model with MEs used as replacements for SEs, while the rest of the model (i.e. the representation learning component and the output layer) remained the same, and used NIS to find the optimal MEs for all features. Sampled Recall@1 is used as the objective for the controller, where negative examples are sampled from the full label vocabulary.
App Install Ranking Problem
This problem aims at ranking a set of Apps by the likelihood that they will be installed, where the data comes from one of our company's App store products. The dataset consists of (Context, App, Label) tuples, where the Label is binary, indicating whether or not the App was installed. A number of discrete features are used to represent the Context and the App, such as App ID, Developer ID, App Title, etc. The vocabulary sizes of the discrete features vary from hundreds to millions. Similar to the retrieval problem, the SEs of the features are concatenated and fed into fully connected layers, and cross entropy loss is used for this ranking problem. For this problem, the App store product had already constructed a highly optimized baseline with a tuned vocabulary size and embedding dimension for each SE; we used the same configuration as our baseline model.
Similar to the retrieval problem, we used NIS to find the optimal SEs given the same memory budget as the baseline model. Moreover, a second model with all SEs replaced by MEs was constructed, and we again used NIS to find the optimal MEs for all features. Here, the objective for the controller is ROC-AUC, where each controller decision is applied to a collection of validation set examples, and the ROC-AUC is calculated from these examples.
In all our experiments, Embedding Blocks are constructed for each feature by discretizing the feature's total vocabulary size v into row chunks v̄_s and discretizing the maximum embedding dimension d into column chunks d̄_t, where d is a heuristic value that works well in practice. Note that there is nothing preventing us from setting d to a larger value and discretizing it into more buckets, if there is doubt about the effectiveness of the heuristically selected d. Each dataset is split into training, validation and test sets.
Table 1: Results on the query suggestion retrieval problem.

Model     | Cost (Million Floats) | Recall@1 (%) | Recall@5 (%)
----------|-----------------------|--------------|-------------
Baseline  | 2560                  | 16.0         | 24.2
NIS-SE    | 2560                  | 16.5         | 24.3
NIS-ME    | 2560                  | 17.1         | 25.7
Table 2: Results on the App install ranking problem.

Model     | Cost (Million Floats) | AUC (%)
----------|-----------------------|--------
Baseline  | 12                    | 72.2
NIS-SE    | 12                    | 73.0
NIS-ME    | 12                    | 73.5
We report the experimental results for these two problems in Table 1 and Table 2. The SEs found by our NIS approach outperform the baseline models in both problems, evidencing that NIS is able to automatically find better vocabulary sizes and embedding dimensions in the SE setting than the approach of choosing these hyperparameters heuristically. Moreover, both baselines involve training one model from scratch for each candidate SE configuration, which is computationally very expensive; in comparison, all our optimal SEs are found in a single training run, which is a much more efficient approach.
In addition, the sophistication of MEs makes it difficult to configure them manually for each feature. Our experimental results show that the MEs automatically searched by our NIS approach outperform even the optimal SEs. Compared to the baseline, our approach achieves a 6.9% and 6.2% relative improvement on Recall@1 and Recall@5 for the retrieval problem, and a 1.8% relative improvement on ROC-AUC for the ranking problem. This not only empirically evidences that MEs are more efficient representations of discrete features than SEs under a memory budget, but also demonstrates that NIS is an efficient approach for finding optimal MEs that yield superior performance to manually configured vocabulary sizes and embedding dimensions.
5 Conclusion
We presented Neural Input Search (NIS), a technique for automatically finding the optimal vocabulary and embedding sizes in the input component of a model. We also introduced Multisize Embedding (ME), a novel type of embedding that achieves high coverage of tail items while keeping accurate representations for head items. We demonstrated the effectiveness of NIS and ME with experiments on large scale retrieval and ranking problems. Our approach achieved relative improvements on Recall@1 and ROC-AUC in only one training run, without increasing the total number of parameters in the model.
References
 [1] T. Bansal, D. Belanger, and A. McCallum. Ask the GRU: Multi-task learning for deep text recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems (RecSys '16), pages 107–114, New York, NY, USA, 2016. ACM.
 [2] G. Bender, P.-J. Kindermans, B. Zoph, V. Vasudevan, and Q. Le. Understanding and simplifying one-shot architecture search. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 550–559, Stockholm, Sweden, 2018.
 [3] A. Brock, T. Lim, J. Ritchie, and N. Weston. SMASH: One-shot model architecture search through hypernetworks. In International Conference on Learning Representations, 2018.
 [4] H. Cai, J. Yang, W. Zhang, S. Han, and Y. Yu. Path-level network transformation for efficient architecture search. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 678–687, Stockholm, Sweden, 2018.
 [5] H. Cai, L. Zhu, and S. Han. ProxylessNAS: Direct neural architecture search on target task and hardware. In International Conference on Learning Representations, 2019.
 [6] H.-T. Cheng, L. Koc, J. Harmsen, T. Shaked, T. Chandra, H. Aradhye, G. Anderson, G. Corrado, W. Chai, M. Ispir, R. Anil, Z. Haque, L. Hong, V. Jain, X. Liu, and H. Shah. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems (DLRS 2016), pages 7–10, New York, NY, USA, 2016. ACM.
 [7] P. Covington, J. Adams, and E. Sargin. Deep neural networks for YouTube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems (RecSys '16), pages 191–198, New York, NY, USA, 2016. ACM.
 [8] T. Donkers, B. Loepp, and J. Ziegler. Sequential user-based recurrent neural network recommendations. In Proceedings of the Eleventh ACM Conference on Recommender Systems (RecSys '17), pages 152–160, New York, NY, USA, 2017. ACM.
 [9] C. A. Gomez-Uribe and N. Hunt. The Netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems, 6(4):13:1–13:19, Dec. 2015.
 [10] D. Kim, C. Park, J. Oh, S. Lee, and H. Yu. Convolutional matrix factorization for document context-aware recommendation. In Proceedings of the 10th ACM Conference on Recommender Systems (RecSys '16), pages 233–240, New York, NY, USA, 2016. ACM.
 [11] C. Liu, B. Zoph, M. Neumann, J. Shlens, W. Hua, L.-J. Li, L. Fei-Fei, A. Yuille, J. Huang, and K. Murphy. Progressive neural architecture search. In The European Conference on Computer Vision (ECCV), September 2018.
 [12] H. Liu, K. Simonyan, and Y. Yang. DARTS: Differentiable architecture search. In International Conference on Learning Representations, 2019.
 [13] R. Luo, F. Tian, T. Qin, E. Chen, and T.-Y. Liu. Neural architecture optimization. In Advances in Neural Information Processing Systems 31, pages 7816–7827. Curran Associates, Inc., 2018.
 [14] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1928–1937, New York, NY, USA, 2016. PMLR.
 [15] H. Pham, M. Guan, B. Zoph, Q. Le, and J. Dean. Efficient neural architecture search via parameter sharing. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4095–4104, Stockholm, Sweden, 2018. PMLR.
 [16] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le. Regularized evolution for image classifier architecture search. CoRR, abs/1802.01548, 2018.
 [17] M. Tan, B. Chen, R. Pang, V. Vasudevan, and Q. V. Le. MnasNet: Platform-aware neural architecture search for mobile. CoRR, abs/1807.11626, 2018.
 [18] A. van den Oord, S. Dieleman, and B. Schrauwen. Deep content-based music recommendation. In Advances in Neural Information Processing Systems 26, pages 2643–2651. Curran Associates, Inc., 2013.
 [19] S. Xie, H. Zheng, C. Liu, and L. Lin. SNAS: Stochastic neural architecture search. In International Conference on Learning Representations, 2019.
 [20] Z. Zhong, J. Yan, W. Wu, J. Shao, and C.-L. Liu. Practical block-wise neural network architecture generation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
 [21] B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. In International Conference on Learning Representations, 2017.
 [22] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learning transferable architectures for scalable image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.