1 Introduction
Graphs are a ubiquitous structure and are widely used in real-life problems, e.g. computational biology (Zitnik and Leskovec, 2017), social networks (Hamilton et al., 2017) and knowledge graphs (Lin et al., 2015). Graph Neural Networks (GNNs) follow a message-passing (node aggregation) scheme to gradually propagate information from adjacent nodes at every layer. However, due to the variety of non-Euclidean data structures, GNNs tend to be less adaptive than traditional convolutional neural networks, so it is common to re-tune the network architecture for each new dataset. For instance, GraphSage
(Hamilton et al., 2017) shows that networks are sensitive to the number of hidden units on different datasets; jumping knowledge networks demonstrate that the optimal concatenation strategy between layers varies across datasets (Xu et al., 2018). Furthermore, designing a new GNN architecture typically involves a considerably larger design space. A single graph block normally comprises multiple connecting sub-blocks, such as linear layers, aggregation, attention, etc.; each sub-block can have multiple candidate operations, providing a large combinatorial architecture search space. The formidable search space and the lack of transferability of GNN architectures present a great challenge in deploying GNNs rapidly to various real-life scenarios.
Recent advances in neural network architecture search (NAS) methods show promising results on convolutional neural networks and recurrent neural networks
(Zoph and Le, 2017; Liu et al., 2019; Casale et al., 2019). NAS methods are also applicable to graph data: recent work uses NAS based on reinforcement learning (RL) for GNNs and achieves state-of-the-art accuracy results
(Gao et al., 2019; Zhou et al., 2019). RL-based NAS, however, has the following shortcomings. First, RL requires a full train-and-evaluate cycle for each architecture considered, making it computationally expensive (Casale et al., 2019). Second, existing GNN search methods focus only on the micro-architecture. For instance, only the activation function, aggregation method, hidden unit size,
etc. of each graph convolutional block are considered in the search. However, it has been observed that performance can be improved if shortcut connections, similar to residual connections in CNNs
(He et al., 2016), are added to adapt neighbourhood ranges for a better structure-aware representation (Xu et al., 2018). This macro-architecture configuration of how blocks connect to each other via shortcuts is not considered by previous graph NAS methods.
To address these shortcomings, we propose a probabilistic dual architecture search. Instead of iteratively evaluating child networks sampled from a parent network using RL, we train a superset of operations with probabilistic priors generated from a NAS controller. The controller learns the probabilistic distributions of candidate operators and picks the most effective one from the superset. For the macro-architecture, we use the Gumbel-sigmoid trick (Jang et al., 2017; Maddison et al., 2017)
to relax discrete decisions to be continuous, so that a set of continuous variables can represent the connections between graph blocks. The proposed probabilistic, gradient-based NAS framework optimises both the micro- and macro-architecture of GNNs. Furthermore, we introduce several tricks to improve both the search quality and speed. First, we design the NAS controller to produce multi-hot decision vectors, reducing the combinatorial micro-architecture search dimensions. Second, we use temperature annealing for the Gumbel-sigmoid to balance between exploration and convergence. Third, our differentiable search is single-path: only a single operation from the superset is evaluated during each training iteration, which reduces the computational cost of NAS to that of normal training. In short, we make the following contributions in this paper:

We propose the first probabilistic dual network architecture search (PDNAS) method for GNNs. The proposed method uses Gumbel-sigmoid to relax the discrete architectural decisions to be continuous for the macro-architecture search.

To our knowledge, this is the first NAS that explores the macro-architecture space. We demonstrate how this helps deeper GNNs achieve state-of-the-art results.

We show several tricks (a multi-hot controller, temperature annealing and single-path search) that improve the NAS search speed and quality.

We present the performance of the networks discovered by PDNAS and show that they achieve superior accuracy and F1 scores in comparison to other hand-designed and NAS-generated networks.
2 Background
2.1 Network Architecture Search (NAS)
DNNs achieve state-of-the-art results on a wide range of tasks, but tuning DNN architectures on custom datasets is increasingly difficult. One challenge is the growing number of different operations that may be employed, e.g.
in the field of computer vision, simple convolutions and fully connected layers
(Krizhevsky et al., 2012) have expanded to include depthwise separable convolutions (Howard et al., 2017), grouped convolutions (Zhang et al., 2017), dilated convolutions (Yu et al., 2017), etc. This opens up a much larger design space for neural network architectures. Network Architecture Search (NAS) seeks to automate the search for the best DNN architecture. Initially, NAS methods employed reinforcement learning (RL) (Zoph and Le, 2017; Tan et al., 2019): a recurrent neural network acts as a controller and maximises the expected accuracy of the search target on the validation dataset. However, each update of the controller requires a few hours to train a child network to convergence, which significantly increases the search time. Alternatively, Liu et al. (2019) proposed Differentiable Architecture Search (DARTS), a purely gradient-based search in which each candidate operation’s importance is scored by a trainable scalar updated using Stochastic Gradient Descent (SGD). Subsequently,
Casale et al. (2019) approached the NAS problem from a probabilistic view, transforming the concrete trainable scalars used by DARTS (Liu et al., 2019) into probabilistic priors and only training a few architectures sampled from these priors at each training iteration. Wu et al. (2019) and Xie et al. (2018) used the Gumbel-softmax trick to relax discrete operation selection to continuous random variables. Existing NAS methods focus mainly on finding optimal operation choices inside each candidate block (the micro-architecture); in our work, we extend the search to consider how blocks are interconnected,
i.e. the network’s macro-architecture.

2.2 NAS for GNNs
While NAS methods have been developed on image and sequence data, only a few recent works have applied them to graph-structured data. Gao et al. (2019) first proposed GraphNAS, an RL-based NAS for graph data. Zhou et al. (2019) used a similar RL-based approach (AutoGNN) with a constrained parameter-sharing strategy. However, both of these NAS methods for graphs focus solely on the micro-architecture space — they search only which operations to apply in individual graph blocks and do not learn how graph blocks connect to each other. Moreover, these methods are RL-based; to fully train the RL controller, they require many iterations of child network training to convergence.
In this work we focus on GNNs applied to node classification tasks, based on Message-Passing Neural Networks (Gilmer et al., 2017). Most of the manually designed architectures proposed for these tasks fall into this category, such as GCN (Kipf and Welling, 2016), GAT (Veličković et al., 2018), LGCN (Gao et al., 2018) and GraphSage (Hamilton et al., 2017).
3 Method
Figure 1: Overview of PDNAS. (a) The controller outputs probabilities for operators within each graph block and probabilities of shortcut connections between graph blocks. (b) Within a graph block, the controller determines which operators are used for each type of operation. (c) Gating functions route connections; solid lines denote input streams into the router while dashed lines denote output streams.
Figure 1(a) shows an overview of PDNAS. In this framework, we formulate the search space for a GNN as a stack of Graph Blocks, with shortcut connections allowing information to skip an arbitrary number of blocks, similar to DenseNet (Huang et al., 2017). A Graph Block is essentially a GNN layer composed of four sub-blocks: a linear layer, an attention layer, an aggregation and an activation function (Figure 1(b)). Each sub-block has a set of candidate operations to search over. A NAS controller determines which operators are active in these sub-blocks during each training iteration. While searching the micro-architecture space of operations within Graph Blocks, PDNAS also searches the macro-architecture space of shortcut connections (Figure 1(c)). Shortcut connections are controlled by gating functions parameterised by a routing probability matrix. We discuss the micro-architecture search of a Graph Block in Section 3.1, the macro-architecture search of shortcut connection routing in Section 3.2 and the dual optimisation of architectural parameters in Section 3.3.
3.1 Micro-Architecture Search
In this work, we consider GNNs based on the message-passing mechanism. In each GNN layer, nodes aggregate attention-weighted messages from their neighbours and combine these messages with their own features. Formally, each GNN layer can be described as:
x_v^{(l+1)} = σ( q( x_v^{(l)}, r({ α_{uv} · f(x_u^{(l)}) : u ∈ N(v) }) ) )    (1)

Here, f is a transformation operation for features; in GNNs, f is typically a linear transformation of the form f(x) = Wx + b. N(v) is the set of neighbouring nodes of node v, and α_{uv} are the attention parameters for messages passed from neighbouring nodes. r is an aggregation operation for the received messages, q is an operation for combining aggregated messages with the features of the current node, and σ is a non-linear activation. For each of the above operations, there are several candidates to search amongst. In this work, we consider the following micro-architecture search space:

Table 1: Candidate attention types considered in the search: Const, GCN, GAT, Sym-GAT, COS, Linear and Gene-Linear.
Transformation function: we formulate f as f(x) = W_2(W_1 x), where W_1 ∈ R^{d_e × d_in} and W_2 ∈ R^{d_out × d_e}. d_in and d_out are the input and output dimensions of f respectively, and d_e is the expansion dimension, similar to Tan et al. (2019). We let d_e be a multiple of d_out, so the search space for f is a set of expansion ratios. While it is possible to also search for the output dimension of f, this incurs large memory costs because a quadratic number of candidate operators is needed per layer: if the number of candidate hidden dimensions is m, each layer would require m × m candidate operators to map every candidate input dimension to every candidate output dimension. In our setup, we thus leave input dimensions as hyperparameters determined via a grid search.

Attention mechanism: the attention parameters α_{uv} are computed by attention functions that may depend on the features of the current node and its neighbours. While any attention function can be included in the set of candidate operators, we use the attention functions that appear in GraphNAS (Gao et al., 2019) and AGNN (Zhou et al., 2019) to make a fair comparison. Table 1 lists all attention functions considered in our search.

Attention head: multi-head attention (Vaswani et al., 2017) means that multiple attention heads are used and computed in parallel. For PDNAS, the number of attention heads is also part of the search space.

Aggregation function: messages from neighbouring nodes are aggregated by the function r. In our experiments, we include three widely-used options.

Combine function: aggregated neighbouring messages are combined with the current node features by the function q. We examined two options: addition and concatenation. Addition simply adds the aggregated messages from neighbouring nodes to the current node features, while concatenation concatenates the aggregated messages with the node features and then processes the result with a multi-layer perceptron (MLP). In practice, we found that one option consistently outperforms the other, and thus removed the search for the combine function from our final implementation.
Activation function: the final output of a GNN layer passes through a non-linear activation function σ. The candidate functions for σ include “None”, “Sigmoid”, “Tanh”, “Softplus”, “ReLU”, “LeakyReLU”, “ReLU6” and “ELU”. Please refer to Appendix A for details of each activation function.
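To make the size of this design space concrete, the candidate sets above can be written down and counted. The sketch below is illustrative only: the specific head counts and aggregation names are placeholder assumptions rather than the paper's exact sets.

```python
# Hypothetical sketch of the per-block micro-architecture search space.
# The concrete candidate sets are illustrative placeholders.
MICRO_SEARCH_SPACE = {
    "attention": ["const", "gcn", "gat", "sym-gat", "cos", "linear", "gene-linear"],
    "heads": [1, 2, 4, 8],                  # placeholder head counts
    "aggregation": ["sum", "mean", "max"],  # placeholder aggregation options
    "activation": ["none", "sigmoid", "tanh", "softplus",
                   "relu", "leaky_relu", "relu6", "elu"],
}

def num_block_configs(space):
    """Number of distinct configurations for a single Graph Block."""
    n = 1
    for choices in space.values():
        n *= len(choices)
    return n

def num_network_configs(space, layers):
    """The combinatorial space grows exponentially with depth."""
    return num_block_configs(space) ** layers
```

Even with these modest placeholder sets, a single block already has hundreds of configurations, and a multi-layer network has exponentially many, which is why an exhaustive search is infeasible.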
A Graph Block is similar to a cell employed in the CNN NAS algorithm DARTS (Liu et al., 2019). DARTS uses a weighted sum to combine the outputs of all candidate operators. In PDNAS, we instead use a hard-max function, which allows only one candidate operator to be active in each training iteration. Let s_{l,i} be the i-th sub-block in Graph Block l, and o_{l,i,j} be the j-th candidate operator for s_{l,i}. The output of s_{l,i} on input x is then computed as:

s_{l,i}(x) = Σ_j 1[j = argmax_{j'} p_{l,i,j'}] · p_{l,i,j} · o_{l,i,j}(x)    (2)

Here, p_{l,i,j} is the probability of the j-th candidate operator of sub-block i in layer l assigned by the NAS Controller. This hard-max approach considerably reduces memory and computational cost, since only one operation is active at any training iteration, whilst still converging, as shown by Wu et al. (2019) and Xie et al. (2018). While the argmax function is non-differentiable, we use a differentiable approximation which ensures that the controller receives learning signals. The operator selection is implemented by casting the argmax result as a one-hot vector to select from the outputs of the candidate operators. We multiply this vector with p_{l,i} to allow gradients to be back-propagated through to the controller. This is the same as adding winner-takes-all to the softmax-weighted summation used in DARTS (Liu et al., 2019), and is also known as single-path NAS. In single-path NAS, only the winning operation is evaluated during each training iteration; the forward and backward passes through the unselected operators are not evaluated. This in turn reduces the computational and memory costs of each iteration to those of normal training.
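The single-path selection described above can be sketched in a few lines of plain Python. `single_path_select` is a hypothetical helper, not the paper's code; in a real implementation the probabilities would be differentiable tensors, so the winning probability factor carries gradients back to the controller.

```python
def single_path_select(probs, candidate_fns, x):
    """Hard-max (single-path) operator selection, as a sketch.

    probs: list of controller probabilities, one per candidate operator.
    candidate_fns: list of callables, one per candidate operator.
    Only the winning operator is evaluated, which keeps the per-iteration
    cost the same as normal training; scaling its output by its probability
    is what would let gradients reach the controller under autograd.
    """
    j = max(range(len(probs)), key=lambda k: probs[k])  # winner-takes-all
    return probs[j] * candidate_fns[j](x)               # only winner evaluated
```

For example, with probabilities `[0.1, 0.7, 0.2]` only the second candidate runs, and its output is scaled by 0.7.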
NAS Controller: Figure 2 illustrates the design of our micro-architecture search controller. The controller is conditioned on two possible inputs: a trainable prior vector and a graph embedding produced by the graph summarisation module. The graph summarisation module, as its name suggests, summarises the whole graph into a single vector embedding containing the dataset statistics. In this work, we use a simple module with two GCN layers (Kipf and Welling, 2016), each followed by a pooling layer. The first pooling layer is a self-attention pooling layer (Lee et al., 2019) while the last is a global average pooling. The graph summarisation module allows the NAS controller to be conditioned on the input data. However, we found that the performance improvement provided by the graph summarisation path in the controller is minimal, while it adds considerable computational and memory costs. We therefore make this conditioning on data optional, and report experimental results without this branch of data conditioning.
We combine the trainable prior vector and the graph embedding by addition, thereby treating the prior as a trainable bias to the data statistics. The final part of the NAS controller is an MLP, which computes one vector for every sub-block in each of the layers. Each vector is passed through a softmax function to produce a probability vector that controls which operator is active through the function described in Equation 2. This multi-hot vector approach reduces the parameters of the controller considerably compared to a one-hot vector approach, where each whole-architecture configuration is represented as a separate entry in the output vector: the one-hot approach requires an output layer whose size is the product of the numbers of candidate operators across sub-blocks and layers, whereas our multi-hot approach only requires an output layer whose size is their sum. (For clarity, we assume the number of candidate operators remains the same across sub-blocks in each layer.)
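A minimal sketch of the multi-hot controller head follows; the function name and the flat-logit interface are assumptions for illustration. It produces one independent softmax vector per sub-block from slices of a single logit vector, instead of one huge softmax over all whole-architecture configurations.

```python
import math

def controller_forward(prior, subblock_sizes):
    """Sketch of the multi-hot NAS controller head (hypothetical shapes).

    prior: flat list of logits (the trainable prior vector; the optional
    graph-summarisation embedding would be added to it elementwise).
    subblock_sizes: number of candidate operators per sub-block.
    Returns one probability vector per sub-block, each via its own softmax.
    """
    probs, i = [], 0
    for n in subblock_sizes:
        logits = prior[i:i + n]
        m = max(logits)                              # for numerical stability
        exps = [math.exp(v - m) for v in logits]
        z = sum(exps)
        probs.append([e / z for e in exps])
        i += n
    return probs
```

The output-layer size here is `sum(subblock_sizes)` per layer, while enumerating every whole-block configuration would need `prod(subblock_sizes)` entries.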
In initial experiments, we found that when selecting the attention mechanism, the NAS controller usually converges to operators that do not have trainable parameters, such as GCN’s normalised message weighting and constant attention. We hypothesise this is because operators with trainable parameters, such as GAT, take many training iterations to achieve similar performance to parameterless operators like GCN; at the start of training, the controller thus greedily converges to these parameterless operators due to their faster improvement in performance. To enforce more “exploration”, we add noise to the probability distribution over operators generated by our NAS controller at the start of the search, and gradually anneal the noise to 0. Specifically, the noise-added probability vector p̃ is computed as:

p̃ = (p + τ u) / Z,  u ~ U(0, 1)    (3)

Here, U(0, 1) is the uniform distribution the noise is sampled from, τ is a temperature which decreases during the search to anneal the noise, and Z is a normalising factor ensuring p̃ is still a valid probability distribution. While it is possible to use Gumbel-softmax (Jang et al., 2017; Maddison et al., 2017) to achieve the same goal, in practice we found that the controller greedily increases the scale of the input logits, making the Gumbel noise too small to have any effect. We thus enforce the inputs and the noise to be at the same numerical scale using Equation 3.

3.2 Macro-architecture Search
The macro-architecture search determines how graph blocks connect to each other; we call these connections shortcut connections, following the naming conventions in computer vision (Huang et al., 2017). As mentioned earlier, shortcut connections on graph data have been explored in jumping knowledge networks (Xu et al., 2018).
We define S to be a square matrix of trainable priors for shortcut connections, where N is the number of possible graph blocks. Additionally, P denotes the matrix of probabilities of connections between the inputs and outputs of graph blocks through shortcut connections; P has the same dimensions as S, and cyclic connections are not permitted. X = (x_1, …, x_N) is a collection of inputs, where each x_i represents a single graph input from a previous layer. Similarly, Y = (y_1, …, y_N) is a collection of output graphs; these are the original outputs of the graph blocks, with each y_i a single graph output. Ŷ has the same dimensions as Y and combines the shortcut connections with the original outputs. To produce the probabilities of shortcut connections from the trainable priors S, we apply the Gumbel-sigmoid trick (Jang et al., 2017; Maddison et al., 2017), denoted gs, to each individual element of S so as to approximate discrete sampling from a binomial distribution. Gumbel-sigmoid has the form:

gs(s, τ) = sigmoid((s + g) / τ)    (4)

where g is noise sampled from the Gumbel distribution and τ is the temperature controlling the randomness of the Gumbel statistics. As τ decreases, gs produces samples that are more ‘discrete’, meaning that values are closer to the extreme boundaries of 0 and 1. Below, Σ_row reduces a matrix by summing all of its row elements, and ⊙ is the element-wise product between matrices.
Ŷ = Y + Σ_row(P ⊙ F(X))    (5)

Here, F(X) is a matrix collecting the shortcut connections, with entries f_{ij}(x_i), where each f_{ij} is simply a fully connected layer that transforms the hidden unit size. In addition, F is an upper triangular matrix because shortcuts are forward connections — no graph block can connect backwards:

f_{ij} = 0  for all  i ≥ j    (6)

The probability matrix P is likewise upper triangular, with P_{ij} = 0 if i ≥ j; this means an input x_j cannot connect back to a preceding output y_i:

P_{ij} = gs(S_{ij}, τ)  if  i < j,  and  P_{ij} = 0  otherwise    (7)
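The routing mechanism can be sketched as follows, assuming N graph blocks with one trainable logit per potential forward connection. `gumbel_sigmoid` and `shortcut_probabilities` are illustrative names and a plain-Python simplification, not the paper's code.

```python
import math
import random

def gumbel_sigmoid(logit, tau):
    """Gumbel-sigmoid relaxation: a continuous, approximately-binary
    sample from a Bernoulli parameterised by `logit`. Lower temperature
    tau pushes samples towards the extremes 0 and 1."""
    u = min(max(random.random(), 1e-12), 1.0 - 1e-12)
    g = -math.log(-math.log(u))                 # Gumbel(0, 1) noise
    return 1.0 / (1.0 + math.exp(-(logit + g) / tau))

def shortcut_probabilities(priors, tau):
    """Strictly upper-triangular routing probabilities: block i can only
    feed forward into block j > i, so no cyclic connections can arise.
    `priors` is an N x N list-of-lists of trainable logits."""
    n = len(priors)
    return [[gumbel_sigmoid(priors[i][j], tau) if j > i else 0.0
             for j in range(n)]
            for i in range(n)]
```

As the temperature is annealed towards zero over the search, the sampled entries concentrate near 0 or 1, approximating a discrete keep/drop decision for each shortcut.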
For the gs (Gumbel-sigmoid) function, we anneal the temperature τ to balance between random choices and concrete discrete decisions. With t being the current training epoch, T being the maximum number of epochs, k being a constant and t_0 being the starting epoch, we use the following annealing strategy, where in practice k and t_0 are fixed constants:

(8)
3.3 Dual Optimisation
We formulate PDNAS as a bilevel optimisation problem, similar to DARTS (Liu et al., 2019):
min_α L_val(w*(α), α)  s.t.  w*(α) = argmin_w L_train(w, α)    (9)

Here, w denotes the parameters of all candidate operators, and w*(α) the optimal parameters given the architectural parameters α, which comprise the parameters of the micro-architecture search controller and the trainable routing matrix S. L_train is the loss on the training data split, while L_val is the loss on the validation data split. The parameters w and α are trained iteratively, each with its own gradient-descent optimiser. Since it is computationally intractable to compute w*(α) for each update of α, we approximate w*(α) with a few training steps, which has been shown to be effective in DARTS (Liu et al., 2019), gradient-based hyperparameter tuning (Luketina et al., 2016) and unrolled Generative Adversarial Network training (Metz et al., 2016). The full procedure is shown in Algorithm 1. Here, x is the input data, y is the label, I is the maximum number of search iterations, and K is the number of training steps used to approximate w*(α). In each search iteration, we first sample noise for the controller (recall that this noise encourages more exploration at the start of training), and then compute the probabilities of the candidate operators and the indices of the operators with the highest probabilities. We then approximate w*(α) in K steps: in each training step, the parameters w of the selected operators receive gradients from the optimiser using the training loss L_train. Next, we update both sets of architectural parameters (controller and router parameters) with respect to the validation loss L_val. Note that the argmax is replaced by its differentiable approximation to provide gradients to the controller, as discussed in Section 3.1. In practice, we use the Adam optimiser (Kingma and Ba, 2014).
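The alternating bilevel optimisation reduces to a simple training loop. The sketch below is a paraphrase of the procedure under assumed interfaces: `model_step` and `arch_step` are hypothetical callbacks standing in for the Adam updates on the operator weights w and the architecture parameters α.

```python
from itertools import cycle

def search(model_step, arch_step, train_batches, val_batches,
           num_iters, k_inner):
    """Alternate k_inner weight updates (approximating w*(alpha)) with a
    single architecture update on the validation loss."""
    train_it, val_it = cycle(train_batches), cycle(val_batches)
    for _ in range(num_iters):
        for _ in range(k_inner):        # inner loop: train operator weights w
            model_step(next(train_it))  # gradient step on L_train w.r.t. w
        arch_step(next(val_it))         # outer step on L_val w.r.t. alpha
```

The inner loop is the few-step approximation of w*(α); larger `k_inner` gives a better approximation at higher cost per architecture update.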
4 Results
We implemented PDNAS using PyTorch
(Paszke et al., 2019). Operations in Graph Blocks are modified from the GNN implementations in PyTorch Geometric (PyG) (Fey and Lenssen, 2019). For the Cora dataset, to ensure a consistent comparison with GraphNAS (Gao et al., 2019), we used the data splits provided by the Deep Graph Library (Wang et al., 2019); the data splits for all other datasets are from PyG. For all search and training, we used a single Nvidia Tesla V100 GPU unless specified otherwise. We evaluated PDNAS in two learning settings, namely the transductive and inductive settings. For the transductive setting, we used the citation graph datasets (Sen et al., 2008), including Cora, Citeseer and PubMed. For the inductive setting, we considered the Protein-Protein Interaction (PPI) dataset (Zitnik and Leskovec, 2017). In addition, we provide an evaluation of the citation datasets in a fully supervised setting, similar to Xu et al. (2018).

4.1 Citation Datasets
For the citation datasets, we conducted experiments in two widely-used settings, the former following Yang et al. (2016) and the latter following Xu et al. (2018). In this section, we describe the results for both settings.
In the first setting, the training data contains only 20 labelled nodes for each category in the dataset; the validation data contains 500 nodes, while the test data contains 1000 nodes. We used separate learning rates for the model parameters and the architectural parameters, and ran the search for 400 epochs. In Table 2, we present the results of PDNAS in this setting in comparison to graph attention networks (GAT) (Veličković et al., 2018), GraphNAS (Gao et al., 2019) and AGNN (Zhou et al., 2019). The results demonstrate that PDNAS outperforms all existing methods on Cora and PubMed, but is lower than AGNN on Citeseer. In addition to accuracy, we also measured the search wall-clock times of GraphNAS (Gao et al., 2019)
using their open-sourced code (https://github.com/GraphNAS/GraphNAS). Unfortunately, AGNN (Zhou et al., 2019) has no open-source implementation and does not report wall-clock times, making a comparison against it impossible. Table 3 shows the wall-clock times used for searching with GraphNAS and PDNAS. The comparison is conducted with exactly the same software and hardware environments. The time for PDNAS takes into account the hyperparameter search over hidden layer sizes, as discussed in Section 3.1. We see that PDNAS is more than two times faster than GraphNAS at finding the best GNN architecture.

Table 2: Accuracy comparison on the citation datasets.
Methods  Cora  CiteSeer  PubMed

GAT  
GraphNAS  
AGNN  
PDNAS 
Table 3: Wall-clock time used for searching.
Methods  Cora  CiteSeer  PubMed
GraphNAS  11323  16333  26174
PDNAS  5012  6044  10634
Table 4: Comparison between JKNets and PDNAS on the citation datasets.
Model  Layers  Cora  PubMed  Citeseer
  Accuracy  Size  Accuracy  Size  Accuracy  Size
JKNet32  2  48.30K  17.67K  120.74K  
3  49.35K  18.72K  121.80K  
4  50.41K  19.78K  122.85K  
5  51.46K  20.84K  123.91K  
6  52.52K  21.89K  124.97K  
7  53.58K  22.95K  126.02K  
JKNet64  2  98.63K  37.38K  243.53K  
3  102.79K  41.54K  247.69K  
4  106.95K  45.70K  251.85K  
5  111.11K  49.86K  256.01K  
6  115.27K  54.02K  260.17K  
7  119.43K  58.18K  264.33K  
PDNAS  2  48.06K  18.21K  119.65K  
3  50.22K  20.35K  123.59K  
4  51.29K  25.67K  125.00K  
5  57.66K  29.58K  129.97K  
6  61.93K  32.43K  131.40K  
7  68.65K  42.31K  141.64K  

Table 5: Comparison of hand-designed and NAS networks on the PPI dataset.
Model/Method  Type  Layers  F1 Score  Size
GAT  HandDesigned  3  0.89M  
LGCN  HandDesigned  2  0.85M  
JKNetConcat (Xu et al., 2018)  HandDesigned  2    
JKNetLSTM (Xu et al., 2018)  HandDesigned  3    
JKNetDenseLSTM (Xu et al., 2018)  HandDesigned  3    
GraphNAS (Gao et al., 2019)  Reinforcement Learning  3  3.95M  
GraphNAS with sc (Gao et al., 2019)  Reinforcement Learning  3  2.11M  
AGNN (Zhou et al., 2019)  Reinforcement Learning  3  4.60M  
AGNN with sharing (Zhou et al., 2019)  Reinforcement Learning  3  1.60M  
PDNAS  GradientBased  4  2.39M  

It is worth mentioning that the original data splits of the citation datasets are not suitable for training deeper graph networks: the number of available training nodes is significantly smaller than those of both validation and testing. In other words, searching for the best network architecture with a limited number of training samples becomes an optimisation focused on micro-architectures; deeper networks are not applicable on such datasets since overfitting occurs easily with a small number of training samples.
To overcome the issue of the original unfair data splits, in the second setting we randomly repartitioned the datasets into training, validation and testing splits. The random partition remains the same for all networks examined in Table 4. Notably, Xu et al. (2018) also repartitioned their data; however, because their data split masks are unavailable, we chose to re-implement their networks on our own random split. Table 4 shows a comparison between manually designed jumping knowledge networks (JKNets) (Xu et al., 2018) and our searched networks on the citation network datasets (Yang et al., 2016) (Cora, Pubmed and Citeseer). Since the original JKNet can have varying numbers of channels at each layer, we implemented two versions with 32 and 64 channels per layer respectively. For both our search method and JKNets, we sweep the number of layers from 2 to 7. Each accuracy number reported in Table 4 is averaged across independent runs, and the standard deviation among runs is also reported. In practice, the network sizes of the searched networks vary only slightly across independent runs and are thus not shown, for ease of presentation. The results in Table 4 suggest our searched networks outperform JKNet by a significant margin: for the best performing configuration of each model, we observed increases in the average accuracy on Cora, Pubmed and Citeseer (numbers in bold). For both Cora and Pubmed, the best performing searched networks have a higher layer count than JKNets, demonstrating that our search algorithm is effective at finding deeper networks.

4.2 PPI Dataset
Table 5 shows a comparison among several hand-designed networks and various NAS results on the PPI dataset (Zitnik and Leskovec, 2017). The hand-designed networks include graph attention networks (GAT) (Veličković et al., 2018), learnable graph convolutional networks (LGCN) (Gao et al., 2018) and jumping knowledge networks (JKNet) (Xu et al., 2018). The JKNet paper did not report model sizes and its original code base is unavailable, so we do not report their sizes. For the network architecture search results, we compare to GraphNAS (Gao et al., 2019) and AutoGNN (Zhou et al., 2019). Both of these NAS methods are RL-based and do not support searching at the macro-architecture level. Our search method finds a deeper network with the highest F1 score in comparison to the other NAS methods, and PDNAS outperforms both the best hand-designed network and the best NAS network.
5 Conclusion
In this paper, we provide evidence that a differentiable, dual-architecture approach to NAS can outperform current NAS approaches applied to GNNs, both in terms of speed and search quality. The micro-architecture design space is searched using a purely gradient-based approach, and search complexity is reduced using a multi-hot NAS controller. In addition, for the first time, NAS is extended to consider the network’s macro-architecture using a differentiable routing mechanism.
References
Casale, F. P., Gordon, J. and Fusi, N. (2019). Probabilistic neural architecture search. arXiv preprint arXiv:1902.05116.
Fey, M. and Lenssen, J. E. (2019). Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds.
Gao, H., Wang, Z. and Ji, S. (2018). Large-scale learnable graph convolutional networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1416–1424.
Gao, Y., Yang, H., Zhang, P., Zhou, C. and Hu, Y. (2019). GraphNAS: graph neural architecture search with reinforcement learning. arXiv preprint arXiv:1904.09981.
Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O. and Dahl, G. E. (2017). Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pp. 1263–1272.
Hamilton, W., Ying, Z. and Leskovec, J. (2017). Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pp. 1024–1034.
He, K., Zhang, X., Ren, S. and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
Howard, A. G. et al. (2017). MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
Huang, G., Liu, Z., van der Maaten, L. and Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708.
Jang, E., Gu, S. and Poole, B. (2017). Categorical reparameterization with Gumbel-softmax. In International Conference on Learning Representations.
Kingma, D. P. and Ba, J. (2014). Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Kipf, T. N. and Welling, M. (2016). Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
Krizhevsky, A., Sutskever, I. and Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105.
Lee, J., Lee, I. and Kang, J. (2019). Self-attention graph pooling. arXiv preprint arXiv:1904.08082.
Lin, Y., Liu, Z., Sun, M., Liu, Y. and Zhu, X. (2015). Learning entity and relation embeddings for knowledge graph completion. In Twenty-Ninth AAAI Conference on Artificial Intelligence.
Liu, H., Simonyan, K. and Yang, Y. (2019). DARTS: differentiable architecture search. In International Conference on Learning Representations.
Luketina, J., Berglund, M., Greff, K. and Raiko, T. (2016). Scalable gradient-based tuning of continuous regularization hyperparameters. In International Conference on Machine Learning, pp. 2952–2960.
Maddison, C. J., Mnih, A. and Teh, Y. W. (2017). The concrete distribution: a continuous relaxation of discrete random variables. In International Conference on Learning Representations.
Metz, L., Poole, B., Pfau, D. and Sohl-Dickstein, J. (2016). Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163.
Paszke, A. et al. (2019). PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8024–8035.
Sen, P., Namata, G., Bilgic, M., Getoor, L., Galligher, B. and Eliassi-Rad, T. (2008). Collective classification in network data. AI Magazine 29(3), pp. 93–93.
Tan, M. et al. (2019). MnasNet: platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2820–2828.
Vaswani, A. et al. (2017). Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P. and Bengio, Y. (2018). Graph attention networks. In International Conference on Learning Representations.
Wang, M. et al. (2019). Deep Graph Library: towards efficient and scalable deep learning on graphs. In ICLR Workshop on Representation Learning on Graphs and Manifolds.
Wu, B. et al. (2019). FBNet: hardware-aware efficient ConvNet design via differentiable neural architecture search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10734–10742.
Xie, S., Zheng, H., Liu, C. and Lin, L. (2018). SNAS: stochastic neural architecture search. arXiv preprint arXiv:1812.09926.
Xu, K., Li, C., Tian, Y., Sonobe, T., Kawarabayashi, K. and Jegelka, S. (2018). Representation learning on graphs with jumping knowledge networks. In International Conference on Machine Learning, pp. 5449–5458.
Yang, Z., Cohen, W. W. and Salakhutdinov, R. (2016). Revisiting semi-supervised learning with graph embeddings. In Proceedings of the 33rd International Conference on Machine Learning, pp. 40–48.
Yu, F., Koltun, V. and Funkhouser, T. (2017). Dilated residual networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 472–480.
Zhang, T., Qi, G.-J., Xiao, B. and Wang, J. (2017). Interleaved group convolutions. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4373–4382.
Zhou, K., Song, Q., Huang, X. and Hu, X. (2019). Auto-GNN: neural architecture search of graph neural networks. arXiv preprint arXiv:1909.03184.
Zitnik, M. and Leskovec, J. (2017). Predicting multicellular function through multi-layer tissue networks. Bioinformatics 33(14), pp. i190–i198.
Zoph, B. and Le, Q. V. (2017). Neural architecture search with reinforcement learning. In International Conference on Learning Representations.