Probabilistic Dual Network Architecture Search on Graphs

by Yiren Zhao et al.

We present the first differentiable Network Architecture Search (NAS) for Graph Neural Networks (GNNs). GNNs show promising performance on a wide range of tasks, but require a large amount of architecture engineering. First, graphs are inherently a non-Euclidean and sophisticated data structure, leading to poor adaptivity of GNN architectures across different datasets. Second, a typical graph block contains numerous different components, such as aggregation and attention, generating a large combinatorial search space. To counter these problems, we propose a Probabilistic Dual Network Architecture Search (PDNAS) framework for GNNs. PDNAS not only optimises the operations within a single graph block (micro-architecture), but also considers how these blocks should be connected to each other (macro-architecture). The dual architecture (micro- and macro-architecture) optimisation allows PDNAS to find deeper GNNs on diverse datasets with better performance compared to other graph NAS methods. Moreover, we use a fully gradient-based search approach to update architectural parameters, making it the first differentiable graph NAS method. PDNAS outperforms existing hand-designed GNNs and NAS results; for example, on the PPI dataset, PDNAS beats its best competitors by 1.67 and 0.17 in F1 score.








1 Introduction

Graphs are a ubiquitous structure and are widely used in real-life problems, e.g. computational biology (Zitnik and Leskovec, 2017), social networks (Hamilton et al., 2017), knowledge graphs (Lin et al., 2015), etc.

Graph Neural Networks (GNNs) follow a message passing (node aggregation) scheme to gradually propagate information from adjacent nodes at every layer. However, due to the varieties of non-Euclidean data structures, GNNs tend to be less adaptive than traditional convolutional neural networks, thus it is common to re-tune the network architecture for each new dataset. For instance, GraphSage

(Hamilton et al., 2017) shows that networks are sensitive to the number of hidden units on different datasets; jumping knowledge networks demonstrate that the optimal concatenation strategy between layers varies across datasets (Xu et al., 2018). Furthermore, designing a new GNN architecture typically involves a considerably larger design space. A single graph block normally comprises multiple connecting sub-blocks, such as linear layers, aggregation and attention; each sub-block can have multiple candidate operations, yielding a large combinatorial architecture search space. This formidable search space and the lack of transferability of GNN architectures present a great challenge to deploying GNNs rapidly in various real-life scenarios.

Recent advances in neural network architecture search (NAS) methods show promising results on convolutional neural networks and recurrent neural networks (Zoph and Le, 2017; Liu et al., 2019; Casale et al., 2019). NAS methods are also applicable to graph data: recent work uses NAS based on reinforcement learning (RL) for GNNs and achieves state-of-the-art accuracy results (Gao et al., 2019; Zhou et al., 2019). RL-based NAS, however, has the following shortcomings. First, RL requires a full train-and-evaluate cycle for each architecture that is considered, making it computationally expensive (Casale et al., 2019). Second, existing GNN search methods focus only on the micro-architecture. For instance, only the activation function, aggregation method and hidden unit size of each graph convolutional block are considered in the search. However, it has been observed that performance can be improved if shortcut connections, similar to residual connections in CNNs (He et al., 2016), are added to adapt neighbourhood ranges for a better structure-aware representation (Xu et al., 2018). This macro-architecture configuration of how blocks connect to each other via shortcuts is not considered in previous graph NAS methods.

To address these shortcomings we propose a probabilistic dual architecture search. Instead of iteratively evaluating child networks sampled from a parent network using RL, we train a superset of operations with probabilistic priors generated from a NAS controller. The controller learns the probabilistic distributions of candidate operators and picks the most effective one from the superset. For the macro-architecture, we use the Gumbel-sigmoid trick (Jang et al., 2017; Maddison et al., 2017) to relax discrete decisions to be continuous, so that a set of continuous variables can represent the connections between graph blocks. The proposed probabilistic, gradient-based NAS framework optimises both the micro- and macro-architecture of GNNs. Furthermore, we introduce several tricks to improve both the search quality and speed. First, we design the NAS controller to produce multi-hot decision vectors to reduce the combinatorial micro-architecture search dimensions. Second, we use temperature annealing for Gumbel-sigmoid to balance between exploration and convergence. Third, our differentiable search is single-path: only a single operation from the superset is evaluated during each training iteration. This reduces the computation cost of NAS to the same as normal training. In short, we make the following contributions in this paper:

  • We propose the first probabilistic dual network architecture search (PDNAS) method for GNNs. The proposed method uses Gumbel-sigmoid to relax the discrete architectural decision to be continuous for the macro-architecture search.

  • To our knowledge, this is the first graph NAS that explores the macro-architecture space. We demonstrate how this helps deeper GNNs achieve state-of-the-art results.

  • We show several tricks (multi-hot controller, temperature annealing and single-path search) that improve the NAS search speed and quality.

  • We present the performance of the networks discovered by PDNAS and show that they achieve superior accuracy and F1 scores in comparison to other hand-designed and NAS-generated networks.

2 Background

2.1 Network Architecture Search (NAS)

DNNs achieve state-of-the-art results on a wide range of tasks, but tuning the architectures of DNNs on custom datasets is increasingly difficult. One challenge is the growth in the number of different operations that may be employed. In the field of computer vision, for example, simple convolutions and fully connected layers (Krizhevsky et al., 2012) have expanded to include depth-wise separable convolutions (Howard et al., 2017), grouped convolutions (Zhang et al., 2017), dilated convolutions (Yu et al., 2017), etc. This opens up a much larger design space for neural network architectures. Network Architecture Search (NAS) seeks to automate this search for the best DNN architecture. Initially, NAS methods employed reinforcement learning (RL) (Zoph and Le, 2017; Tan et al., 2019): a recurrent neural network acts as a controller and maximises the expected accuracy of the search target on the validation dataset. However, each update of the controller requires a few hours to train a child network to convergence, which significantly increases the search time. Alternatively, Liu et al. (2019) proposed Differentiable Architecture Search (DARTS), a purely gradient-based search: each candidate operation's importance is scored with a trainable scalar and updated using Stochastic Gradient Descent (SGD). Subsequently, Casale et al. (2019) approached the NAS problem from a probabilistic view, transforming the concrete trainable scalars used by DARTS (Liu et al., 2019) into probabilistic priors and training only a few architectures sampled from these priors at each iteration. Wu et al. (2019) and Xie et al. (2018) used the Gumbel-softmax trick to relax discrete operation selection to continuous random variables. Existing NAS methods focus mainly on finding optimal operation choices inside each candidate block (micro-architecture); in our work, we extend the search to consider how blocks are interconnected, i.e. the network's macro-architecture.

2.2 NAS for GNNs

While NAS methods have been developed for image and sequence data, little recent work has applied them to graph-structured data. Gao et al. (2019) first proposed GraphNAS, an RL-based NAS for graph data. Zhou et al. (2019) used a similar RL-based approach (AutoGNN) with a constrained parameter-sharing strategy. However, both of these graph NAS methods focus solely on the micro-architecture space: they search only which operations to apply inside individual graph blocks and do not learn how graph blocks connect to each other. Moreover, these methods are RL-based; to fully train the RL controller, they require many iterations of child-network training to convergence.

In this work we focus on GNNs applied to node classification tasks based on Message-Passing Neural Networks (Gilmer et al., 2017). Most of the manually designed architectures proposed for these tasks fall into this category, such as GCN (Kipf and Welling, 2016), GAT (Veličković et al., 2018), LGCN (Gao et al., 2018) and GraphSage (Hamilton et al., 2017).

3 Method

Figure 1: (a) Overview of PDNAS, showing the controller-output probabilities for operators within each graph block and the probabilities of shortcut connections between graph blocks. (b) Within a graph block, the controller output selects which operators are used for each type of operation. (c) Shortcut connections are gating functions; solid lines are input streams into the router, dashed lines are output streams.

Figure 1(a) shows an overview of PDNAS. In this framework, we formulate the search space for a GNN as a stack of Graph Blocks, with shortcut connections allowing information to skip an arbitrary number of blocks, similar to DenseNet (Huang et al., 2017). A Graph Block is essentially a GNN layer composed of four sub-blocks: a linear layer, an attention layer, an aggregation and an activation function (Figure 1(b)). Each sub-block has a set of candidate operations to search over. A NAS controller determines which operators are active in these sub-blocks during each training iteration. While searching in the micro-architecture space of operations within Graph Blocks, PDNAS also searches in the macro-architecture space of shortcut connections (Figure 1(c)). Shortcut connections are controlled by gating functions parameterised by a routing probability matrix. We discuss the micro-architecture search of a Graph Block in Section 3.1, the macro-architecture search of shortcut-connection routing in Section 3.2 and the dual optimisation of architectural parameters in Section 3.3.

3.1 Micro-Architecture Search

In this work, we consider GNNs based on the message-passing mechanism. In each GNN layer, nodes aggregate attention-weighted messages from their neighbours and combine these messages with their own features. Formally, each GNN layer can be described as:

$$h_v^{(l+1)} = \sigma\Big(\mathrm{combine}\big(f(h_v^{(l)}),\ \mathrm{aggr}(\{\alpha_{uv} f(h_u^{(l)}) : u \in \mathcal{N}(v)\})\big)\Big)$$

Here $f$ is a transformation operation for features. In GNNs, $f$ is typically a linear transformation of the form $f(h) = Wh$. $\mathcal{N}(v)$ is the set of neighbouring nodes of node $v$. $\alpha_{uv}$ are the attention parameters for messages passed from neighbouring nodes. $\mathrm{aggr}$ is an aggregation operation for the messages received. $\mathrm{combine}$ is an operation for combining aggregated messages with the features of the current node. $\sigma$ is a non-linear activation. For each of the above operations, there are several candidates to search amongst. In this work, we consider the following micro-architecture search space:
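The layer above can be sketched in PyTorch. The concrete choices below (linear transformation, sum aggregation, additive combine, ReLU) are one illustrative point in the search space, not the paper's searched architecture, and the class name is hypothetical:

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """Minimal message-passing layer sketch: h' = ReLU(f(h_v) + sum of
    attention-weighted, transformed neighbour messages)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.f = nn.Linear(d_in, d_out)  # transformation f
        self.act = nn.ReLU()             # activation sigma

    def forward(self, h, edge_index, alpha):
        # h: [N, d_in] node features; edge_index: [2, E] (source, target);
        # alpha: [E] attention weight per message
        src, dst = edge_index
        transformed = self.f(h)                            # f(h) for all nodes
        msg = alpha.unsqueeze(-1) * transformed[src]       # weighted messages
        agg = torch.zeros_like(transformed)
        agg.index_add_(0, dst, msg)                        # sum-aggregate per node
        return self.act(transformed + agg)                 # combine = add
```

Swapping `index_add_` for a segment-wise mean or max, or replacing `alpha` with a learned attention function, moves to a different point in the same search space.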

Attention Type | Equation
Table 1: Different types of attention mechanisms. $a$ is the parameter vector for attention, $\cdot$ is the dot product, and $\alpha_{uv}$ is the attention for the message from node $u$ to node $v$.
  • Transformation function: we formulate $f$ as $f(h) = W_2(W_1 h)$, where $W_1 \in \mathbb{R}^{e \times d_{in}}$ and $W_2 \in \mathbb{R}^{d_{out} \times e}$. $d_{in}$ and $d_{out}$ are the input and output dimensions for $f$ respectively, and $e$ is the expansion dimension, similar to Tan et al. (2019). We let $e$ range over a small set of multiples of the layer dimension, which forms the search space for $e$. While it is possible to search for the output dimension of $f$, this incurs large memory costs because a quadratic number of candidate operators is needed for each layer: letting the number of candidate hidden dimensions be $h$, each layer would require $h^2$ candidate operators to map input dimensions to output dimensions. In our setup, we thus leave input dimensions as hyper-parameters determined via a grid search.

  • Attention mechanism: the attention parameters $\alpha_{uv}$ are computed by attention functions that may depend on the features of the current node and its neighbours. While any attention function can be included in the set of candidate operators, we use the attention functions that appear in GraphNAS (Gao et al., 2019) and AGNN (Zhou et al., 2019) to enable a fair comparison. Table 1 lists all attention functions considered in our search.

  • Attention head: multi-head attention (Vaswani et al., 2017) means that multiple attention heads are used and computed in parallel. For PDNAS, the numbers of heads searched are taken from a small fixed set.

  • Aggregation function: messages from neighbouring nodes are aggregated by the function $\mathrm{aggr}$. In our experiments, we include three widely-used options: sum, mean and max.

  • Combine function: aggregated neighbouring messages are combined with the current node feature by the function $\mathrm{combine}$. We examined two options, add and concat. add is simply the addition of the aggregated messages from neighbouring nodes to the current node feature, while concat concatenates the aggregated messages with the node feature and then processes the result with a multi-layer perceptron (MLP). In practice, we found that one option consistently outperforms the other, and thus removed the search for the combine function in our final implementation.

  • Activation function: the final output of a GNN layer passes through a non-linear activation function $\sigma$. The candidate functions for $\sigma$ include “None”, “Sigmoid”, “Tanh”, “Softplus”, “ReLU”, “LeakyReLU”, “ReLU6” and “ELU”. Please refer to Appendix A for details of each activation function.
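The search space above can be written down explicitly as a table of candidates per sub-block. The attention names and head counts below are assumed example values (the exact sets are not fully recoverable from this text); the aggregation and activation candidates follow the lists above:

```python
# Illustrative micro-architecture search space for one Graph Block.
# The "attention" and "heads" entries are assumptions, not the paper's sets.
MICRO_SEARCH_SPACE = {
    "attention": ["const", "gcn", "gat", "sym-gat", "cos", "linear", "gene-linear"],
    "heads": [1, 2, 4, 8],  # assumed example set
    "aggregation": ["sum", "mean", "max"],
    "activation": ["none", "sigmoid", "tanh", "softplus",
                   "relu", "leaky_relu", "relu6", "elu"],
}

def num_micro_choices(space):
    """Size of the combinatorial micro-architecture space for one block."""
    n = 1
    for candidates in space.values():
        n *= len(candidates)
    return n
```

Even this modest per-block space multiplies across blocks, which is why the multi-hot controller described below avoids enumerating whole architectures.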

A Graph Block is similar to a cell employed in the CNN NAS algorithm DARTS (Liu et al., 2019). DARTS uses a weighted sum to combine the outputs of all candidate operators. In PDNAS, we instead use the $\arg\max$ function, which allows only one candidate operator to be active in each training iteration. Let $B_{i,j}$ be the $j$-th sub-block in Graph Block $i$, and $o_{i,j,k}$ be the $k$-th candidate operator for $B_{i,j}$. $B_{i,j}(x)$ is then computed as:

$$B_{i,j}(x) = o_{i,j,k^*}(x), \quad k^* = \arg\max_k\ p_{i,j,k}$$

Here, $p_{i,j,k}$ is the probability of the $k$-th candidate operator of sub-block $j$ and layer $i$ assigned by the NAS Controller. This hard-max approach considerably reduces memory and computational cost since only one operation is active at any training iteration, whilst still converging, as shown by Wu et al. (2019) and Xie et al. (2018). While the $\arg\max$ function is non-differentiable, we use a differentiable approximation which ensures that the controller receives learning signals. The operator selection is implemented by casting $\arg\max(p_{i,j})$ as a one-hot vector to select from the outputs of the candidate operators. We multiply this vector with $p_{i,j}$ to allow gradients to be back-propagated through to the controller. This is the same as adding winner-takes-all to the softmax-weighted summation used in DARTS (Liu et al., 2019), and is also known as single-path NAS. In single-path NAS, only the winning operation is evaluated during each training iteration; the forward and backward passes through the unselected operators are not evaluated, which in turn reduces the computational and memory costs of each iteration to the same as normal training.
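The straight-through, single-path selection can be sketched as follows. This is a minimal sketch of the mechanism described above, not the authors' code; for clarity it evaluates all candidates, whereas a real single-path implementation would evaluate only the winner:

```python
import torch

def single_path_select(p, candidate_outputs):
    """Straight-through single-path selection: the forward pass uses only the
    argmax operator's output, while gradients still flow into the probability
    vector p so the controller receives a learning signal."""
    k = torch.argmax(p)
    one_hot = torch.zeros_like(p)
    one_hot[k] = 1.0
    # Numerically equal to one_hot, but differentiable with respect to p.
    mask = one_hot + p - p.detach()
    return sum(m * o for m, o in zip(mask, candidate_outputs))
```

In the forward pass only the `k`-th output contributes; in the backward pass each probability `p[i]` receives a gradient proportional to its candidate's output.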

Figure 2: Micro-architecture search controller overview. Here $z$ is the trainable prior and “MLP” denotes a multi-layer perceptron. $p_{i,j}$ is the probability vector for operation $j$ in layer $i$. Dashed lines mark an optional path.

NAS Controller: Figure 2 illustrates the design of our micro-architecture search controller. The controller is conditioned on two possible inputs: a trainable prior vector $z$ and a graph embedding $g$ produced by the graph summarisation module. The graph summarisation module, as its name suggests, summarises the whole graph into a single vector embedding containing the dataset's statistics. In this work, we use a simple module with two GCN layers (Kipf and Welling, 2016), each followed by a pooling layer: the first is a self-attention pooling layer (Lee et al., 2019) and the last is a global average pooling. The graph summarisation module allows the NAS controller to be conditioned on the input data. We found, however, that the performance improvement provided by the graph summarisation path in the controller is minimal, while it causes considerable additional computational and memory costs. We therefore make this conditioning on data optional, and report experiment results without this branch of data conditioning.

We combine $z$ and $g$ by addition, thereby treating $z$ as a trainable bias on the data statistics. The final part of the NAS controller is an MLP, which computes one vector for every sub-block in each of the layers. Each vector is passed through a softmax function to produce a probability vector that controls which operator is active via the $\arg\max$ function described in Equation 2. This multi-hot vector approach reduces the parameters of the controller considerably compared to a one-hot approach, where each whole-architecture configuration is represented as a separate entry in the output vector: with $L$ layers, $S$ sub-blocks per layer and $K$ candidates per sub-block, the one-hot approach needs an output layer of size $O(K^{LS})$, whereas our multi-hot approach only requires an output layer of size $O(LSK)$.¹

¹ For clarity, we assume the number of candidate operators remains the same across sub-blocks in each layer.
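A multi-hot controller of this shape can be sketched as below. The hidden width and two-layer MLP are illustrative assumptions; the essential point is the output reshaped to one softmax distribution per sub-block per layer:

```python
import torch
import torch.nn as nn

class MicroController(nn.Module):
    """Multi-hot NAS controller sketch: a trainable prior z, optionally biased
    by a graph embedding g, is mapped by an MLP to (layers x sub-blocks x
    candidates) probabilities. Layer sizes are illustrative."""
    def __init__(self, n_layers, n_subblocks, n_candidates, d=64):
        super().__init__()
        self.z = nn.Parameter(torch.zeros(d))  # trainable prior
        self.mlp = nn.Sequential(
            nn.Linear(d, d), nn.ReLU(),
            nn.Linear(d, n_layers * n_subblocks * n_candidates),
        )
        self.shape = (n_layers, n_subblocks, n_candidates)

    def forward(self, g=None):
        x = self.z if g is None else self.z + g  # z acts as bias on data stats
        logits = self.mlp(x).view(self.shape)
        return torch.softmax(logits, dim=-1)     # one distribution per sub-block
```

The output size grows linearly in layers, sub-blocks and candidates, rather than exponentially as a one-hot whole-architecture encoding would.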

In initial experiments, we found that when selecting the attention mechanism, the NAS controller usually converges to operators that have no trainable parameters, such as GCN's normalised message weighting and constant attention. We hypothesise this is because operators with trainable parameters, such as GAT, take many training iterations to reach performance similar to parameter-less operators like GCN; at the start of training the controller thus greedily converges to the parameter-less operators because they improve performance faster. To enforce more “exploration”, we add noise to the probability distribution over operators generated by our NAS controller at the start of the search, and gradually anneal the noise to 0. Specifically, the noise-added probability vector $\tilde{p}$ is computed as:

$$\tilde{p} = \frac{p + \tau u}{Z}, \quad u \sim U(0, 1)$$

$U(0, 1)$ is the uniform distribution from which the noise is sampled, $\tau$ is a temperature which decreases during the search to anneal the noise, and $Z$ is a normalising factor ensuring $\tilde{p}$ is still a valid probability distribution. While it is possible to use Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2017) to achieve the same goal, in practice we found that the controller greedily increases the scale of the input logits, making the Gumbel noise too small to have any effect. Thus we enforce the inputs and the noise to be at the same numerical scale using Equation 3.
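The exploration noise of Equation 3 amounts to a few lines. This is a sketch under the assumption that the normaliser $Z$ simply rescales the result back to a probability distribution:

```python
import torch

def noisy_probs(p, tau):
    """Add uniform exploration noise at temperature tau and renormalise:
    p_tilde = (p + tau * u) / Z with u ~ U(0, 1)."""
    u = torch.rand_like(p)
    p_tilde = p + tau * u
    return p_tilde / p_tilde.sum(dim=-1, keepdim=True)
```

As `tau` anneals to 0 the noise vanishes and the controller's own distribution takes over.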

3.2 Macro-architecture Search

The macro-architecture search determines how graph blocks connect to each other; we call these links shortcut connections, following naming conventions in computer vision (Huang et al., 2017). As mentioned earlier, shortcut connections on graph data have been explored in jumping knowledge networks (Xu et al., 2018).

We define $W \in \mathbb{R}^{N \times N}$ to be a square matrix of trainable priors for shortcut connections, where $N$ is the number of graph blocks. $P$ denotes the collection of probabilities of connection between the inputs and outputs of graph blocks through shortcut connections, and has the same dimensions as $W$. Cyclic connections are not permitted. $X = \{x_1, \dots, x_N\}$ is a collection of inputs, where each $x_i$ represents a single graph input from a previous layer. Similarly, $Y = \{y_1, \dots, y_N\}$ is a collection of output graphs, the original outputs of the graph blocks, with $y_i$ being a single graph output. $\hat{X}$ has the same dimensions as $X$, and is the combination of the shortcut connections and the original outputs. To produce the probabilities of shortcut connections from the trainable priors $W$, we apply the Gumbel-sigmoid trick (Jang et al., 2017; Maddison et al., 2017) (denoted $\mathrm{gs}$) to each individual element of $W$ so as to approximate discrete sampling from a binomial distribution. Gumbel-sigmoid has the form:

$$\mathrm{gs}(w, \tau) = \mathrm{sigmoid}\left(\frac{w + g}{\tau}\right)$$

where $g$ is noise sampled from the Gumbel distribution, and $\tau$ is the temperature controlling the randomness of the Gumbel statistics. As $\tau$ decreases, $\mathrm{gs}$ samples values that are more “discrete”, meaning values closer to the extreme boundaries of 0 and 1. $\mathrm{rs}$ reduces a matrix by summing all row elements, and $\odot$ is the element-wise product between matrices.
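A Gumbel-sigmoid sample can be drawn as below. The exact noise form, the difference of two Gumbel(0, 1) samples, is an assumption consistent with the definition above:

```python
import torch

def gumbel_sigmoid(w, tau):
    """Gumbel-sigmoid relaxation: sigmoid((w + g) / tau), where g is the
    difference of two Gumbel(0, 1) samples. Low tau pushes samples toward
    the discrete extremes 0 and 1."""
    g1 = -torch.log(-torch.log(torch.rand_like(w)))  # Gumbel(0, 1)
    g2 = -torch.log(-torch.log(torch.rand_like(w)))
    return torch.sigmoid((w + g1 - g2) / tau)
```

At high `tau` the samples hover near `sigmoid(w)`; as `tau` anneals down they concentrate near 0 or 1, approximating a discrete on/off gate while remaining differentiable in `w`.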


$$\hat{X} = X + \mathrm{rs}\big(P \odot F(Y)\big)$$

Here, $F$ is a collection of shortcut connections $f_{i,j}$, each simply a fully connected layer that transforms the hidden unit size. In addition, $F$ is an upper triangular matrix because shortcuts are forward connections: no graph block can connect backwards. We also have the probability matrix $P$ with entries $p_{i,j} = \mathrm{gs}(w_{i,j}, \tau)$; note that $P$ is an upper triangular matrix with $p_{i,j} = 0$ if $i \geq j$, meaning an input cannot connect back to a preceding output.
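The acyclicity constraint on the routing probabilities reduces to an upper-triangular mask on the gated prior matrix. A minimal sketch (using a plain sigmoid where the search would use Gumbel-sigmoid; function and argument names are illustrative):

```python
import torch

def shortcut_probs(w_prior, tau):
    """Shortcut-connection probabilities: gate each trainable prior through a
    (relaxed) sigmoid, then zero the diagonal and lower triangle so that
    p_ij = 0 for i >= j, i.e. no backward or self connections."""
    n = w_prior.size(0)
    mask = torch.triu(torch.ones(n, n), diagonal=1)  # strictly upper triangular
    return torch.sigmoid(w_prior / tau) * mask
```

During search, replacing `torch.sigmoid` with a Gumbel-sigmoid sample makes the gating stochastic while keeping the same triangular structure.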
For the $\mathrm{gs}$ (Gumbel-sigmoid) function, we anneal the temperature $\tau$ to balance between random choices and concrete discrete decisions. With $t$ being the current training epoch, $T$ the maximum number of epochs, $k$ a constant and $t_0$ the starting epoch, we use an annealing strategy that decays $\tau$ over the course of training; the values of $k$ and $t_0$ are fixed in practice.


3.3 Dual Optimisation

We formulate PDNAS as a bi-level optimisation problem, similar to DARTS (Liu et al., 2019):

$$\min_{\alpha}\ \mathcal{L}_{val}\big(w^*(\alpha), \alpha\big) \quad \text{s.t.} \quad w^*(\alpha) = \arg\min_{w}\ \mathcal{L}_{train}(w, \alpha)$$

Here $w$ are the parameters of all candidate operators and $w^*(\alpha)$ is the optimal $w$ given $\alpha$, where $\alpha$ represents the parameters of the micro-architecture search controller and the trainable routing matrix $W$. $\mathcal{L}_{train}$ is the loss on the training data split, while $\mathcal{L}_{val}$ is the loss on the validation data split. The parameters $w$ and $\alpha$ are trained iteratively, each with its own gradient descent optimiser. Since it is computationally intractable to compute $w^*(\alpha)$ for each update of $\alpha$, we approximate $w^*(\alpha)$ with a few training steps, which is shown to be effective in DARTS (Liu et al., 2019), gradient-based hyper-parameter tuning (Luketina et al., 2016) and unrolled Generative Adversarial Network training (Metz et al., 2016). The full procedure is shown in Algorithm 1. Here $x$ is the input data, $y$ the labels, $T$ the maximum number of search iterations, and $n$ the number of training steps used to approximate $w^*(\alpha)$. In each search iteration, we first sample noise for the controller (recall that this noise encourages more exploration at the start of training), and then compute the probabilities $p$ of the candidate operators and the indices of the operators with the highest probabilities. We then approximate $w^*(\alpha)$ in $n$ steps: in each training step, the parameters $w$ of the selected operators receive gradients from the optimiser using the training loss $\mathcal{L}_{train}$. Next, we update both sets of architectural parameters (controller and router parameters) with respect to the validation loss $\mathcal{L}_{val}$. Note that the $\arg\max$ is replaced by its differentiable approximation to provide gradients to the controller, as discussed in Section 3.1. In practice we use the Adam optimiser (Kingma and Ba, 2014).

  Input: x, y, T, n, L_train, L_val
  for t = 1 to T do
      τ = TempAnneal(t)
      u = SampleNoise()
      p = Controller(z, u)
      for s = 1 to n do
          update w by descending its gradient on L_train
      end for
      update α (controller and router) by descending its gradient on L_val
  end for
Algorithm 1 Dual Architecture Optimisation

4 Results

We implemented PDNAS using PyTorch (Paszke et al., 2019). Operations in Graph Blocks are modified from the GNN implementations in PyTorch Geometric (PyG) (Fey and Lenssen, 2019). For the Cora dataset, to ensure a consistent comparison with GraphNAS (Gao et al., 2019), we used the data splits provided by the Deep Graph Library (Wang et al., 2019); the data splits for all other datasets are from PyG. For all search and training, we used a single Nvidia Tesla V100 GPU unless specified otherwise. We evaluated PDNAS in two learning settings, namely the transductive and inductive settings. For the transductive setting we used the citation graph datasets (Sen et al., 2008), including Cora, Citeseer and PubMed. For the inductive setting, we considered the Protein-Protein Interaction (PPI) dataset (Zitnik and Leskovec, 2017). In addition, we provide an evaluation of the citation datasets in a fully supervised setting, similar to Xu et al. (2018).

4.1 Citation Datasets

For the citation datasets, we conducted experiments in two widely-used settings, the former following Yang et al. (2016) and the latter following Xu et al. (2018). In this section, we describe the results for both settings.

In the first setting, the training data contains only 20 labelled nodes per category in the dataset; the validation data contains 500 nodes and the test data contains 1000 nodes. We used separate learning rates for the model parameters $w$ and the architectural parameters $\alpha$, and ran the search for 400 epochs. In Table 2, we present the results of PDNAS in this setting in comparison to graph attention networks (GAT) (Veličković et al., 2018), GraphNAS (Gao et al., 2019) and AGNN (Zhou et al., 2019). The results demonstrate that PDNAS outperforms all existing methods on Cora and PubMed, but is lower than AGNN on Citeseer. In addition to accuracy, we also measured the search wall clock times of GraphNAS (Gao et al., 2019) using their open-sourced code.² Table 3 shows the wall clock times used for searching with GraphNAS and PDNAS; the comparison is conducted in exactly the same software and hardware environment. The time for PDNAS takes into account the hyper-parameter search over hidden layer sizes, as discussed in Section 3.1. We see that PDNAS is more than two times faster than GraphNAS at finding the best GNN architecture.

² Unfortunately, AGNN (Zhou et al., 2019) has neither an open-source implementation nor reported wall clock times, making a comparison impossible.

Methods Cora CiteSeer PubMed
Table 2: Accuracy comparison on Cora, Pubmed and Citeseer with data splits same as Yang et al. (2016). Our results are averaged across 3 independent runs. The numbers in bold show best accuracies.

Methods Cora CiteSeer PubMed
GraphNAS 11323 16333 26174
PDNAS 5012 6044 10634
Table 3: Comparison of wall clock time (measured in seconds) used on Cora, Pubmed and Citeseer with GraphNAS (Gao et al., 2019).

Model Layers Cora PubMed Citeseer
Accuracy Size Accuracy Size Accuracy Size
JKNet-32 2 48.30K 17.67K 120.74K
3 49.35K 18.72K 121.80K
4 50.41K 19.78K 122.85K
5 51.46K 20.84K 123.91K
6 52.52K 21.89K 124.97K
7 53.58K 22.95K 126.02K
JKNet-64 2 98.63K 37.38K 243.53K
3 102.79K 41.54K 247.69K
4 106.95K 45.70K 251.85K
5 111.11K 49.86K 256.01K
6 115.27K 54.02K 260.17K
7 119.43K 58.18K 264.33K
PDNAS 2 48.06K 18.21K 119.65K
3 50.22K 20.35K 123.59K
4 51.29K 25.67K 125.00K
5 57.66K 29.58K 129.97K
6 61.93K 32.43K 131.40K
7 68.65K 42.31K 141.64K

Table 4: Accuracy and size comparison on Cora, Pubmed and Citeseer; the data is split into training, validation and testing as described in Section 4.1. JKNet-n is our implementation of a jumping knowledge network with concatenation as shortcut aggregation; n is the channel count of each layer. The numbers in bold are the best accuracies for each model on the target datasets; shaded numbers are the best on each dataset across models. All accuracies are reported as averages over 3 independent runs.

Model/Method Type Layers F1 Score Size
GAT Hand-Designed 3 0.89M
LGCN Hand-Designed 2 0.85M
JKNet-Concat (Xu et al., 2018) Hand-Designed 2 -
JKNet-LSTM (Xu et al., 2018) Hand-Designed 3 -
JKNet-Dense-LSTM (Xu et al., 2018) Hand-Designed 3 -
GraphNAS (Gao et al., 2019) Reinforcement Learning 3 3.95M
GraphNAS with sc (Gao et al., 2019) Reinforcement Learning 3 2.11M
AGNN (Zhou et al., 2019) Reinforcement Learning 3 4.60M
AGNN with sharing (Zhou et al., 2019) Reinforcement Learning 3 1.60M
PDNAS Gradient-Based 4 2.39M

Table 5: Accuracy and size comparison on PPI. Some baseline entries are implementations from Zhou et al. (2019). The numbers in bold are the best F1 scores across all models on this dataset; all F1 scores are reported as averages over 3 independent runs.

It is worth mentioning that the original data splits of the citation datasets are not suitable for training deeper graph networks: the number of available training nodes is significantly smaller than in both the validation and test sets. In other words, searching for the best network architecture with a limited number of training samples becomes an optimisation focused on micro-architectures; deeper networks are not applicable on such splits since over-fitting occurs easily with a small number of training samples.

To overcome the issue of the original unfair data splits, in the second setting we randomly repartitioned the datasets into training, validation and test sets. The random partition remains the same for all networks examined in Table 4. Notably, Xu et al. (2018) also repartitioned their data to the same split ratios; however, because their data split masks are unavailable, we chose to reimplement their networks on our own random split. Table 4 shows a comparison between manually-designed jumping knowledge networks (JKNets) (Xu et al., 2018) and our searched networks on the citation network datasets (Yang et al., 2016) (Cora, Pubmed and Citeseer). Since the original JKNet can have varying numbers of channels at each layer, we implemented two versions with 32 and 64 channels per layer respectively. For both our search method and JKNets, we sweep the number of layers from 2 to 7. Each accuracy number reported in Table 4 is averaged across 3 independent runs, and the standard deviation among the runs is also reported. In practice, the sizes of the searched networks vary only slightly across independent runs and are thus not shown, for ease of presentation. The results in Table 4 suggest our searched networks outperform JKNet by a significant margin: for the best performing configuration of each model, we observed consistent increases in average accuracy on Cora, Pubmed and Citeseer (numbers in bold). For both Cora and Pubmed, the best performing searched networks have a higher layer count than the JKNets, demonstrating that our search algorithm is effective at finding deeper networks.

4.2 PPI dataset

Table 5 shows a comparison among several hand-designed networks and various NAS results on the PPI dataset (Zitnik and Leskovec, 2017). The hand-designed networks include graph attention networks (GAT) (Veličković et al., 2018), learnable graph convolutional networks (LGCN) (Gao et al., 2018) and jumping knowledge networks (JKNet) (Xu et al., 2018); the JKNet papers did not report model sizes and the original code base is unavailable, so we do not report their sizes. For the network architecture search results, we compare to GraphNAS (Gao et al., 2019) and AutoGNN (Zhou et al., 2019). Both of these NAS methods are RL-based and do not support searching at the macro-architecture level. As a result, our search method finds a deeper network with the highest F1 score among the NAS methods. PDNAS outperforms the best hand-designed network and the best NAS-searched network by 1.67 and 0.17 in F1 score respectively.

5 Conclusion

In this paper, we provide evidence that a differentiable, dual-architecture approach to NAS can outperform current NAS approaches applied to GNNs in both speed and search quality. The micro-architecture design space is searched with a purely gradient-based approach, and search complexity is reduced using a multi-hot NAS controller. In addition, for the first time, NAS is extended to consider the network's macro-architecture through a differentiable routing mechanism.
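The gradient-based search referred to above rests on relaxing discrete architectural choices with the Gumbel-softmax trick (Jang et al., 2017; Maddison et al., 2017). The sketch below shows the relaxation for a single categorical choice; it is an illustration of the underlying technique, not the paper's controller, which applies such relaxations per graph-block component to form a multi-hot selection:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable relaxation of categorical sampling.

    Adds Gumbel noise to the (learnable) logits and applies a
    temperature-controlled softmax; as tau -> 0 the output approaches
    a one-hot choice, while staying differentiable w.r.t. the logits.
    """
    if rng is None:
        rng = np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = y - y.max()  # subtract max for numerical stability
    e = np.exp(y)
    return e / e.sum()
```

In an actual search the logits would be framework tensors (e.g. PyTorch parameters) updated by gradient descent alongside the network weights, with the temperature annealed over training to sharpen the architectural choices.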


  • F. P. Casale, J. Gordon, and N. Fusi (2019) Probabilistic neural architecture search. arXiv preprint arXiv:1902.05116. Cited by: §1, §2.1.
  • M. Fey and J. E. Lenssen (2019) Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, Cited by: §4.
  • H. Gao, Z. Wang, and S. Ji (2018) Large-scale learnable graph convolutional networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1416–1424. Cited by: §2.2, §4.2.
  • Y. Gao, H. Yang, P. Zhang, C. Zhou, and Y. Hu (2019) GraphNAS: graph neural architecture search with reinforcement learning. arXiv preprint arXiv:1904.09981. Cited by: §1, §2.2, 2nd item, §4.1, §4.2, Table 3, Table 5, §4.
  • J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl (2017) Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1263–1272. Cited by: §2.2.
  • W. Hamilton, Z. Ying, and J. Leskovec (2017) Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pp. 1024–1034. Cited by: §1, §2.2.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §1.
  • A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam (2017) Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. Cited by: §2.1.
  • G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700–4708. Cited by: §3.2, §3.
  • E. Jang, S. Gu, and B. Poole (2017) Categorical reparameterization with gumbel-softmax. In International Conference on Learning Representations. Cited by: §1, §3.1, §3.2.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §3.3.
  • T. N. Kipf and M. Welling (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Cited by: §2.2, §3.1.
  • A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §2.1.
  • J. Lee, I. Lee, and J. Kang (2019) Self-attention graph pooling. arXiv preprint arXiv:1904.08082. Cited by: §3.1.
  • Y. Lin, Z. Liu, M. Sun, Y. Liu, and X. Zhu (2015) Learning entity and relation embeddings for knowledge graph completion. In Twenty-ninth AAAI conference on artificial intelligence. Cited by: §1.
  • H. Liu, K. Simonyan, and Y. Yang (2019) DARTS: differentiable architecture search. In International Conference on Learning Representations, External Links: Link Cited by: §1, §2.1, §3.1, §3.3.
  • J. Luketina, M. Berglund, K. Greff, and T. Raiko (2016) Scalable gradient-based tuning of continuous regularization hyperparameters. In International conference on machine learning, pp. 2952–2960. Cited by: §3.3.
  • C. J. Maddison, A. Mnih, and Y. W. Teh (2017) The concrete distribution: a continuous relaxation of discrete random variables. In International Conference on Learning Representations. Cited by: §1, §3.1, §3.2.
  • L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein (2016) Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163. Cited by: §3.3.
  • A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8024–8035. Cited by: §4.
  • P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad (2008) Collective classification in network data. AI magazine 29 (3), pp. 93–93. Cited by: §4.
  • M. Tan, B. Chen, R. Pang, V. Vasudevan, M. Sandler, A. Howard, and Q. V. Le (2019) MNASNet: platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2820–2828. Cited by: §2.1, 1st item.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: 3rd item.
  • P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio (2018) Graph Attention Networks. International Conference on Learning Representations. External Links: Link Cited by: §2.2, §4.1, §4.2.
  • M. Wang, L. Yu, D. Zheng, Q. Gan, Y. Gai, Z. Ye, M. Li, J. Zhou, Q. Huang, C. Ma, Z. Huang, Q. Guo, H. Zhang, H. Lin, J. Zhao, J. Li, A. J. Smola, and Z. Zhang (2019) Deep graph library: towards efficient and scalable deep learning on graphs. ICLR Workshop on Representation Learning on Graphs and Manifolds. Cited by: §4.
  • B. Wu, X. Dai, P. Zhang, Y. Wang, F. Sun, Y. Wu, Y. Tian, P. Vajda, Y. Jia, and K. Keutzer (2019) FBNET: hardware-aware efficient convnet design via differentiable neural architecture search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10734–10742. Cited by: §2.1, §3.1.
  • S. Xie, H. Zheng, C. Liu, and L. Lin (2018) SNAS: stochastic neural architecture search. arXiv preprint arXiv:1812.09926. Cited by: §2.1, §3.1.
  • K. Xu, C. Li, Y. Tian, T. Sonobe, K. Kawarabayashi, and S. Jegelka (2018) Representation learning on graphs with jumping knowledge networks. In International Conference on Machine Learning, pp. 5449–5458. Cited by: §1, §1, §3.2, §4.1, §4.1, §4.2, Table 5, §4.
  • Z. Yang, W. W. Cohen, and R. Salakhutdinov (2016) Revisiting semi-supervised learning with graph embeddings. In Proceedings of the 33rd International Conference on International Conference on Machine Learning-Volume 48, pp. 40–48. Cited by: §4.1, §4.1, Table 2.
  • F. Yu, V. Koltun, and T. Funkhouser (2017) Dilated residual networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 472–480. Cited by: §2.1.
  • T. Zhang, G. Qi, B. Xiao, and J. Wang (2017) Interleaved group convolutions. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4373–4382. Cited by: §2.1.
  • K. Zhou, Q. Song, X. Huang, and X. Hu (2019) Auto-GNN: neural architecture search of graph neural networks. arXiv preprint arXiv:1909.03184. Cited by: §1, §2.2, 2nd item, §4.1, §4.2, Table 5.
  • M. Zitnik and J. Leskovec (2017) Predicting multicellular function through multi-layer tissue networks. Bioinformatics 33 (14), pp. i190–i198. Cited by: §1, §4.2, §4.
  • B. Zoph and Q. V. Le (2017) Neural architecture search with reinforcement learning. In International Conference on Learning Representations. Cited by: §1, §2.1.