This spatial/temporal structure is used to build a tensor such that the order of the feature vectors in the tensor exhibits the inherent spatial/temporal structure of the data. For instance, nearby pixels in an image are spatially correlated, and nearby embedding vectors (in the matrix which represents a sentence) are temporally correlated. This spatial/temporal “order” of the data is exploited effectively by existing neural networks. For instance, the Convolutional Neural Network (CNN) leverages the local coherence of the pixels to perform computationally efficient local feature extraction. In addition, the spatial order of the feature vectors is used to efficiently down-sample (pool) the extracted local features and perform feature aggregation. A graph can also represent the order of the feature vectors; the sequence structure in text data and the 2D structure in an image can be viewed as special instances of graphs. If each node represents a feature vector, the structure of the graph exhibits a topological order of the feature vectors, which can be utilized to perform local and non-local feature aggregation via graph down-sampling or spectral graph convolution.
However, in many applications there is no fixed structure which describes all the given data. For instance, in some applications, although each data sample in the dataset is represented by a graph, the graphs are not necessarily the same. The graphs which represent different toxic molecules differ, and a toxic molecule detector should be able to handle graphs with different structures. In this paper, we focus on the graph classification task, where despite the categorical similarities, the graphs within a class might be substantially different. Since a fixed structure cannot describe all the data samples, deep learning based graph classification approaches are typically limited to extracting local features and aggregating all of them with an indiscriminate aggregation function such as the element-wise max/mean function [6, 2, 1]. If one aims to use the graph structure to perform non-local feature aggregation, the structure of each graph has to be analyzed separately. For instance, in order to perform hierarchical feature extraction, a graph clustering method can be applied to each graph individually [29, 7, 34]. This strategy, however, can be computationally prohibitive: the complexity of graph clustering/pooling algorithms mostly scales with $\mathcal{O}(n^2)$, where $n$ denotes the number of nodes.
In this paper, a new approach is presented in which the extracted local feature vectors are transformed into a latent representation which resides on a Latent Fixed Data Structure (LFDS). Different input graphs are thus transformed into different signals over the same LFDS. Several LFDSs are introduced, including predefined structures and data driven structures. Our contributions can be summarized as follows:
We introduce an end-to-end differentiable global feature aggregation approach in which a set of unordered feature vectors is sorted/distributed on an LFDS, and the structure of the LFDS is leveraged to employ a CNN or GNN to aggregate the representation of the data on the LFDS.
Several predefined LFDSs, including the 3D tensor (image), loop-graph, and sequence-graph, are introduced. In addition, we propose to learn the structure of the LFDS in a data driven way, and the presented LFDSs are used to design multiple new global feature aggregation methods. While the computational complexity of the proposed methods is linear with the order (number of nodes) of the input graphs, they achieve competitive or better results.
Notation: We use bold-face upper-case letters to denote matrices and bold-face lower-case letters to denote vectors. Given a matrix $\mathbf{X}$, $\mathbf{x}_i$ denotes the $i$-th row of $\mathbf{X}$. The inner product of two row or column vectors $\mathbf{a}$, $\mathbf{b}$ is denoted by $\langle \mathbf{a}, \mathbf{b} \rangle$. A graph with $n$ nodes is represented by two matrices $\mathbf{A} \in \mathbb{R}^{n \times n}$ and $\mathbf{X} \in \mathbb{R}^{n \times d}$, where $\mathbf{A}$ is the adjacency matrix and $\mathbf{X}$ is the matrix of the $d$-dimensional feature vectors of the nodes. The operation $\mathbf{a} \leftarrow \mathbf{b}$ means that the content of $\mathbf{a}$ is set equal to the content of $\mathbf{b}$.
II Related Work
The main trend is to adopt the structure of CNNs, and most of the existing graph convolution layers can be loosely divided into two main subsets: spatial convolution layers and spectral convolution layers. The spectral methods are based on the generalization of spectral filtering in signal processing to signals supported on graphs. The signal residing on the graph is transformed into the spectral domain using a graph Fourier transform, filtered using a learnable vector of filter weights, and transformed back to the nodal domain via the inverse Fourier transform [3, 5, 12, 20]. Upon defining a Fourier basis $\mathbf{U}$, a spectral convolution layer can be written as
$$\mathbf{X}^{\prime} = \phi\left(\mathbf{U} \, \mathrm{diag}(\mathbf{w}) \, \mathbf{U}^{T} \mathbf{X}\right), \qquad (1)$$
where $\mathrm{diag}(\mathbf{w})$ is a diagonal matrix whose diagonal values are the parameters of the filter and $\phi$ is an element-wise non-linear function. The basis $\mathbf{U}$ can be the eigenvectors of the Laplacian matrix or of the graph adjacency matrix, or of their normalized versions. A subsequent approach promotes the spatial locality of the spectral filters by designing smooth spectral filters. The major drawback of the spectral methods is their non-generalizability to data residing on multiple graphs, which is due to the dependency on the basis of the graph Laplacian.
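To make the spectral filtering above concrete, the following is a minimal numpy sketch of a spectral convolution layer in the style of (1); the function and variable names are illustrative, not from any specific library, and the Fourier basis is taken from the Laplacian of a toy path graph.

```python
import numpy as np

def spectral_conv(X, U, w, phi=np.tanh):
    """X: (n, d) node signals, U: (n, n) Fourier basis, w: (n,) filter weights."""
    X_hat = U.T @ X               # graph Fourier transform
    X_filt = np.diag(w) @ X_hat   # per-frequency filtering
    return phi(U @ X_filt)        # inverse transform + nonlinearity

# Toy example: basis from the Laplacian of a 4-node path graph.
A = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
L = np.diag(A.sum(1)) - A
_, U = np.linalg.eigh(L)          # eigenvectors form an orthonormal basis
X = np.random.randn(4, 2)
Y = spectral_conv(X, U, np.ones(4), phi=lambda z: z)
# With an all-ones filter and identity nonlinearity, the layer is the identity map.
```

This also illustrates the drawback mentioned above: the layer is tied to the basis $\mathbf{U}$ of one particular graph.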
In contrast to spectral graph convolution, spatial graph convolution performs the convolution directly in the nodal domain and can be generalized across graphs. Similar to the convolution layer of a CNN, the spatial convolution layer aggregates the feature vector of each node with those of its neighbouring nodes [23, 35, 22, 29, 27, 32]. If the sum function is used to aggregate the local feature vectors, a simple spatial convolution layer can be written as
$$\mathbf{X}^{\prime} = \phi\left(\mathbf{X}\mathbf{W}_1 + \mathbf{A}\mathbf{X}\mathbf{W}_2\right), \qquad (2)$$
where the weight matrix $\mathbf{W}_1$ transforms the feature vector of the given node and $\mathbf{W}_2$ transforms the feature vectors of the neighbouring nodes.
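A sum-aggregation spatial convolution of the form in (2) reduces to two matrix products; the sketch below (with illustrative names, assuming ReLU as the nonlinearity) shows the self term and the neighbour-aggregation term side by side.

```python
import numpy as np

def spatial_conv(X, A, W1, W2, phi=lambda z: np.maximum(z, 0.0)):
    """X: (n, d) features, A: (n, n) adjacency, W1/W2: (d, d_out) weights."""
    return phi(X @ W1 + A @ X @ W2)   # self term + sum over neighbours

n, d, d_out = 5, 3, 4
A = (np.random.rand(n, n) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T        # symmetric adjacency, zero diagonal
X = np.random.randn(n, d)
W1, W2 = np.random.randn(d, d_out), np.random.randn(d, d_out)
H = spatial_conv(X, A, W1, W2)        # (n, d_out); nonnegative after ReLU
```

Because the adjacency matrix enters only as a multiplication, the same weights apply to any input graph, which is the generalizability advantage noted above.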
In order to obtain a global representation of the graph, the local feature vectors obtained by the convolution layers should be aggregated into a final feature vector. The element-wise max/mean function is invariant to any permutation of its input and is widely used to aggregate all the local feature vectors. Inspired by the hierarchical feature extraction in CNNs, tools have been developed to perform non-local feature aggregation [6, 9, 19, 35, 34, 25, 26, 36]. In the Sort-Pooling approach, the nodes are ranked and the ranking is used to build a sequence from a subset of the nodes; a 1-dimensional CNN is then applied to the sequence of nodes to perform non-local feature aggregation. However, the way this method builds the sequence of nodes is not data driven, and there is no mechanism to ensure that relevant feature vectors are placed close to each other in the sequence. The differentiable graph down-sampling method Diff-Pool learns an assignment matrix to downsize the input graph to a latent graph, and the adjacency matrix of the latent graph is computed by down-sizing the adjacency matrix of the input graph using the computed assignment matrix. The approach proposed in [9, 19] utilizes a learnable query to select a set of nodes and downsize the input graph.
III Global Feature Aggregation Using Latent Fixed Structures
Suppose matrix $\mathbf{X}$ contains the input (potentially unordered) feature vectors. In this paper, $\mathbf{X}$ is the matrix of local feature vectors which we need to aggregate into a final representation vector. In the proposed approach, we distribute the input feature vectors on an LFDS. The LFDS is chosen such that a secondary neural network can process the feature vectors on the LFDS and aggregate them into a final feature vector. For instance, assume the LFDS is a 3D tensor (image) and suppose that we properly distribute or sort the rows of $\mathbf{X}$ into the $d$-dimensional rows of a 3D tensor $\mathbf{T}$, i.e., each row of $\mathbf{X}$ is assigned to one or several pixels of $\mathbf{T}$ (by proper sorting/distributing we mean that the sorting/distributing algorithm respects the structure of the LFDS and places relevant feature vectors close to each other). Subsequently, we can apply a CNN to the tensor $\mathbf{T}$ to aggregate the projection of the feature vectors on the LFDS and obtain the final representation vector. Figure 1 exhibits an overview of the presented approach and Figure 2 shows the proposed approach when the LFDS is a loop-graph. In the following sections, the details of these steps are explained. First, the predefined and the data driven LFDSs are introduced. Next, the sorting/distributing method is presented.
III-A Latent Fixed Data Structures (LFDSs)
In recent years, significant progress has been made in the design of CNNs and GNNs. Therefore, we choose our LFDSs such that a CNN or a GNN can process the data on the LFDS. We propose two sets of LFDSs: predefined LFDSs and data driven LFDSs. The structure of a predefined LFDS is chosen a priori. For instance, if the LFDS is a graph, all the connections between nodes are chosen by the human designer. In contrast, with the data driven LFDSs, some details of the LFDS are learned in a data driven way. For instance, if the LFDS is a data driven graph, the adjacency matrix of the LFDS is defined as a parameter of the neural network to be learned during the training process.
III-A1 Predefined Structures
The structure of the predefined LFDSs is fixed and the neural network learns to perform feature aggregation over it. In this paper, we propose the following predefined LFDSs.
3D Tensor (Image): A 2D CNN is a powerful neural architecture which uses a sequence of local feature aggregation layers and pooling layers to perform hierarchical feature extraction. If we choose the LFDS to be a 3D tensor, a 2D CNN can be utilized to process the data on the LFDS. Suppose the size of the 3D tensor is $h \times w \times d$, where $hw$ is the number of pixels of the corresponding image and $d$ is the length of the feature vectors. Define tensor $\mathbf{T}$ as the projection of $\mathbf{X}$ onto the LFDS (the next section presents methods which distribute/project the input feature vectors onto the LFDS and obtain $\mathbf{T}$). The CNN is applied to tensor $\mathbf{T}$ to aggregate its feature vectors into a final representation vector in a data driven way.
Array: This LFDS paves the way to use a 1D CNN to process the data on the LFDS, whose computational complexity and memory requirements are less than those of a 2D CNN. Define $\mathbf{T} \in \mathbb{R}^{m \times d}$, where $m$ is the length of the array corresponding to the LFDS. The 1D CNN is applied to $\mathbf{T}$ to aggregate its feature vectors into a final feature vector.
Sequence-Graph: The structure of this LFDS is similar to Array, but we consider the array as a sequence graph, which means that each row of $\mathbf{T}$ corresponds to one node of a sequence graph and a GNN (instead of a 1D CNN) is used to process $\mathbf{T}$. Define $\mathbf{A}_l$ as the adjacency matrix of the LFDS. The spatial convolution layer of the GNN which is applied to $\mathbf{T}$ can be written as
$$\mathbf{T}^{\prime} = \phi\left(\mathbf{T}\mathbf{W}_1 + \mathbf{A}_l \mathbf{T}\mathbf{W}_2\right), \qquad (3)$$
where $\mathbf{W}_1$ and $\mathbf{W}_2$ are the parameters of the convolution layer. The matrix $\mathbf{A}_l$ is given to the neural network as a fixed matrix. As a simple example, if $m = 4$ then $\mathbf{A}_l = \texttt{np.eye(4, k=1) + np.eye(4, k=-1)}$ (in python notation).
Loop-Graph: This LFDS is similar to Sequence-Graph, but it is assumed that the nodes of the latent graph form a loop. Similarly, with this LFDS a latent GNN is used to perform feature aggregation on $\mathbf{T}$, and (3) describes the spatial graph convolution which is applied to $\mathbf{T}$. The matrix $\mathbf{A}_l$ represents the adjacency matrix of a loop and is given to the neural network as a fixed matrix.
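The fixed adjacency matrices of the two predefined graph LFDSs above can be sketched in a few lines of numpy (function names are illustrative): a sequence graph links consecutive nodes, and a loop graph additionally links the last node back to the first.

```python
import numpy as np

def sequence_adjacency(m):
    # Ones on the super- and sub-diagonal: node i is linked to i-1 and i+1.
    return np.eye(m, k=1) + np.eye(m, k=-1)

def loop_adjacency(m):
    A = sequence_adjacency(m)
    A[0, -1] = A[-1, 0] = 1.0   # close the loop
    return A

A_seq = sequence_adjacency(4)
A_loop = loop_adjacency(4)
# In the loop graph, every node has exactly two neighbours.
```

Either matrix can be passed as the fixed $\mathbf{A}_l$ in a layer of the form (3).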
III-A2 Data Driven Structures
In this section, we introduce an LFDS whose structure is learned in a data driven way. A graph is a flexible and descriptive structure, and an important feature of a graph is that all the information about its structure lies in its adjacency matrix. This is a desirable property, because we can simply define the adjacency matrix of the latent graph as a parameter of the neural network, to be learned along with the other parameters during the training process. Notably, the structure of this LFDS is fixed for all the input graphs, and both spatial convolution and spectral convolution can be utilized to perform feature aggregation over the latent graph. We present two methods based on the choice of the convolution layer.
Parametric Graph with Spatial Convolution: If spatial convolution is used in the latent GNN (which is applied to $\mathbf{T}$), then we can define $\mathbf{A}_l$ used in (3) as a parameter of the neural network. Accordingly, the neural network learns the connections between the nodes of the LFDS in a data driven way. However, we have to ensure that the learned adjacency matrix satisfies the properties of an adjacency matrix of an undirected graph, namely symmetry, non-negativity of the entries, and zero entries on the diagonal. We can achieve these properties by imposing an appropriate structure on $\mathbf{A}_l$. In particular, instead of the adjacency matrix $\mathbf{A}_l$, we define $\mathbf{B}$ as the neural network parameter, which is used to construct $\mathbf{A}_l$ as
$$\mathbf{A}_l = \sigma_s\left(\mathbf{B} + \mathbf{B}^{T}\right) \odot \left(\mathbf{1}\mathbf{1}^{T} - \mathbf{I}\right),$$
where $\sigma_s$ denotes the sigmoid activation function and $\odot$ is the element-wise product. Accordingly, $\mathbf{A}_l$ satisfies the required conditions by construction.
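The construction above can be checked directly: mapping a free parameter matrix through a symmetrizing sigmoid and zeroing the diagonal always yields a valid undirected adjacency matrix. A minimal sketch (names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def build_adjacency(B):
    # Symmetry from B + B^T, entries in (0, 1) from the sigmoid,
    # and no self-loops from zeroing the diagonal.
    A = sigmoid(B + B.T)
    np.fill_diagonal(A, 0.0)
    return A

B = np.random.randn(6, 6)   # unconstrained network parameter
A = build_adjacency(B)
```

Because gradients flow through the sigmoid into $\mathbf{B}$, the optimizer can learn the latent graph with ordinary backpropagation.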
Parametric Graph with Spectral Convolution: If spectral convolution is used to perform feature aggregation on the latent graph, we need the Fourier basis of the latent graph. A simple way to avoid computing this basis is to define the Fourier basis itself as a parameter of the neural network. Accordingly, if spectral convolution layers are used in the latent GNN which is applied to $\mathbf{T}$, we no longer need to define $\mathbf{A}_l$ as a parameter of the neural network; instead, we directly define $\mathbf{U}_l$, the Fourier basis of the latent graph, as a parameter of the neural network. The spectral convolution applied to the latent graph can then be written as
$$\mathbf{T}^{\prime} = \phi\left(\mathbf{U}_l \, \mathrm{diag}(\mathbf{w}) \, \mathbf{U}_l^{T} \mathbf{T}\right),$$
where both $\mathbf{U}_l$ and $\mathbf{w}$ are parameters of the neural network. However, we need to ensure that the learned $\mathbf{U}_l$ is orthonormal. Accordingly, if Parametric Graph with Spectral Convolution is used as the LFDS, we add $\lambda \left\| \mathbf{U}_l^{T}\mathbf{U}_l - \mathbf{I} \right\|_F^2$ to the final cost function which is used to train the neural network, where $\lambda$ is a regularization coefficient. This regularizer encourages the optimizer to find $\mathbf{U}_l$ on the manifold of orthonormal matrices.
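The orthonormality penalty described above is a one-line computation; the sketch below (with illustrative names and an assumed coefficient) shows that it vanishes exactly on the manifold of orthonormal matrices.

```python
import numpy as np

def orthonormality_penalty(U, lam=1e-3):
    # lam * ||U^T U - I||_F^2: zero iff the columns of U are orthonormal.
    m = U.shape[1]
    R = U.T @ U - np.eye(m)
    return lam * np.sum(R ** 2)

# An exactly orthonormal matrix via QR decomposition.
Q, _ = np.linalg.qr(np.random.randn(8, 8))
p = orthonormality_penalty(Q)   # ~0 on the orthonormal manifold
```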
III-B Sorting/Distributing the Input Data on the LFDS
In Section III-A, it was assumed that there is an algorithm which sorts/distributes the input feature vectors on the LFDS and forms $\mathbf{T}$ (the data on the LFDS). In this section, an algorithm is presented which is invariant to the permutation of the input feature vectors. In this algorithm, we define $\mathbf{W} \in \mathbb{R}^{m \times d}$ as a parameter of the neural network, where $m$ is the number of nodes/elements of the LFDS. Each row of $\mathbf{W}$ corresponds to one node/element of the LFDS and is used for sorting/distributing the input feature vectors on the LFDS. Note that if the LFDS is a 3D tensor as described in Section III-A, then $m$ equals the number of pixels of the corresponding image.
Without loss of generality, in this section it is assumed that the LFDS is a graph, which means that $\mathbf{T} \in \mathbb{R}^{m \times d}$ and the LFDS is composed of $m$ nodes. The presented algorithm is similarly applicable to all the other LFDSs. Each row of the matrix/tensor $\mathbf{W}$ can be considered a learnable query, and learnable queries have been used in previous works for feature aggregation. In this paper, we impose a (predefined or data driven) structure on the queries using the LFDS, and the LFDS is utilized to aggregate the feature vectors obtained using the queries.
Projecting the Data on the LFDS. Each row of $\mathbf{W}$ represents one node of the LFDS. We use the rows of $\mathbf{W}$ to measure the relevance of each input feature vector to the nodes of the LFDS. Specifically, define vector $\mathbf{s}_i \in \mathbb{R}^{m}$ corresponding to $\mathbf{x}_i$ (the $i$-th row of $\mathbf{X}$) as
$$\mathbf{s}_i(j) = \mathrm{sim}\left(\mathbf{x}_i, \mathbf{w}_j\right),$$
where $\mathbf{w}_j$ is the $j$-th row of $\mathbf{W}$, $\mathbf{s}_i(j)$ is the $j$-th element of vector $\mathbf{s}_i$, and $\mathrm{sim}(\cdot, \cdot)$ measures the similarity between its two arguments; in this paper we use the inner product. Vector $\mathbf{s}_i$ represents the similarity between $\mathbf{x}_i$ and the nodes of the LFDS. Accordingly, we utilize $\mathbf{s}_i$ to distribute $\mathbf{x}_i$ on the LFDS by defining matrices $\{\mathbf{T}_i\}_{i=1}^{n}$ as
$$\mathbf{T}_i = \mathrm{softmax}\left(\mathbf{s}_i\right)^{T} \mathbf{x}_i \,,$$
which are used to compute $\mathbf{T}$ as
$$\mathbf{T} = \sum_{i=1}^{n} \mathbf{T}_i \,.$$
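The projection step described above — inner products between each input feature vector and the $m$ learnable node vectors, normalized into soft assignments that weight a sum over the inputs — can be sketched as follows. The softmax normalization and the matrix form are our assumptions for the sketch; names are illustrative.

```python
import numpy as np

def softmax(Z, axis=-1):
    Z = Z - Z.max(axis=axis, keepdims=True)   # numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=axis, keepdims=True)

def project_onto_lfds(X, W):
    """X: (n, d) inputs, W: (m, d) node vectors -> T: (m, d) data on the LFDS."""
    S = X @ W.T               # (n, m) inner-product similarities
    P = softmax(S, axis=1)    # each input soft-assigned over the m nodes
    return P.T @ X            # weighted sum over inputs: order-independent

n, m, d = 10, 4, 3
X, W = np.random.randn(n, d), np.random.randn(m, d)
T = project_onto_lfds(X, W)
# Summing over the inputs makes T invariant to any permutation of the rows of X.
```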
In the proposed approach, we use the $d$-dimensional rows of $\mathbf{W}$ as representation vectors for the elements of the LFDS. The matrix/tensor $\mathbf{W}$ is learned during the training process along with the other parameters of the neural network. Since the $d$-dimensional rows of $\mathbf{W}$ represent the elements of the LFDS, we expect them to follow the structure of the LFDS. For instance, suppose the LFDS is an image and $\mathbf{w}_i$ and $\mathbf{w}_j$ correspond to the $i$-th and the $j$-th pixels of the LFDS, respectively. We expect $\mathbf{w}_i$ and $\mathbf{w}_j$ to be coherent with each other in feature space if the $i$-th and the $j$-th pixels are close, and vice versa. The optimizer learns the $d$-dimensional feature vectors of $\mathbf{W}$ such that the GNN/CNN can successfully perform feature aggregation on the LFDS, and this happens when the coherency between the $d$-dimensional feature vectors of $\mathbf{W}$ follows the structure of the LFDS. In other words, the presence of the latent GNN/CNN automatically ties the structure of the LFDS to the distribution of the rows of $\mathbf{W}$.
To ensure that the neural network uses all the elements of the LFDS and does not end up relying on only a few of them, we employ an element-dropout technique during the training process, i.e., in each training iteration, a random subset of the $d$-dimensional rows of $\mathbf{W}$ is set to zero.
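A minimal sketch of this element dropout, assuming a per-node drop rate and no rescaling (both assumptions of the sketch, not stated in the text):

```python
import numpy as np

def element_dropout(W, rate=0.4, rng=np.random.default_rng(0)):
    # Zero out a random subset of the rows of W (nodes of the LFDS),
    # so the network cannot collapse onto a few elements.
    keep = rng.random(W.shape[0]) >= rate
    return W * keep[:, None]

W = np.ones((10, 3))
W_drop = element_dropout(W)
# Each row of W_drop is either untouched or exactly zero.
```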
The proposed approach utilizes the feature vectors in $\mathbf{W}$ to project the information in $\mathbf{X}$ onto the elements of the LFDS. There are other global feature aggregation methods, such as global max-pooling, which are computationally efficient, and we can utilize them inside the proposed method. For instance, suppose we want to include the information obtained by global max-pooling in the proposed method. Define vector $\mathbf{v}$ as the element-wise max-pooling of the rows of $\mathbf{X}$. Each row of $\mathbf{T}$ is a representation vector for the input data. Thus, we can simply aggregate $\mathbf{v}$ with each row of $\mathbf{T}$. Using this technique, we can include the information obtained by any other global feature aggregation method in $\mathbf{T}$. In the presented experiments, we aggregated $\mathbf{v}$ with all the rows of $\mathbf{T}$. Specifically, we updated each row of $\mathbf{T}$ as $\mathbf{t}_i \leftarrow \alpha \, \mathbf{t}_i + \beta \, \mathbf{v}$, where $\alpha$ and $\beta$ are two positive coefficients which were defined as parameters of the neural network, to let the neural network learn the best linear combination.
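The max-pooling augmentation above is a broadcasted update; a short sketch (with illustrative names and fixed coefficients standing in for the learned ones):

```python
import numpy as np

def mix_global_maxpool(T, X, a=1.0, b=0.5):
    # Mix the global max-pooled vector v into every row of T with
    # positive coefficients a and b (learned parameters in the paper).
    v = X.max(axis=0)        # element-wise max over the input rows
    return a * T + b * v     # broadcasting adds b*v to every row of T

T = np.zeros((4, 3))
X = np.array([[1.0, -2.0, 3.0], [0.0, 5.0, -1.0]])
T_new = mix_global_maxpool(T, X, a=1.0, b=1.0)
# Here every row of T_new equals the max-pooled vector [1, 5, 3].
```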
Table 1: Classification accuracy on the five datasets.
|Max Pooling|49.2 ± 9.8|80.5 ± 3.1|83.3 ± 3.2|69.2 ± 4.5|71.2 ± 2.5|
|Graph-Loop|55.2 ± 2.2|80.9 ± 2.9|83.5 ± 3.5|74.3 ± 3.8|72.4 ± 3.4|
|Graph-Sequence|53.7 ± 10.9|80.9 ± 3.0|83.7 ± 2.9|72.0 ± 4.0|72.2 ± 2.9|
|Data Driven (spatial)|51.4 ± 10.6|80.4 ± 3.3|84.6 ± 2.8|72.0 ± 4.0|72.5 ± 4.4|
|Data Driven (spectral)|48.0 ± 7.8|80.7 ± 3.2|83.4 ± 2.8|71.7 ± 4.1|73.0 ± 3.1|
|3D Tensor (image)|55.0 ± 1.9|81.4 ± 3.4|84.4 ± 2.8|71.0 ± 5.3|72.7 ± 1.3|
|2D Tensor (array)|52.1 ± 10.6|80.6 ± 3.5|83.7 ± 3.8|71.8 ± 4.0|73.0 ± 2.4|
|Sort-Pooling|49.7 ± 2.3|80.5 ± 4.5|82.4 ± 2.5|63.2 ± 3.7|67.7 ± 3.0|
|Diff-Pool|42.5 ± 16.7|80.4 ± 3.4|84.4 ± 2.7|70.3 ± 5.2|71.8 ± 3.3|
|Rank-PooL|39.8 ± 11.8|80.7 ± 3.1|83.7 ± 3.8|67.5 ± 7.7|71.5 ± 2.8|
IV Numerical Experiments
The proposed methods are compared with some of the existing graph feature aggregation methods, including Diff-Pool, Sort-Pooling, and Rank-PooL [9, 19]. Following the conventional settings, we perform 10-fold cross validation: 9 folds for training and 1 fold for testing. We study the performance of the different feature aggregation methods on 5 datasets. The utilized datasets include 4 benchmark graph classification datasets, and we refer the reader to the benchmark collections and the references therein for more information about them. We also created a graph classification dataset using point cloud data. In this new dataset (PC Graphs), the GNNs are trained to classify point clouds solely based on their nearest neighbour graphs. The location of the points is not provided, and the GNN is required to distinguish point clouds in different classes based on the differences between their nearest neighbour graphs. The dataset is composed of nearly 4000 graphs and the graphs form 8 classes. Each graph corresponds to the nearest neighbour graph of a point cloud which is composed of 150 points. We use the point clouds in the ShapeNet dataset and the chosen shapes are: Table, Airplane, Car, Guitar, Knife, Lamp, Chair, and Laptop. In the following, we describe the data preprocessing steps and the architecture of the neural networks.
The input to the neural networks. We follow the approach presented in [25, 26] to leverage both the node labels/attributes and the node embedding vectors. It was shown in [25, 26] that embedding vectors make the neural network more aware of the topological structure of the graph. The DeepWalk graph embedding method is used to embed the graphs, and the dimension of the embedding vectors is fixed. If $n$ is the size of the graph, the length of the random walks is determined as a function of $n$.
The structure of the base neural network. In the presented experiments, the GNN equipped with the spatial representation proposed in [25, 26] is used to extract the matrix of local feature vectors $\mathbf{X}$. The implemented shared neural network which extracts the local feature vectors is composed of three spatial convolution layers, and each convolution layer contains two weight matrices as in (2). The dimensionality of the output of all the convolution layers is equal to 64. Each convolution layer is equipped with batch-normalization, and ReLU is used as the element-wise nonlinear function. The output of the last convolution layer is used as the input of the non-local feature aggregation methods (except for the Sort-Pooling method, for which the concatenation of the outputs of all three convolution layers is used).
The final representation vector of the input graph. Define $\mathbf{H}$ as the concatenation of the outputs of all the spatial convolution layers of the base neural network, and define $\mathbf{h}$ as the element-wise max-pooling of the rows of $\mathbf{H}$. In addition, define $\mathbf{T}^{\prime}$ as the output of the non-local aggregation method and define $\mathbf{t}$ as the element-wise max-pooling of the rows of $\mathbf{T}^{\prime}$. The final representation of the graph for all the methods (except Sort-Pooling) is obtained as the concatenation of vectors $\mathbf{h}$ and $\mathbf{t}$. For the Sort-Pooling method, the output of the pooling method is used as the representation of the graph.
The structure of the proposed methods. In the following, we describe the architecture of the proposed approach with 2 LFDSs. The implemented architectures with the other LFDSs are similar.
Proposed method with the 3D tensor as the LFDS: We define $\mathbf{T}$ as a tensor with 64 feature vectors. Two CNN convolution layers are used to process $\mathbf{T}$, with the same kernel size in both layers. Define $\mathbf{t}_1$ as the aggregation (using max-pooling) of all the feature vectors of $\mathbf{T}$ after the first convolution layer, and define $\mathbf{t}_2$ as the aggregation (using max-pooling) of all the feature vectors of $\mathbf{T}$ after the second convolution layer. The vector $\mathbf{t}$ is defined as the concatenation of $\mathbf{t}_1$ and $\mathbf{t}_2$. The final representation of the graph is defined as the concatenation of $\mathbf{h}$ and $\mathbf{t}$. The dimensionality of the final representation vector is 320 (192 + 128 = 320).
Proposed method with a graph as the LFDS: First suppose that the latent GCN, which is used to process $\mathbf{T}$, is composed of spatial convolution layers. The latent GCN is composed of two spatial convolution layers, and the functionality of each convolution layer can be written as (3), where $\phi$ is the ReLU function followed by batch-normalization. If the LFDS is a data driven graph, $\mathbf{A}_l$ is defined as a parameter of the neural network. Similar to the proposed method with the 3D tensor, the outputs of both convolution layers are used to build $\mathbf{t}$, and the final representation vector is built similarly.
If the latent GCN is made of spectral graph convolution layers, the functionality of each implemented spectral convolution layer can be written as
$$\mathbf{T}^{\prime} = \phi\left(\mathbf{U}_l \, \mathrm{diag}(\mathbf{w}) \, \mathbf{U}_l^{T} \mathbf{T} \mathbf{W}_s\right),$$
where $\mathbf{U}_l$ is the Fourier basis of the latent graph, the diagonal matrix $\mathrm{diag}(\mathbf{w})$ is the weight matrix of the spectral convolution layer, $\mathbf{W}_s$ performs the feature transformation, and $\phi$ represents the ReLU function followed by batch-normalization.
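The implemented latent spectral layer combines a learned basis, a diagonal filter, and a feature transformation; a hedged numpy sketch (illustrative names, ReLU standing in for ReLU plus batch-normalization):

```python
import numpy as np

def latent_spectral_layer(T, U, lam, W):
    """T: (m, d) data on the LFDS, U: (m, m) learned basis,
    lam: (m,) diagonal filter weights, W: (d, d_out) feature transform."""
    # Spectral filtering (U diag(lam) U^T T) followed by a feature transform.
    return np.maximum(U @ (lam[:, None] * (U.T @ T)) @ W, 0.0)

m, d, d_out = 6, 3, 4
Q, _ = np.linalg.qr(np.random.randn(m, m))   # an orthonormal stand-in basis
T = np.random.randn(m, d)
out = latent_spectral_layer(T, Q, np.ones(m), np.eye(3, 4))
# With an all-ones filter and an identity-padding W, this is ReLU of T
# padded with a zero column.
```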
The final classifier: A 3-layer fully connected neural network transforms the final vector representation of the graph into a $c$-dimensional vector, where $c$ is the number of classes.
In order to avoid over-fitting, we utilize different dropout techniques: dropout on the final fully connected layers with probability 0.5, dropping out each node feature vector with probability 0.2, and random dropout of the elements of the LFDS with probability 0.4 (see Section III-B). The cross entropy function is used as the classification loss, and all the neural networks are trained similarly using the Adam optimizer. The learning rate is initialized at a fixed value and is reduced during the training process.
Diff-Pool: This method was implemented according to the instructions in the original paper. Two spatial convolution layers were placed after the down-sizing step. The final representation of the graph was obtained similarly to the procedure used in the proposed approach.
Rank-PooL [9, 19]: Similar to the other pooling methods, one down-sampling layer was used and two spatial convolution layers were implemented after the down-sizing step. The final representation of the graph was obtained similarly to the procedure used in the proposed method.
Table 1 reports the classification accuracy on all the datasets. One observation is that on most of the datasets, the proposed methods outperform the simple GNN with max-pooling. The main reason for the higher performance is that the LFDS used in the proposed methods paves the way for the neural network to aggregate the extracted local feature vectors in a data driven way. In addition, on most of the datasets the presented methods outperform the Sort-Pooling method, and the main reason is that the way Sort-Pooling sorts the nodes into a one-dimensional array does not necessarily put relevant nodes close to each other. In contrast, in the proposed approach, the extracted local feature vectors are ordered/distributed on the LFDS using the data driven feature vectors, and the presence of the latent GNN/CNN ties the rows of $\mathbf{W}$ to the structure of the LFDS.
V Future Work
In the proposed approach, each row of $\mathbf{W}$ corresponds to an element/node of the LFDS, and the similarity between the rows of $\mathbf{W}$ and the rows of $\mathbf{X}$ is used to compute $\mathbf{T}$. One might argue that a fixed set of feature vectors might not be diverse enough to work for all the graphs in the dataset, and that the rows of $\mathbf{W}$ should change conditioned on the input data. A possible extension of the proposed approach is an algorithm in which $\mathbf{W}$ is generated by a side neural network whose input is the given graph. For instance, define $\mathbf{v}$ as the max-pooling of the rows of $\mathbf{X}$, and assume $g(\cdot)$ is a function which maps $\mathbf{v}$ to an $m \times d$ matrix. The output of $g$ can be used as the weight matrix, which ensures that the utilized feature vectors are conditioned on the input graph. A possible scenario is to define a set of candidate weight matrices $\{\mathbf{W}_k\}_{k=1}^{K}$ such that the final weight matrix is computed as a weighted combination of them, i.e., $\mathbf{W} = \sum_{k=1}^{K} \mathbf{c}(k) \, \mathbf{W}_k$, where vector $\mathbf{c}$ is computed by transforming $\mathbf{v}$ using a fully connected neural network.
An end-to-end scalable framework for non-local feature aggregation over graphs and deep analysis of unordered data was proposed. The proposed approach projects the unordered feature vectors onto a Latent Fixed Data Structure (LFDS), and the structure of the LFDS is used to aggregate the projected local feature vectors. It has been shown that the proposed approach can be used to design several new feature aggregation methods. We have introduced multiple structures for the LFDS, including graph, tensor, and array. It was shown that the LFDS can be predefined, and that it can also be a learnable graph. If the LFDS is data driven (a learnable graph), the adjacency matrix of the latent graph is defined as a parameter of the neural network. Overall, the presented experiments show that the proposed methods achieve competitive performance.
-  James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1993–2001, Barcelona, Spain, 2016.
-  Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: Going beyond euclidean data. IEEE Signal Process. Mag., 34(4):18–42, 2017.
-  Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), Banff, Canada, 2014.
-  Angel X. Chang, Thomas A. Funkhouser, Leonidas J. Guibas, Pat Hanrahan, Qi-Xing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015.
-  Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems (NIPS), pages 3837–3845, Barcelona, Spain, 2016.
-  David Duvenaud, Dougal Maclaurin, Jorge Aguilera-Iparraguirre, Rafael Gómez-Bombarelli, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems (NIPS), pages 2224–2232, Montreal, Canada, 2015.
-  Matthias Fey, Jan Eric Lenssen, Frank Weichert, and Heinrich Müller. SplineCNN: Fast geometric deep learning with continuous b-spline kernels. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 869–877, Salt Lake City, UT, 2018.
-  Alex Fout, Jonathon Byrd, Basir Shariat, and Asa Ben-Hur. Protein interface prediction using graph convolutional networks. In Advances in Neural Information Processing Systems (NIPS), pages 6530–6539, Long Beach, CA, 2017.
-  Hongyang Gao and Shuiwang Ji. Graph U-Nets. In Proceedings of the 36th International Conference on Machine Learning (ICML), pages 2083–2092, Long Beach, CA, 2019.
-  Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 1263–1272, Sydney, Australia, 2017.
-  William L. Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. IEEE Data Eng. Bull., 40(3):52–74, 2017.
-  Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
-  Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML), pages 448–456, Lille, France, 2015.
-  Kristian Kersting, Nils M. Kriege, Christopher Morris, Petra Mutzel, and Marion Neumann. Benchmark data sets for graph kernels, 2016.
-  Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, 2015.
-  Thomas N. Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard S. Zemel. Neural relational inference for interacting systems. In Proceedings of the 35th International Conference on Machine Learning (ICML), pages 2693–2702, Stockholmsmässan, Stockholm, Sweden, 2018.
-  Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In Proceedings of the 5th International Conference on Learning Representations (ICLR), Toulon, France, 2017.
-  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1106–1114, Lake Tahoe, NV, 2012.
-  Junhyun Lee, Inyeop Lee, and Jaewoo Kang. Self-attention graph pooling. In Proceedings of the 36th International Conference on Machine Learning (ICML), pages 3734–3743, Long Beach, CA, 2019.
-  Ron Levie, Federico Monti, Xavier Bresson, and Michael M. Bronstein. CayleyNets: Graph convolutional neural networks with complex rational spectral filters. IEEE Trans. Signal Process., 67(1):97–109, 2019.
-  Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. Gated graph sequence neural networks. In Proceedings of the 4th International Conference on Learning Representations (ICLR), San Juan, Puerto Rico, 2016.
-  Dang Nguyen, Wei Luo, Tu Dinh Nguyen, Svetha Venkatesh, and Dinh Q. Phung. Learning graph representation via frequent subgraphs. In Proceedings of the 2018 SIAM International Conference on Data Mining (SDM), pages 306–314, San Diego, CA, 2018.
-  Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In Proceedings of the 33nd International Conference on Machine Learning (ICML), pages 2014–2023, New York City, NY, 2016.
-  Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 701–710, New York, NY, 2014.
-  Mostafa Rahmani and Ping Li. Graph analysis and graph pooling in the spatial domain. arXiv preprint arXiv:1910.01589, 2019.
-  Mostafa Rahmani and Ping Li. The necessity of geometrical representation for deep graph analysis. In Proceedings of the 20th IEEE International Conference on Data Mining (ICDM), pages 1232–1237, Sorrento, Italy, 2020.
-  Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In Proceedings of the 15th International Conferenc on the Semantic Web (ESWC), volume 10843, pages 593–607, Heraklion, Crete, Greece, 2018.
-  Anshumali Shrivastava and Ping Li. A new space for comparing graphs. In Proceedings of the 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 62–71, Beijing, China, 2014.
-  Martin Simonovsky and Nikos Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 29–38, Honolulu, HI, 2017.
-  Antoine J.-P. Tixier, Giannis Nikolentzos, Polykarpos Meladianos, and Michalis Vazirgiannis. Graph classification with 2d convolutional neural networks. In Proceedings of the 28th International Conference on Artificial Neural Networks (ICANN), pages 578–593, Munich, Germany, 2019.
-  Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems (NIPS), pages 5998–6008, Long Beach, CA, 2017.
-  Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
-  Xiaolong Wang, Ross B. Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7794–7803, Salt Lake City, UT, 2018.
-  Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L. Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. In Advances in Neural Information Processing Systems (NeurIPS), pages 4805–4815, Montréal, Canada, 2018.
-  Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), pages 4438–4445, New Orleans, LA, 2018.
-  Zhen Zhang, Jiajun Bu, Martin Ester, Jianfeng Zhang, Chengwei Yao, Zhi Yu, and Can Wang. Hierarchical graph pooling with structure learning. arXiv preprint arXiv:1911.05954, 2019.