Introduction
Although Convolutional Neural Networks (CNNs) have proven supremely successful on a wide range of machine learning problems
[Hinton et al.2012, Dundar et al.2015], they generally require inputs to be tensors. For instance, images and videos are modeled as 2D and 3D tensors, respectively. However, in many real problems the data lie on irregular grids or, more generally, in non-Euclidean domains, e.g. chemical molecules, point clouds and social networks. Instead of regularly shaped tensors, such data are better structured as graphs, which can handle varying neighborhood vertex connectivity as well as non-Euclidean metrics. Under these circumstances, the stationarity and compositionality that allow kernel-based convolutions on grids are no longer satisfied. Therefore, it is necessary to reformulate the convolution operator on graph-structured data.
However, a feasible extension of CNNs from regular grids to irregular graphs is not straightforward. For simplicity of constructing convolution kernels, early graph CNNs usually assume that data is still low-dimensional [Bruna et al.2013, Henaff, Bruna, and LeCun2015], because the convolver handles nodes separately according to node degree. Their convolution kernels are over-localized and infeasible for learning hierarchical representations from complex graphs with unpredictable and flexible node connectivity, e.g. molecules and social networks.
In some cases, e.g. the classification of point clouds, the topological structure of the graph is more informative than the vertex features. Unfortunately, existing graph convolutions cannot thoroughly exploit the geometric properties of a graph, due to the difficulty of designing a parameterized spatial kernel that matches a varying number of neighbors [Shuman et al.2013]. Besides, considering the flexibility of graphs and the scale of parameters, learning a customized topology-preserving spatial kernel for every unique graph is impractical.
Beyond spatial convolution on restricted graphs, spectral networks, based on the graph Fourier transform, offer an elastic kernel
[Defferrard, Bresson, and Vandergheynst2016]. Inherited from classical CNNs, a kernel shared among samples is still assumed. Consequently, to ensure a unified dimensionality of layer outputs, the inputs have to be resized, which is also a constraint of classical CNNs. However, this kind of preprocessing of graph data may destroy the completeness of graph-oriented information. For instance, the coarsening of a molecule is hard to justify chemically, and a coarsened graph has likely lost the key substructures that differentiate the molecule from others. In Figure 1, removing any carbon atom from the graph breaks the benzene ring. It would be much better if graph CNNs could accept original data samples of diverse graph structures.
Lastly, the data we feed to graph CNNs either have an intrinsic graph structure or we construct one by clustering. In previous graph CNNs, the initial graph structure stays fixed during the training process [Bruna et al.2013]. But it is hard to evaluate whether the graphs constructed by unsupervised clustering (or from domain knowledge) are optimal for supervised learning tasks. Although supervised graph construction with fully connected networks has been proposed
[Henaff, Bruna, and LeCun2015], its dense training weights restrict the model to small graphs. Furthermore, the graph structures learned by a separate network are not guaranteed to best serve the graph convolutions. The bottlenecks of current graph CNNs include:

restricted graph degree;

identical graph structure required to be shared among inputs;

fixed graphs constructed without training;

incapability of learning from topological structures.
In this paper, we propose a novel spectral graph convolution network that feeds on original data of diverse graph structures, e.g. organic molecules that consist of different numbers of benzene rings. To allow that, instead of a shared spectral kernel, we give each individual sample in the batch a customized graph Laplacian that objectively describes its unique topology. A customized graph Laplacian leads to a customized spectral filter that combines neighbor features according to the sample's unique graph topology.
It is interesting to ask what exact graph best serves a supervised learning task. For example, chemical bonds naturally build a graph for a compound. However, it is never guaranteed that a convolver working on the intrinsic graph has extracted all meaningful features. Therefore, we train a so-called residual graph to discover the residual substructures that the intrinsic graph never includes. Moreover, to ensure that the residual graph is the best supplement for the particular task, we design a scheme to learn the residual graph while training the rest of the graph CNN.
Direct learning of the graph Laplacian costs O(N²) complexity for a graph of N vertices. Allowing unique graph topology to be preserved in training samples means learning a unique graph Laplacian per sample, which is highly costly. If we harness supervised metric learning with the Mahalanobis distance, we can reduce the parameter number to O(d²) or even O(d), assuming the metric parameters are shared across samples. As a consequence, the learning complexity becomes independent of the graph size N. In classical CNNs, back-propagation generally updates kernel weights to adjust the relationship between neighboring nodes at each feature dimension individually, and then sums up signals from all filters to construct hidden-layer activations. To grant graph CNNs a similar capability, we propose a re-parameterization on the feature domain with additional transform weights and bias. Finally, the total training parameters of the convolution layer consist of two parts: the distance metric, and the vertex feature transform and bias. Given the trained metric and the transformed feature space, the updated residual graph can be constructed.
In the experiments, we explore the proposed spectral convolution network on multiple graph-structured datasets, including chemical molecules and point clouds generated by LiDAR. The innovations of our graph CNN are summarized below:

Construct unique graph Laplacians. We construct and learn a unique residual Laplacian matrix for each individual sample in the batch, and the learned residual graph Laplacian is added onto the initial (clustered or intrinsic) one.

Learn a distance metric for graph update. By learning the optimal distance metric parameters shared among the data, the topological structures of the graphs are updated along with the training of the prediction network. The learning complexity is as cheap as O(d²), independent of the input size.

Feature embedding in convolution. The transformation of vertex features is done before convolution, connecting both intra- and inter-vertex features on the graph.

Accept flexible graph inputs. Because of 1 and 2, the proposed network can be fed data of different graph structures and sizes, unlocking restrictions on graph degree.
Related Work
Spectral Graph Convolution
The first trial of formulating an analogy of CNNs on graphs was accomplished by [Bruna et al.2013]. Particularly, the spatial convolution sums up the features from the neighborhood defined by the graph adjacency matrix A. The finite-size kernel is non-parametric but over-localized. The convolution layer was reduced to an analog of a fully connected layer with a sparse transform matrix given by A. Spatial convolution has intrinsic difficulty in matching varying local neighborhoods, so there is no unified definition of spatial filtering on graphs without strict restrictions on graph topology. Spectral graph theory [Chung1997] makes it possible to construct convolution kernels in the spectrum domain, and the spatial locality is supported by the smoothness of the spectrum multipliers. The baseline approach of this paper is built upon [Eq(3), [Defferrard, Bresson, and Vandergheynst2016]], which extended the one-hop local kernel to one that brings at most K-hop connectivity. According to the graph Fourier transform, if U is the set of graph Fourier basis vectors of L,

x̂ = Uᵀx (1)

gives the frequency components of x. [Defferrard, Bresson, and Vandergheynst2016] also utilized Chebyshev polynomials and their approximate evaluation scheme to reduce the computational cost and achieve localized filtering. [Kipf and Welling2016] showed a first-order approximation to the Chebyshev polynomials as the graph filter spectrum, which requires far fewer training parameters. Although [De Brabandere et al.2016, Simonovsky and Komodakis2017, Looks et al.2017] have started to construct customized graphs with more emphasis on topological structure, or even to unlock the constraint on input graph dimensionality, designing a more flexible graph CNN is still an open question.
Neural Networks on Molecular Graph
Predictions of the chemical properties of organic molecules were usually handled by hand-crafted features and feature embedding [Mayr et al.2016, Weiss, Torralba, and Fergus2009]. Since molecules are naturally modeled as graphs, [Duvenaud et al.2015, Wallach, Dzamba, and Heifets2015, Wu et al.2017] have made several successful trials of constructing neural networks on raw molecules to learn representations. However, due to the constraints of spatial convolution, their networks failed to make full use of the atom connectivities, which are more informative than the few bond features. More recent explorations of progressive networks, multi-task learning and low-shot or one-shot learning have also been accomplished [AltaeTran et al.2016, Gomes et al.2017]. So far, the state-of-the-art networks on molecules [Wallach, Dzamba, and Heifets2015, Duvenaud et al.2015] still use non-parameterized spatial kernels that cannot fully exploit spatial information. Besides, the topological structures can be rich sources of discriminative features.
Method
SGC-LL Layer
In order to make the spectral convolution kernel truly feasible for the diverse graph topologies of the data, we parameterize the distance metrics, so that the graph Laplacian itself becomes trainable. With the trained metrics, we dynamically construct a unique graph for input samples of different shapes and sizes. The new layer conducts convolution with localized spectral filters constructed on the adaptive graph. In the meanwhile, the graph topological structures of the samples are updated to minimize training losses. The new Spectral Graph Convolution layer with graph Laplacian Learning is named SGC-LL. In this section, we introduce the innovations of the SGC-LL layer.
Learning Graph Laplacian
Given a graph G with adjacency matrix A and degree matrix D, the normalized graph Laplacian matrix L is obtained by:

L = I − D^(−1/2) A D^(−1/2) (2)
Obviously, L determines both the node-wise connectivity and the degree of the vertices. Knowing the matrix L means knowing the topological structure of graph G. Because L
is a symmetric positive semi-definite matrix, its eigendecomposition gives a complete set of eigenvectors U
formed by {u_s}, s = 1, …, N, where N is the number of vertices. Using U as the graph Fourier basis, the graph Laplacian is diagonalized as L = U Λ Uᵀ. Similar to the Fourier transform on Euclidean spaces, the graph Fourier transform, defined as x̂ = Uᵀx, converts a graph signal x (primarily vertex features) to the spectral domain spanned by the basis U. Because the spectral representation of the graph topology is Λ, a spectral filter g_θ(Λ) indeed generates a customized convolution kernel on the graph. [Chung1997] tells us that a spectrum formed by smooth frequency components results in a localized spatial kernel. The main contribution of [Defferrard, Bresson, and Vandergheynst2016] is to formulate g_θ(Λ) as a polynomial:

g_θ(Λ) = Σ_{k=0}^{K−1} θ_k Λ^k (3)

which brings a K-localized kernel that allows any pair of vertices with shortest-path distance less than K to squeeze in. Also, far-away connectivity means less similarity and is assigned less importance, controlled by θ_k. Polynomial filters smooth the spectrum, but the parameterization by θ_k also forces a circular distribution of weights in the resulting kernel, from the central vertex to the farthest K-hop vertices. This restricts the flexibility of the kernel. What is more important is that the similarity between two vertices is essentially determined by the selected distance metric and the feature domain. For data deployed in non-Euclidean domains, the Euclidean distance is no longer guaranteed to be the optimal metric for measuring similarity. Therefore, it is possible that the similarity between connected nodes is lower than that between disconnected ones, because the graphs are suboptimal. There are two possible reasons:

The graphs were constructed in the raw feature domain before feature extraction and transformation.

The graph topology is intrinsic and merely represents physical connections, e.g. the chemical bonds in a molecule.
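To make the spectral machinery above concrete, here is a minimal NumPy sketch of Eqs. (1)-(2): the normalized Laplacian, its eigendecomposition into the graph Fourier basis, and the graph Fourier transform of a signal (the helper names are ours, not the paper's):

```python
import numpy as np

def normalized_laplacian(A):
    """Eq. (2): L = I - D^(-1/2) A D^(-1/2) for adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    nz = d > 0                       # guard isolated vertices
    d_inv_sqrt[nz] = d[nz] ** -0.5
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Toy graph: a path on 3 vertices.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = normalized_laplacian(A)
lam, U = np.linalg.eigh(L)   # L = U diag(lam) U^T; columns of U are the basis
x = np.array([1., 2., 3.])   # a graph signal, one scalar feature per vertex
x_hat = U.T @ x              # graph Fourier transform, Eq. (1)
```

Since L is symmetric positive semi-definite, `eigh` returns real non-negative eigenvalues, and applying U recovers the original signal from its frequency components.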
To unlock these restrictions, we propose a new spectral filter that parameterizes the Laplacian L instead of the coefficients θ_k. Given the original Laplacian L, the features X and the parameters Γ, the function F(L, X, Γ) outputs the spectrum of the updated Laplacian L̃, and the filter becomes:

g_θ(L̃) = Σ_{k=0}^{K−1} θ_k L̃^k, where L̃ = F(L, X, Γ) (4)
Finally, the SGC-LL layer is primarily formulated as:

Y = U g_θ(Λ̃) Uᵀ X (5)
Evaluating Eq.(5) has O(N²) complexity due to the dense matrix multiplication UᵀX. If g_θ(L̃) is approximated as a polynomial function of L̃ that can be calculated recursively, the complexity is reduced to O(K|E|) thanks to the sparsity of the Laplacian L̃. We choose the same Chebyshev expansion as [Defferrard, Bresson, and Vandergheynst2016] to compute the polynomial of order K.
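The recursive evaluation can be sketched as follows: a generic Chebyshev filter applied to a signal, where each step is a matrix-vector product (here λ_max is computed exactly for clarity; in practice it is approximated, and L would be stored sparsely):

```python
import numpy as np

def chebyshev_filter(L, x, theta):
    """Apply g(L) x = sum_k theta_k T_k(L_hat) x without an explicit
    eigendecomposition. L_hat = 2L/lam_max - I rescales the spectrum
    into [-1, 1]; the recursion T_k = 2 L_hat T_{k-1} - T_{k-2} keeps
    every step a matrix-vector product, O(K|E|) overall for sparse L."""
    lam_max = np.linalg.eigvalsh(L).max()
    L_hat = 2.0 * L / lam_max - np.eye(len(L))
    T_prev, T_curr = x, L_hat @ x
    y = theta[0] * T_prev + theta[1] * T_curr
    for t in theta[2:]:
        T_prev, T_curr = T_curr, 2.0 * L_hat @ T_curr - T_prev
        y = y + t * T_curr
    return y

L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])          # unnormalized path-graph Laplacian
x = np.array([1., 2., 3.])
y = chebyshev_filter(L, x, [1.0, 0.0, 0.0])   # theta = [1, 0, 0]: identity filter
```

With θ = [1, 0, 0] only the zeroth-order term T₀(L̂) = I contributes, so the filter returns the input signal unchanged.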
Training Metric for Graph Update
For graph-structured data, the Euclidean distance is no longer a good metric for measuring vertex similarity. Therefore, the distance metric needs to adapt to the task and the features during training. In the metric-learning literature, algorithms are divided into supervised and unsupervised learning
[Wang and Sun2015]. The optimal metric obtained in an unsupervised fashion minimizes the intra-cluster distances and maximizes the inter-cluster distances. For labeled datasets, the learning objective is to find the metric that minimizes the loss. The generalized Mahalanobis distance between x_i and x_j is formulated as:

D(x_i, x_j) = √((x_i − x_j)ᵀ M (x_i − x_j)) (6)
If M = I, Eq.(6) reduces to the Euclidean distance. In our model, the symmetric positive semi-definite matrix M = W_d W_dᵀ, where W_d is one of the trainable weights of the SGC-LL layer. W_d is the transform basis to the space in which we measure the Euclidean distance between x_i and x_j. Then, we use the distance to calculate the Gaussian kernel:

G(x_i, x_j) = exp(−D(x_i, x_j) / (2σ²)) (7)
After normalization of G, we obtain a dense adjacency matrix Â. In our model, the optimal metric is the one that builds the set of graph Laplacians minimizing the predictive losses.
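The metric-to-adjacency pipeline of Eqs. (6)-(7) can be sketched in NumPy as below. The row normalization and σ = 1 are simplifying choices of ours, and the function name is illustrative:

```python
import numpy as np

def learned_adjacency(X, Wd, sigma=1.0):
    """Sketch of Eqs. (6)-(7): Mahalanobis distances under M = Wd Wd^T
    (equivalently, Euclidean distances in the transformed space X Wd),
    passed through a Gaussian kernel and normalized into a dense
    adjacency matrix."""
    Z = X @ Wd
    diff = Z[:, None, :] - Z[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1))      # pairwise distances, Eq. (6)
    G = np.exp(-D / (2.0 * sigma ** 2))        # Gaussian kernel, Eq. (7)
    return G / G.sum(axis=1, keepdims=True)    # normalized dense adjacency

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))   # 5 vertices, 3 features each
Wd = np.eye(3)                # with Wd = I, Eq. (6) is plain Euclidean
A_hat = learned_adjacency(X, Wd)
```

Because Wd enters only through the product X @ Wd, gradients of the predictive loss can flow back into the metric, which is what makes the graph itself trainable.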
Re-parameterization on Feature Transform
In classical CNNs, the output feature of a convolution layer is the sum of all the feature maps from the last layer, which were calculated by independent filters. This means the new features are built not only upon neighbor vertices but also upon other intra-vertex features. On graph convolution, however, it is not explainable to create and train separate topological structures for different vertex features on the same graph. In order to construct mappings of both intra- and inter-vertex features, at the SGC-LL layer we introduce a transform matrix and a bias vector applied to the output features. Based on Eq.(5), the re-parameterization on the output feature is formulated as:

Y = (U g_θ(Λ̃) Uᵀ X) W + b (8)
At the k-th layer, the transform matrix W_k ∈ ℝ^{d_{k−1}×d_k} and the bias b_k ∈ ℝ^{d_k} are trained along with the metric M_k, where d_k is the feature dimensionality. In total, at each SGC-LL layer we have the parameters {M_k, W_k, b_k} with O(d_k × d_{k−1}) learning complexity, independent of the input graph size or degree. At the next SGC-LL layer, the spectral filter will be built in another feature domain with a different metric.
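A dense sketch of Eq. (8) follows; the toy check uses U = I and an all-ones spectrum, under which the layer reduces to a plain affine map (function and variable names are ours):

```python
import numpy as np

def sgc_ll_output(U, g_lam, X, W, b):
    """Eq. (8) sketch: spectral filtering (Eq. 5) followed by the shared
    feature-transform matrix W and bias b that mix intra-vertex features."""
    Y = U @ np.diag(g_lam) @ U.T @ X   # spectral convolution on the graph
    return Y @ W + b                   # re-parameterization on features

# Toy check: with U = I and spectrum g_lam = 1 everywhere, the spectral
# part is the identity and the layer becomes X W + b.
N, d_in, d_out = 4, 3, 2
rng = np.random.default_rng(1)
X = rng.normal(size=(N, d_in))
W = rng.normal(size=(d_in, d_out))
b = rng.normal(size=(d_out,))
Y = sgc_ll_output(np.eye(N), np.ones(N), X, W, b)
```

Note that W and b act on the feature dimension only, so their size is independent of the number of vertices N, matching the O(d_k × d_{k−1}) parameter count above.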
Residual Graph Laplacian
Some graph data have intrinsic graph structures, such as molecules. A molecule is modeled as a molecular graph with atoms as vertices and bonds as edges, and those bonds can be justified by chemical experiments. But most data do not naturally have a graph structure, so we have to construct graphs before feeding them to the network. Besides these two cases, the most likely case is that the graphs created in an unsupervised way cannot sufficiently express all of the topological structure meaningful for the specific task. Using chemical compounds as an example, the intrinsic graph given by a SMILES [Weininger1988] sequence does not tell anything about the toxicity of the compound. From the intrinsic graph alone, it is hard to learn meaningful representations of toxicity.
Because there is no prior knowledge on the distance metric, the metrics are randomly initialized, so training may take long to converge. In order to accelerate training and increase the stability of the learned graph topology, we make a reasonable assumption that the optimal graph Laplacian L̂ is a small shift from the original graph Laplacian L:

L̂ = L + α L_res (9)

In other words, the original graph Laplacian L has already disclosed a large amount of helpful graph structural information, except for those substructures consisting of virtual vertex connections that cannot be directly learned on the intrinsic graph. Therefore, instead of learning L̂, we learn the residual graph Laplacian L_res, which is evaluated by Eq.(7) and Eq.(2). The influence of L_res on the final graph topology is controlled by α. The operations in the SGC-LL layer are summarized as Algorithm 1.
AGCN Network
The proposed network is named the Adaptive Graph Convolution Network (AGCN), because the SGC-LL layer is able to efficiently learn an adaptive
graph topology according to the data and the context of the learning task. Besides SGC-LL layers, the AGCN has graph max pooling layers and a graph gather layer
[Gomes et al.2017].

Graph Max Pooling
The graph max pooling is conducted feature-wise. For the feature vector x_v at the v-th vertex of the graph, the pooling replaces the j-th feature x_v(j) with the maximum of the j-th features of its neighbor vertices and itself. If N(v) is the set of neighbor vertices of v, the new feature at vertex v is: x̂_v(j) = max(x_v(j), x_i(j), ∀i ∈ N(v)).
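A feature-wise sketch of this pooling on a dense 0/1 adjacency (the helper name is ours):

```python
import numpy as np

def graph_max_pool(X, A):
    """For each vertex v and feature j, keep the max of feature j over v
    itself and its neighbors N(v); A is the 0/1 adjacency matrix."""
    mask = (A + np.eye(len(A))) > 0                  # neighbors plus self
    scores = np.where(mask[:, :, None], X[None, :, :], -np.inf)
    return scores.max(axis=1)

# Path graph on 3 vertices; the middle vertex holds the largest value.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1.], [5.], [2.]])
P = graph_max_pool(X, A)   # every vertex sees the maximum value 5
```

Because every vertex here is within one hop of the middle vertex, all three pooled features become 5; the graph size and vertex set are unchanged, as described above.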
Graph Gather
The graph gather layer element-wise sums up all the vertex feature vectors as the representation of the graph. The output vector of the gather layer is used for graph-level prediction. Without the graph gather layer, the AGCN can also be trained and used for vertex-wise prediction tasks, given labels on vertices. Vertex-wise predictions include graph completion and many predictions on social networks.
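The gather layer is a single reduction over the vertex axis, shown here on a 3-vertex, 2-feature toy input:

```python
import numpy as np

X = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])   # per-vertex feature vectors
g = X.sum(axis=0)          # graph-level representation: [9., 12.]
```

Summing (rather than averaging) over a variable number of vertices yields a fixed-length vector regardless of graph size, which is what allows a shared regressor downstream.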
Bilateral Filter
The purpose of the bilateral filter layer [Gadde et al.2016] in the AGCN is to prevent over-fitting. The residual graph Laplacian definitely adapts the model to better fit the training task, but at the risk of over-fitting. To mitigate over-fitting, we introduce a revised bilateral filtering layer that regularizes the activations of the SGC-LL layer by augmenting the spatial locality of the learned Laplacian L̂. We also introduce batch normalization layers to accelerate training
[Ioffe and Szegedy2015].

Network Configuration
The AGCN consists of multiple consecutive layer combos, the core of which is the SGC-LL layer. Each layer combo comprises one SGC-LL layer, one batch normalization layer [Ioffe and Szegedy2015] and one graph max pooling layer; see Figure 3 for an illustration. A residual graph Laplacian is trained at each SGC-LL layer. At the graph pooling layer that follows, the adaptive graph (intrinsic plus residual graph) is reused until the next SGC-LL layer; because the SGC-LL layer transforms the features, the next SGC-LL layer needs to retrain a new residual graph.
After passing through a layer combo, the graph structures in the batch are updated, while the graph sizes remain unchanged. This is because, for data such as organic compounds, small substructures are decisive for specific chemical properties, e.g. toxicity. For instance, aromatic hydrocarbons are usually strongly toxic, but if a hydrogen (H) atom were replaced by a methyl group (-CH3), their toxicity would be greatly reduced. Therefore, any graph coarsening or feature averaging would damage the completeness of those informative local structures. So we choose max pooling and do not skip any vertex during convolution. In this paper, we test the network on graph-wise prediction tasks, so the graph gather layer is the last layer before the regressors.
Batch Training of Diverse Graphs
One of the greatest challenges for conducting convolution on graph-structured data is the difficulty of matching the diverse local topological structures of the training samples: 1) it brings extra difficulty in designing the convolution kernel, because the invariance of the kernel is not satisfied on graphs, and the node indexing sometimes matters; 2) resizing or reshaping the graph is not reasonable for some data, e.g. molecules. Unlike images and videos, which work with classical convolutions on tensors, compatibility with diverse topologies is necessary for convolution on graphs. The proposed SGC-LL layer trains a separate graph Laplacian for each sample, which preserves all the local topological structures of the data. Because we find that it is the feature space and the distance metric that actually matter in constructing the graph structure, the SGC-LL layer only requires all samples in a batch to share the same feature transform matrix and distance metric. Furthermore, the number of training parameters depends only on the feature dimensionality. Therefore, the AGCN accepts training batches consisting of raw graph-structured data samples with different topologies and sizes. Note that additional RAM consumption is incurred by the initial graph Laplacians, which need to be constructed before training and kept for updating the kernels. This is acceptable, however, because graph Laplacians are usually sparse.
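The batching idea above can be sketched as follows: one shared parameter set is applied to samples whose graphs differ in size, with no resizing (the polynomial filter and all names are illustrative stand-ins for the SGC-LL layer):

```python
import numpy as np

def poly_filter(X, L, theta, W):
    """Shared-parameter polynomial filter: works for any graph size N,
    since theta and W depend only on the feature dimensionality."""
    Y = sum(t * np.linalg.matrix_power(L, k) @ X for k, t in enumerate(theta))
    return Y @ W

rng = np.random.default_rng(0)
theta, W = [0.5, 0.25], rng.normal(size=(4, 8))   # one parameter set
outputs = []
for n in (5, 9):                                  # two different graph sizes
    A = np.triu((rng.random((n, n)) > 0.5).astype(float), 1)
    A = A + A.T                                   # random symmetric adjacency
    d = np.maximum(A.sum(1), 1.0)                 # guard isolated vertices
    L = np.eye(n) - (d ** -0.5)[:, None] * A * (d ** -0.5)[None, :]
    outputs.append(poly_filter(rng.normal(size=(n, 4)), L, theta, W))
# outputs[0].shape == (5, 8), outputs[1].shape == (9, 8): sizes preserved
```

Each sample carries its own Laplacian through the layer, while the trainable parameters stay fixed-size, which is exactly what frees the batch from any shared graph structure.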
Experiments
In the experiments, we compared our AGCN network with the state-of-the-art graph CNNs. [Bruna et al.2013]
implemented convolution with a spectral filter formed by linear B-spline interpolation, referred to as graphconv. Neural fingerprint
[Duvenaud et al.2015], referred to as NFP, is the cutting-edge neural network for molecules; it uses a kernel constructed in the spatial domain. We refer to the graph CNN equipped with the localized spectral filter as GCN [Defferrard, Bresson, and Vandergheynst2016]. In this section, extensive numerical results show that our AGCN outperforms all existing graph CNNs, and we explain how the proposed SGC-LL layer boosts the performance.

Performance Boosted by SGC-LL Layer
The spectral filter at the SGC-LL layer is constructed on the adaptive graph, which consists of the individual graph and the residual graph. The individual graph is either the intrinsic graph given directly by the data or one obtained from clustering; it enables the network to read data of different structures. Furthermore, the graphs are updated during training, since the network is trained to optimize the distance metric and feature transform for the training data. The experiments demonstrate a close correlation between the updated graph and the network performance. In Figure 4, zooming in reveals a significant difference in node similarity after 20 epochs, which means the graph structure of the compound has been updated in the trained distance metric. In the meanwhile, the weighted losses dropped dramatically during the first 20 epochs, as did the mean RMSE score. Besides, the RMSE and loss curves show that the AGCN (red line) has overwhelmingly better performance than the other graph CNNs in both convergence speed and predictive accuracy (Figure 5). We attribute this improvement to the adaptive graph and the learning of the residual Laplacian at the SGC-LL layer.
Datasets | Delaney solubility | Az-logD | NCI | Hydration-free energy
graphconv | 0.4222 (8.38) | 0.7516 (8.42) | 0.8695 (3.55) | 2.0329 (2.70)
NFP | 0.4955 (2.30) | 0.9597 (5.70) | 0.8748 (7.50) | 3.4082 (3.95)
GCN | 0.4665 (2.07) | 1.0459 (3.92) | 0.8717 (4.14) | 2.2868 (1.37)
AGCN | 0.3061 (5.34) | 0.7362 (3.54) | 0.8647 (4.67) | 1.3317 (2.73)

Table 1: Mean (standard deviation) of RMSE on the Delaney, Az-logD, NIH-NCI and Hydration-free energy datasets. Compare AGCN with graphconv [Bruna et al.2013], NFP [Duvenaud et al.2015] and GCN [Defferrard, Bresson, and Vandergheynst2016].

Datasets | Tox21 Validation | Tox21 Testing | ClinTox Validation | ClinTox Testing | Sider Validation | Sider Testing | Toxcast Validation | Toxcast Testing
graphconv | 0.7105 | 0.7023 | 0.7896 | 0.7069 | 0.5806 | 0.5642 | 0.6497 | 0.6496
NFP | 0.7502 | 0.7341 | 0.7356 | 0.7469 | 0.6049 | 0.5525 | 0.6561 | 0.6384
GCN | 0.7540 | 0.7481 | 0.8303 | 0.7573 | 0.6085 | 0.5914 | 0.6914 | 0.6739
AGCN | 0.7947 | 0.8016 | 0.9267 | 0.8678 | 0.6112 | 0.5921 | 0.7227 | 0.7033

Table 2: Validation and testing scores on the Tox21, ClinTox, Sider and Toxcast datasets.
Multi-task Prediction on Molecular Datasets
The Delaney dataset [Delaney2004] contains aqueous solubility data for 1,144 low-molecular-weight compounds; the largest compound in the dataset has 492 atoms, while the smallest consists of only 3 atoms. The NCI database has around 20,000 compounds and 60 prediction tasks, from drug-reaction experiments to clinical pharmacology studies. The Az-logD dataset from ADME [Vugmeyster, Harrold, and Xu2012] offers logD measurements of permeability for 4,200 compounds. Besides, we also have a small dataset of 642 compounds for a hydration-free energy study. The presented task-averaged RMSE scores and standard deviations were obtained after 5-fold cross-validation.
The Tox21 dataset [Mayr et al.2016] contains 7,950 chemical compounds and labels for classification on 12 toxicity assays. However, additional difficulty comes from missing labels for some of the 12 tasks; compounds with missing labels were excluded from the loss computation but kept in the training set. ClinTox is a public dataset of 1,451 chemical compounds for clinical toxicological studies, with labels for 2 tasks. The Sider [Kuhn et al.2010] database records 1,392 drugs and their 27 different side effects or adverse reactions. Toxcast [Dix et al.2006] is another toxicological research database with 8,599 SMILES strings and labels for 617 prediction tasks. For multi-task prediction, the network graph model becomes an analog of a K-ary tree whose leaf nodes each comprise a fully connected layer and a logistic regression for one task.
To prove the advantages of the AGCN, we compared it with three state-of-the-art graph CNN benchmarks: the first spectral graph CNN (graphconv) with spline-interpolated kernels [Bruna et al.2013], its extension with localized spectral filters (GCN) [Defferrard, Bresson, and Vandergheynst2016], and neural fingerprint (NFP) [Duvenaud et al.2015], the cutting-edge neural network for molecules. In Table 1, our AGCN reduced the mean RMSE by 31%-40% on the Delaney dataset, by 15% on average on Az-logD and by 2-4% on the testing set of NCI. It appears that the adaptive graph and the residual Laplacian learning for hidden structures are more useful when data are scarce. As to the multi-task classification results in Table 2, we notice that the AGCN significantly boosted the accuracy on both small and large datasets. Even for the mass of 617 tasks of Toxcast, the performance of the classifiers still improved by 3% (0.03) on average, compared with the state of the art.

The molecular graph, directly given by the chemical formula, is the intrinsic graph for compound data. Such graphs come with high variety in both topological structure and graph size. The spectral kernel in graphconv [Bruna et al.2013] can only connect 1-hop neighbors (nearby vertices directly connected by an edge), so it is over-localized. This becomes an issue when dealing with molecules, because some important substructures of molecular graphs cannot be covered by over-localized kernels. For example, centered at any carbon atom of a benzene ring (C6H6), the kernel needs to cover at least the vertices within distance 3 if we want to learn a representation of the ring as a whole. The localized kernel in GCN [Defferrard, Bresson, and Vandergheynst2016] is no longer too local, but the kernel is still assumed to be shared among the data. That is fine if the molecules in the training set share many common substructures such as -OH (hydroxyl group) and C6H6 (benzene); see Figure 2 for an illustration. However, if the molecules are from different classes of compound, GCN may not work well, especially when data of some types are scarce. This is probably why GCN has performance similar to AGCN on large datasets such as Sider, but dramatically worsens on small datasets, e.g. Delaney and ClinTox.
The AGCN is able to handle molecular data in a better way. The adaptive graph allows input samples to have unique graph Laplacians, so each compound indeed has a convolution filter customized to its unique topological structure. Because of this capability, we can feed the network the original data (atom/edge features and molecular graph) without any loss of information. Furthermore, our SGC-LL layers train the distance metric, minimizing the predictive losses of the specific task together with the other transform parameters. Therefore, upon convergence, at each SGC-LL layer we find the optimal feature space and distance metric to construct the graph that best serves the task, e.g. toxicity or solubility prediction. This learned graph may contain new edges that did not exist in the original molecular graph.
Point Cloud Object Classification
The Sydney urban point cloud dataset contains street objects scanned with a Velodyne HDL-64E LIDAR, collected in the CBD of Sydney, Australia. There are 631 individual scans of objects across 26 classes. Because the actual sizes and shapes of the objects differ greatly, the numbers of received points also vary across objects (see Figure 6 for an illustration).
Before feeding point sets to previous CNNs, we needed to unify their size by downsampling, and coarsened samples inevitably lose part of their structural information. The AGCN overcomes this drawback by accepting raw point sets of different sizes. Previous graph convolutions share an identical kernel, but a shared kernel may mix up features on points regardless of their actual distance, whereas the AGCN is able to conduct convolution exactly according to the spatial relations. The initial graphs of the point clouds were constructed by agglomerative clustering. The cutting-edge method for point-set recognition, PointNet [Qi et al.2016], cannot handle point cloud data of varying size.
Methods | All Classes | Building | Traffic Light
graphconv | 0.6523 | 0.6754 | 0.5197
NFP | 0.6350 | 0.8037 | 0.5344
GCN | 0.6657 | 0.8427 | 0.7417
AGCN | 0.6942 | 0.9906 | 0.8556

Table 3: Averaged ROC-AUC scores on the Sydney urban point cloud testing set.
After 5-fold cross-validation, averaged ROC-AUC scores were calculated on a testing set of 200 samples. From Table 3, we see that the AGCN outperformed the other graph CNNs on the all-classes average score. For specific large objects such as buildings, the AUC score is close to 1, while the other networks did worse because they had to coarsen the graphs first. For important road objects such as traffic lights, the AGCN also lifted the classification accuracy considerably in terms of ROC-AUC. This sufficiently shows that the AGCN can extract more meaningful features than other graph CNNs on point clouds. The information completeness of the data fed to the AGCN also benefits the performance, which is attributed to the adaptive graphs constructed and learned at the proposed SGC-LL layers.
Acknowledgments
This work was partially supported by US National Science Foundation IIS1423056, CMMI1434401, CNS1405985, IIS1718853 and the NSF CAREER grant IIS1553687.
Conclusions
We proposed a novel spectral graph convolver (SGC-LL) that works with adaptive graphs. SGC-LL learns the residual graph Laplacian via learning the optimal metric and feature transform. As far as we know, the AGCN is the first graph CNN that accepts data of arbitrary graph structure and size. The supervised training of the residual Laplacian drives the model to better fit the prediction task. Extensive multi-task learning experiments on various graph-structured data showed that the AGCN outperforms state-of-the-art graph CNN models on various prediction tasks.
References
 [AltaeTran et al.2016] Altae-Tran, H.; Ramsundar, B.; Pappu, A. S.; and Pande, V. 2016. Low data drug discovery with one-shot learning. arXiv preprint arXiv:1611.03199.
 [Bruna et al.2013] Bruna, J.; Zaremba, W.; Szlam, A.; and LeCun, Y. 2013. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203.
 [Chung1997] Chung, F. R. 1997. Spectral graph theory. Number 92. American Mathematical Soc.
 [De Brabandere et al.2016] De Brabandere, B.; Jia, X.; Tuytelaars, T.; and Van Gool, L. 2016. Dynamic filter networks. In Neural Information Processing Systems (NIPS).
 [Defferrard, Bresson, and Vandergheynst2016] Defferrard, M.; Bresson, X.; and Vandergheynst, P. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, 3837–3845.
 [Delaney2004] Delaney, J. S. 2004. Esol: estimating aqueous solubility directly from molecular structure. Journal of chemical information and computer sciences 44(3):1000–1005.
 [Dix et al.2006] Dix, D. J.; Houck, K. A.; Martin, M. T.; Richard, A. M.; Setzer, R. W.; and Kavlock, R. J. 2006. The toxcast program for prioritizing toxicity testing of environmental chemicals. Toxicological Sciences 95(1):5–12.
 [Dundar et al.2015] Dundar, M.; Kou, Q.; Zhang, B.; He, Y.; and Rajwa, B. 2015. Simplicity of k-means versus deepness of deep learning: A case of unsupervised feature learning with limited data. In Machine Learning and Applications (ICMLA), 2015 IEEE 14th International Conference on, 883–888. IEEE.
 [Duvenaud et al.2015] Duvenaud, D. K.; Maclaurin, D.; Iparraguirre, J.; Bombarell, R.; Hirzel, T.; Aspuru-Guzik, A.; and Adams, R. P. 2015. Convolutional networks on graphs for learning molecular fingerprints. In Advances in neural information processing systems, 2224–2232.
 [Gadde et al.2016] Gadde, R.; Jampani, V.; Kiefel, M.; Kappler, D.; and Gehler, P. V. 2016. Superpixel convolutional networks using bilateral inceptions. In European Conference on Computer Vision, 597–613. Springer.
 [Gomes et al.2017] Gomes, J.; Ramsundar, B.; Feinberg, E. N.; and Pande, V. S. 2017. Atomic convolutional networks for predicting protein-ligand binding affinity. arXiv preprint arXiv:1703.10603.
 [Henaff, Bruna, and LeCun2015] Henaff, M.; Bruna, J.; and LeCun, Y. 2015. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163.
 [Hinton et al.2012] Hinton, G.; Deng, L.; Yu, D.; Dahl, G. E.; Mohamed, A.-r.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; Sainath, T. N.; et al. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine 29(6):82–97.
 [Ioffe and Szegedy2015] Ioffe, S., and Szegedy, C. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
 [Kipf and Welling2016] Kipf, T. N., and Welling, M. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
 [Kuhn et al.2010] Kuhn, M.; Campillos, M.; Letunic, I.; Jensen, L. J.; and Bork, P. 2010. A side effect resource to capture phenotypic effects of drugs. Molecular systems biology 6(1):343.
 [Looks et al.2017] Looks, M.; Herreshoff, M.; Hutchins, D.; and Norvig, P. 2017. Deep learning with dynamic computation graphs. arXiv preprint arXiv:1702.02181.
 [Mayr et al.2016] Mayr, A.; Klambauer, G.; Unterthiner, T.; and Hochreiter, S. 2016. Deeptox: toxicity prediction using deep learning. Frontiers in Environmental Science 3:80.
 [Qi et al.2016] Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2016. Pointnet: Deep learning on point sets for 3d classification and segmentation. arXiv preprint arXiv:1612.00593.
 [Shuman et al.2013] Shuman, D. I.; Narang, S. K.; Frossard, P.; Ortega, A.; and Vandergheynst, P. 2013. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine 30(3):83–98.
 [Simonovsky and Komodakis2017] Simonovsky, M., and Komodakis, N. 2017. Dynamic edge-conditioned filters in convolutional neural networks on graphs. arXiv preprint arXiv:1704.02901.
 [Vugmeyster, Harrold, and Xu2012] Vugmeyster, Y.; Harrold, J.; and Xu, X. 2012. Absorption, distribution, metabolism, and excretion (adme) studies of biotherapeutics for autoimmune and inflammatory conditions. The AAPS journal 14(4):714–727.
 [Wallach, Dzamba, and Heifets2015] Wallach, I.; Dzamba, M.; and Heifets, A. 2015. Atomnet: a deep convolutional neural network for bioactivity prediction in structurebased drug discovery. arXiv preprint arXiv:1510.02855.
 [Wang and Sun2015] Wang, F., and Sun, J. 2015. Survey on distance metric learning and dimensionality reduction in data mining. Data Mining and Knowledge Discovery 29(2):534–564.
 [Weininger1988] Weininger, D. 1988. Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules. Journal of chemical information and computer sciences 28(1):31–36.
 [Weiss, Torralba, and Fergus2009] Weiss, Y.; Torralba, A.; and Fergus, R. 2009. Spectral hashing. In Advances in neural information processing systems, 1753–1760.
 [Wu et al.2017] Wu, Z.; Ramsundar, B.; Feinberg, E. N.; Gomes, J.; Geniesse, C.; Pappu, A. S.; Leswing, K.; and Pande, V. 2017. Moleculenet: A benchmark for molecular machine learning. arXiv preprint arXiv:1703.00564.