1 Introduction
A biological system is a complex network of molecular entities such as genes, proteins, and other biological molecules, linked together by the interactions between these entities. This complex interplay can be represented as an interaction network with molecular entities as nodes and their interactions as edges. Representing a biological system as a network provides a conceptual and intuitive framework to investigate and understand direct or indirect interactions between molecular entities. The study of such networks leads to a system-level understanding of biology [8] and the discovery of novel interactions, including protein-protein interactions (PPIs) [20], drug-drug interactions (DDIs) [41], drug-target interactions (DTIs) [21], and gene-disease associations (GDIs) [2].
Recently, the generalization of deep learning to network-structured data [6] has shown great promise across various domains such as social networks [30], recommendation systems [37], chemistry [10], and citation networks [17]. These approaches fall under the umbrella of graph convolutional networks (GCNs). GCNs repeatedly aggregate the feature representations of immediate neighbors to learn informative representations of nodes for link prediction. Although GCN-based methods show great success in biomedical interaction prediction [39, 41], they only consider information from immediate neighbors. SkipGNN [13] leverages a skip graph to aggregate feature representations from direct and second-order neighbors and demonstrates improvements over GCN methods in biomedical interaction prediction. However, SkipGNN cannot be applied to aggregate information from higher-order neighbors and thus fails to capture information that resides farther away from a particular interaction [1].
To address this challenge, we propose an end-to-end deep graph representation learning framework named higher-order graph convolutional networks (HOGCN) for predicting interactions between pairs of biomedical entities. HOGCN learns a representation for every biomedical entity using the interaction network structure and/or entity features. In particular, we define a higher-order graph convolution (HOGC) layer in which feature representations from neighbors at various orders are considered to obtain the representations of biomedical entities. The layer can thus learn to mix feature representations of neighbors at various distances for interaction prediction. Furthermore, we define a bilinear decoder to reconstruct the edges in the input interaction network by relying on the feature representations produced by HOGC layers. This encoder-decoder approach makes HOGCN an end-to-end trainable model for interaction prediction.
We compare HOGCN’s performance with that of state-of-the-art network similarity-based methods [19], network embedding methods [27, 12], and graph convolution-based methods [16, 17, 13] for biomedical link prediction. We experiment with various interaction datasets and show that our method makes accurate and calibrated predictions. HOGCN outperforms alternative methods based on network embedding by up to 30%. Furthermore, HOGCN outperforms graph convolution-based methods by up to 6%, alluding to the benefits of aggregating information from higher-order neighbors.
We perform a case study on the DDI network and observe that aggregating information from higher-order neighborhoods allows HOGCN to learn meaningful representations for drugs. Moreover, literature-based case studies illustrate that the novel predictions are supported by evidence, suggesting that HOGCN can identify interactions that are highly likely to be true positives.
In summary, our study demonstrates the ability of HOGCN to identify potential interactions between biomedical entities and opens up opportunities to use the biological and physicochemical properties of biomedical entities for a follow-up analysis of these interactions.
2 Related works
With the increasing coverage of the interactome, various network-based approaches have been proposed to exploit already available interactions to predict missing interactions [3, 14, 18, 19]. These methods can be broadly classified into (1) network similarity-based methods, (2) network embedding methods, and (3) graph convolution-based methods. We next summarize these categories of methods for biomedical interaction prediction.
Given a network of known interactions, various similarity metrics are used to measure the similarity between biomedical entities [3], under the assumption that higher similarity indicates an interaction. The triadic closure principle (TCP) has been explored in biomedical interaction prediction with the hypothesis that biomedical entities with common interaction partners are likely to interact with each other [14]. TCP relies on a common-neighbor algorithm that counts the number of shared neighbors between two nodes and is quantified by $[A^2]_{ij}$, where $A$ is the adjacency matrix. Recently, the L3 heuristic [19] showed that the common-neighbor hypothesis fails for most protein pairs in PPI prediction and proposed instead to consider nodes that are similar to the neighbors of a node, quantified by counting degree-normalized paths of length three between two nodes. This indicates that higher-order neighbors are important for interaction prediction.
Next, network embedding methods embed existing networks into a low-dimensional space that preserves structural proximity, so that the nodes in the original network can be represented as low-dimensional vectors. DeepWalk [27] is a popular approach that generates truncated random walks in the network and defines a neighborhood for each node as the set of nodes within a window of size $w$ in each random walk. Similarly, node2vec [12] performs a biased random walk by balancing breadth-first and depth-first search in the network. The random walks generated by these methods can be considered a combination of nodes from various orders of neighborhoods, from the 1-hop to the $k$-hop neighborhood. In other words, DeepWalk and node2vec learn embeddings for the nodes in the network from a combination of $A, A^2, \ldots, A^k$, where $A^k$ is the $k$-th power of the adjacency matrix. These embeddings can then be fed into a classifier to predict the interaction probability. These methods are limited to the structure of the biomedical networks and cannot incorporate additional information about the biomedical entities. Also, they cannot learn feature differences between nodes at various distances.
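Both network-similarity scores described above reduce to products of the adjacency matrix. The following is a minimal numpy sketch on a small hypothetical network (the toy adjacency matrix and variable names are illustrative, not from the paper):

```python
import numpy as np

# Toy undirected network of 5 biomedical entities (hypothetical example).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

# TCP / common-neighbor score: [A^2]_ij counts shared neighbors of i and j.
cn = A @ A

# L3-style score: paths of length three between i and j, normalized by the
# degrees of the two intermediate nodes: sum_{u,v} A_iu A_uv A_vj / sqrt(k_u k_v).
k = A.sum(axis=1)                              # node degrees
l3 = A @ (A / np.sqrt(np.outer(k, k))) @ A
```

For instance, `cn[0, 3]` counts the single neighbor (node 1) shared by nodes 0 and 3; both score matrices are symmetric because the toy network is undirected.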
Furthermore, graph convolution-based methods use a message-passing mechanism to receive and aggregate information from neighbors to generate representations for the nodes in the network. Graph convolutional networks (GCNs) [16] and the variational graph autoencoder (VGAE) [17] aggregate feature representations from immediate neighbors to learn representations of biomedical entities in an end-to-end manner using a link prediction objective. These methods are limited to average pooling of the neighborhood features [1]. SkipGNN [13] therefore proposes to use skip similarity between biomedical entities to aggregate information from second-order neighbors. However, these methods cannot aggregate feature representations from higher-order neighbors and also cannot learn feature differences between neighbors at various distances.
3 Preliminaries
A biomedical network is defined as $G = (V, E, X)$, where $V$ denotes the set of nodes representing biomedical entities (e.g. proteins, genes, drugs, diseases) and $n = |V|$ denotes the number of nodes. $E$ denotes the set of interactions between biomedical entities. $X \in \mathbb{R}^{n \times d}$ is the feature matrix of the biomedical entities, where $d$ is the dimension of the features.
Let $A$ denote the adjacency matrix of $G$, where $A_{ij}$ indicates an edge between nodes $v_i$ and $v_j$. We assume the case of a binary adjacency matrix, where $A_{ij}$ represents either the presence of experimental evidence for the interaction between $v_i$ and $v_j$ (i.e. $A_{ij} = 1$) or the absence of such evidence (i.e. $A_{ij} = 0$). Note that the same notation can be used to represent weighted graphs with $A_{ij} \in \mathbb{R}$. Table I shows the notations and their definitions used in the paper.
Notation  Definition

$G = (V, E, X)$  Graph with nodes $V$, edges $E$ and features $X$
$E_{test}$  Test edges
$A$  Adjacency matrix of graph $G$
$D$  Degree matrix with $D_{ii} = \sum_j A_{ij}$
$I_n$  Identity matrix
$\tilde{A}$  Symmetrically normalized adjacency matrix
$X \in \mathbb{R}^{n \times d}$  $d$-dimensional feature matrix
$y_{ij}$  Ground-truth interaction between nodes $v_i$ and $v_j$
$p_{ij}$  Probability of interaction between nodes $v_i$ and $v_j$
$Z$  Final node embeddings
$W_p^{(l)}$  Weight matrix for adjacency power $p$ for layer $l$
$L$  Number of HOGC layers
$T$  Number of training epochs
$p$  The order of neighborhood
$P$  A set of integer adjacency powers
$H_p^{(l)}$  Representation of neighbors at distance $p$ in layer $l$
Problem Statement. (Biomedical interaction prediction) Given a biomedical interaction network $G = (V, E, X)$ and the set of potential biomedical interactions $E_{test}$, we aim to learn an interaction prediction model that predicts the interaction probability $p_{ij}$ for every pair $(v_i, v_j) \in E_{test}$.
3.1 Message Passing
Given a biomedical network $G$, message passing algorithms learn representations of the biomedical entities in the network by aggregating information from immediate neighbors [10]. Additional information about biomedical entities can be used to initialize the feature matrix $X$. These algorithms involve a message passing step in which each biomedical entity sends its current representation to, and aggregates incoming messages from, its immediate neighbors. A representation for each biomedical entity can be obtained after multiple steps of message passing and feature aggregation. However, such a message passing operation is limited to average pooling of features from immediate neighbors and is thus unable to learn feature differences among neighbors at various distances [1].
Neighborhood nodes at various distances provide network structure information at different granularities [4, 7, 40, 32, 31]. Taking $k$-hop neighborhoods into consideration, we aim to aggregate information from various distances at every message passing step. Different powers of the adjacency matrix, such as $A, A^2, \ldots, A^k$, provide information about the network structure at different scales. Higher-order message passing operations can therefore learn to mix node representations using various powers of the adjacency matrix at each message passing step.
3.2 Graph Convolutional Networks (GCNs)
Graph convolutional networks (GCNs) are a generalization of the convolution operation from regular grids, such as images or texts, to graph-structured data [6, 36]. The key idea of GCNs is to learn a function that generates a node’s representation by repeatedly aggregating information from its immediate neighbors. The graph convolutional layer is defined as:
$H^{(l+1)} = \sigma(\tilde{A} H^{(l)} W^{(l)})$  (1)
where $H^{(l)}$ and $H^{(l+1)}$ are the input and output activations, $W^{(l)}$ is the trainable weight matrix of layer $l$, $\sigma(\cdot)$ is an element-wise activation, and $\tilde{A} = \hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2}$ is the symmetrically normalized adjacency matrix with self-connections $\hat{A} = A + I_n$. A GCN model with $L$ layers is then defined as $Z = H^{(L)}$ with $H^{(0)} = X$,
and $Z$ can be used to predict the probability of interactions between biomedical entities.
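The graph convolutional layer above can be sketched in a few lines of numpy. This is a minimal illustration on a random toy graph (the 3-node adjacency matrix, feature sizes, and weights are hypothetical), not the paper's implementation:

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization with self-connections:
    A_tilde = D^{-1/2} (A + I) D^{-1/2}, as used in Equation (1)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)

def gcn_layer(A_tilde, H, W):
    """One graph convolution: neighborhood averaging followed by ReLU."""
    return np.maximum(A_tilde @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy path graph
X = rng.normal(size=(3, 4))                                   # initial features
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 8))

A_tilde = normalize_adj(A)
Z = gcn_layer(A_tilde, gcn_layer(A_tilde, X, W1), W2)         # L = 2 layers
```

Each layer only mixes information from immediate neighbors, so two layers reach at most 2-hop neighbors, which motivates the higher-order layer introduced later.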
4 Higherorder Graph Convolution Network (HOGCN)
In this work, we develop a higher-order graph convolutional network (HOGCN) that takes an interaction network as input and reconstructs the edges in the interaction network (Fig. 1). HOGCN has two main components:

Encoder: a higher-order graph convolution encoder that operates on the interaction graph $G$ and produces representations for biomedical entities by aggregating features from neighborhoods at various distances, and

Decoder: a bilinear decoder that relies on these representations to reconstruct the interactions in $G$.
4.1 HigherOrder Graph Encoder
We first describe the higher-order graph encoder, which operates on an undirected interaction graph $G$ and learns representations for biomedical entities.
We develop an encoder with higher-order graph convolution (HOGC) layers to mix feature representations from neighbors at various distances. Specifically, the HOGC layer considers neighborhood information at different granularities and is defined as:
$H^{(l+1)} = \big\Vert_{p \in P} \, \sigma(\tilde{A}^p H^{(l)} W_p^{(l)})$  (2)
where $P$ is a set of integer adjacency powers, $\tilde{A}^p$ denotes the normalized adjacency matrix multiplied by itself $p$ times, and $\Vert$ denotes column-wise concatenation [1]. The graph convolutional network [16] only considers the first power of the adjacency matrix and can be exactly recovered by setting $P = \{1\}$ in Equation (2). Similarly, SkipGNN [13] considers direct and skip similarity and can be recovered by setting $P = \{1, 2\}$.
Fig. 2 shows a HOGC layer with $P = \{0, 1, \ldots, k\}$, where $k$ is the maximum order of neighborhood considered in each HOGC layer. For $p = 0$, the HOGC layer only considers the features of the biomedical entities themselves and can capture the feature similarity between various biological entities; this is equivalent to a fully connected network with the features of the biomedical entities as input, since $\tilde{A}^0$ is the identity matrix $I_n$, where $n$ is the number of nodes in the network. This allows the HOGC layer to learn a transformation of the node features separately and mix it with feature representations from neighbors.
The maximum order of neighborhood $k$ and the number of trainable weight matrices $W_p^{(l)}$, one per adjacency power, can vary across layers. However, we use the same $P$ for neighborhood aggregation and the same dimension for all weight matrices across all layers.
Neighborhood features from different adjacency powers at layer $l$ are column-wise concatenated to obtain the feature representation $H^{(l+1)}$. As shown in Fig. 2, the weight matrices at the next layer can learn an arbitrary linear combination of the concatenated features. Specifically, the layer can assign different coefficients to different columns in the concatenated matrix; for instance, it can assign positive coefficients to the columns produced by certain powers of $\tilde{A}$ and negative coefficients to other powers. This allows the model to learn feature differences among neighbors at various distances. We apply $L$ HOGC layers to learn the latent representation $Z \in \mathbb{R}^{n \times |P| \cdot d_L}$ for the biomedical entities in the network, where $d_L$ is the dimension of a node’s representation for each adjacency power.
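A HOGC layer with $P = \{0, 1, 2\}$ can be sketched in numpy as follows. This is a hedged toy illustration (the 5-node cycle graph, dimensions, and random weights are hypothetical), not the paper's PyTorch implementation:

```python
import numpy as np

def hogc_layer(A_tilde, H, weights, P=(0, 1, 2)):
    """Higher-order graph convolution (cf. Equation (2)): for each power p in P,
    aggregate features from p-hop neighbors with weight W_p, apply ReLU, and
    concatenate the results column-wise."""
    outputs, AH = [], H                  # AH holds A_tilde^p @ H
    for p in range(max(P) + 1):
        if p in P:
            outputs.append(np.maximum(AH @ weights[p], 0.0))
        AH = A_tilde @ AH                # next power, computed iteratively
    return np.concatenate(outputs, axis=1)

# Toy 5-node cycle graph with symmetric normalization and self-connections.
n, d, h = 5, 4, 3
A = np.array([
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)
A_hat = A + np.eye(n)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_tilde = A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)

rng = np.random.default_rng(1)
X = rng.normal(size=(n, d))
weights = {p: rng.normal(size=(d, h)) for p in (0, 1, 2)}
H1 = hogc_layer(A_tilde, X, weights)     # shape (n, |P| * h)
```

The first block of columns corresponds to $p = 0$ (the node's own transformed features), so downstream weights can contrast it against the 1-hop and 2-hop blocks, which is exactly the "feature difference" capability described above.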
4.2 Interaction Decoder
We introduced the encoder based on HOGC layers, which learns feature representations for biomedical entities by mixing neighborhood information at multiple distances. Next, we discuss the decoder that reconstructs the interactions in $G$ based on the representation $Z$.
We adopt a bilinear layer to fuse the representations of biomedical entities $z_i$ and $z_j$ and learn the edge representation $z_{ij}$. More precisely, we define a simple bilinear layer that takes the representations $z_i$ and $z_j$ as input:
$z_{ij} = \sigma(z_i^{\top} \mathbf{W} z_j + b)$  (3)
where $\mathbf{W}$ represents the learnable fusion tensor with one slice per dimension of the edge representation, $z_{ij}$ is the representation of the edge between nodes $v_i$ and $v_j$, $b$ denotes the bias of the bilinear layer, and $\sigma(\cdot)$ is a non-linearity.
The edge representation $z_{ij}$ is then fed into a 2-layered fully connected neural network to predict the probability $p_{ij}$ for the edge $(v_i, v_j)$:
$p_{ij} = \mathrm{sigmoid}(\mathrm{FC}_2(\mathrm{FC}_1(z_{ij})))$  (4)
where $\mathrm{FC}_k$ denotes a fully connected layer with weight $W_k$ and bias $b_k$, and $p_{ij}$ represents the probability that biomedical entities $v_i$ and $v_j$ interact.
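The bilinear fusion followed by the 2-layer network can be sketched in numpy as below. All shapes, names, and the small weight scale are hypothetical choices for a stable toy example; a real implementation would use a learned bilinear layer (e.g. PyTorch's `nn.Bilinear`):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode(z_i, z_j, Wb, b, W1, b1, w2, b2):
    """Bilinear fusion (cf. Equation (3)) followed by a 2-layer fully
    connected network (cf. Equation (4)) producing an interaction probability."""
    # Bilinear layer: one d x d slice of Wb per edge-feature dimension.
    g = np.array([z_i @ Wb[m] @ z_j for m in range(Wb.shape[0])]) + b
    g = np.maximum(g, 0.0)                  # non-linearity on the edge feature
    hidden = np.maximum(W1 @ g + b1, 0.0)   # first fully connected layer
    return sigmoid(w2 @ hidden + b2)        # scalar probability p_ij

rng = np.random.default_rng(2)
d, e, h = 4, 6, 8                           # embedding, edge, hidden dims (toy)
z_i, z_j = rng.normal(size=d), rng.normal(size=d)
Wb = rng.normal(scale=0.1, size=(e, d, d))  # small weights for the toy example
b = rng.normal(scale=0.1, size=e)
W1, b1 = rng.normal(scale=0.1, size=(h, e)), rng.normal(scale=0.1, size=h)
w2, b2 = rng.normal(scale=0.1, size=h), 0.0

p_ij = decode(z_i, z_j, Wb, b, W1, b1, w2, b2)
```

Unlike an inner-product decoder, the bilinear form lets the model weight every pairwise combination of embedding dimensions when scoring an edge.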
So far, we have discussed the encoder and decoder of our proposed approach. Next, we describe the training procedure of our proposed HOGCN model. In particular, we explain how to optimize the trainable neural network weights in an endtoend manner.
4.3 Training HOGCN
During HOGCN training, we employ a binary cross-entropy loss to optimize the model parameters:
$\ell(i, j) = -y_{ij} \log p_{ij} - (1 - y_{ij}) \log(1 - p_{ij})$  (5)
which encourages the model to assign higher probability to observed interactions than to randomly selected non-interactions. $p_{ij}$ is the predicted interaction probability between $v_i$ and $v_j$, and $y_{ij}$ denotes the ground-truth interaction label between these nodes. The final loss function considering all training interactions is
$\mathcal{L} = \sum_{(i, j)} \ell(i, j)$  (6)
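The binary cross-entropy objective described above can be sketched in numpy (the probability and label values below are hypothetical toy inputs, not the paper's training code):

```python
import numpy as np

def bce_loss(p, y, eps=1e-12):
    """Summed binary cross-entropy over training pairs:
    l(i, j) = -y_ij * log(p_ij) - (1 - y_ij) * log(1 - p_ij)."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)  # numerical safety
    y = np.asarray(y, dtype=float)
    return float(-np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))
```

Confident, correct predictions (e.g. `p = 0.9` for a true interaction) incur a small loss, while confident wrong predictions are penalized heavily, which is what pushes observed interactions above sampled non-interactions.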
We follow an end-to-end approach to jointly optimize over all trainable parameters and backpropagate the gradients through the encoder and decoder of HOGCN.
4.3.1 Algorithm
HOGCN leverages the biomedical network structure along with additional information about biomedical entities as the initial feature representation $X$. In this paper, we initialize the features to be one-hot encodings, i.e. $X = I_n$. The feature matrix can alternatively be initialized with properties of biomedical entities or pretrained embeddings from other network-based approaches such as DeepWalk or node2vec.
Given an adjacency matrix $\tilde{A}$ and the initial node representations $X$, the higher-order neighborhoods indicated by the higher powers of the adjacency matrix are computed iteratively, which makes the model more efficient. By adopting right-to-left multiplication, for instance, we can calculate $\tilde{A}^2 H^{(l)}$ as $\tilde{A}(\tilde{A} H^{(l)})$ (Algorithm 1). The representations learned for neighborhoods at various distances are concatenated to obtain the representation $H^{(l+1)}$, as shown in Fig. 2 (Line 11 in Algorithm 1). After passing through $L$ HOGC layers, we obtain the final representation $Z$ for the biomedical entities. With the final representations and a minibatch of training edges, we retrieve the embeddings for the nodes in the training edges and feed them into the interaction decoder to compute their interaction probabilities.
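The right-to-left multiplication trick can be verified on toy data: both association orders give the same result, but computing $\tilde{A}(\tilde{A}H)$ avoids ever materializing the dense matrix $\tilde{A}^2$. A small numpy sketch (the random matrices are hypothetical stand-ins):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 50, 8
A_tilde = rng.random((n, n))     # stand-in for the normalized adjacency
H = rng.normal(size=(n, d))      # stand-in for node representations

# Right-to-left: two (n x n)(n x d) products, O(n^2 * d) work each.
right_to_left = A_tilde @ (A_tilde @ H)

# Left-to-right: forms A_tilde^2 first, an O(n^3) matrix-matrix product.
left_to_right = (A_tilde @ A_tilde) @ H

assert np.allclose(right_to_left, left_to_right)
```

Since the feature dimension $d$ is typically much smaller than the number of nodes $n$, the right-to-left order is substantially cheaper, and each intermediate product can be reused for the next power.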
The parameters of HOGCN are optimized with the binary cross-entropy loss (Equation (6)) in an end-to-end manner. Given two biomedical entities $v_i$ and $v_j$, the trained model can predict the probability of their interaction.
5 Experimental design
We view the problem of biomedical interaction prediction as solving a link prediction task on an interaction network. We consider various interaction datasets and compare our proposed method with state-of-the-art methods.
5.1 Datasets
We conduct interaction prediction experiments on four publicly available biomedical network datasets:

BioSNAP-DTI [42]: The DTI network contains 15,139 drug-target interactions between 5,018 drugs and 2,325 proteins.

BioSNAP-DDI [42]: The DDI network contains 48,514 drug-drug interactions between 1,514 drugs, extracted from drug labels and biomedical literature.

HuRI-PPI [20]: The HI-III human PPI network contains 5,604 proteins and 23,322 interactions generated by multiple orthogonal high-throughput yeast two-hybrid screens.

DisGeNET-GDI [29]: The GDI network consists of 81,746 interactions between 9,413 genes and 10,370 diseases, curated from GWAS studies, animal models, and the scientific literature.
Table II provides a summary of the datasets used in our experiments. We provide the number of interactions used for training, validation, and testing for each interaction dataset. The table also includes the average number of interactions for each biomedical entity, computed as $2|E| / |V|$.
Dataset  # Nodes  # Edges: Training (70%)  Validation (10%)  Testing (20%)  Total  Avg. node degree

DTI  5,018 drugs, 2,325 proteins  10,597  1,514  3,028  15,139  4.12
DDI  1,514 drugs  33,960  4,852  9,702  48,514  64.09
PPI  5,604 proteins  16,326  2,332  4,664  23,322  8.32
GDI  9,413 genes, 10,370 diseases  57,222  8,175  16,349  81,746  8.26
5.2 Baselines
We compare our proposed model with the following network-based baselines for interaction prediction:

Network similarity-based methods


L3 [19] counts the number of paths of length three, normalized by node degrees, for all node pairs.


Network embedding methods


DeepWalk [27] performs truncated random walks exploring the network neighborhood of nodes and applies a skip-gram model to learn a low-dimensional embedding for each node in the network. Node features are concatenated to form edge representations, which are used to train a logistic regression classifier.

node2vec [12] extends DeepWalk by running a biased random walk based on breadth-first/depth-first search to capture both local and global network structure.


Graph convolution-based methods


VGAE [17] uses a graph convolutional encoder with two GCN layers to learn a representation for each node in the network and adopts an inner product decoder to reconstruct the adjacency matrix.

GCN [16] uses the normalized adjacency matrix to learn node representations. The node representations are concatenated to form feature representations for the edges, and fully connected layers use these edge representations to reconstruct edges, similar to HOGCN. Setting $P = \{1\}$ in our proposed HOGCN is equivalent to GCN.

SkipGNN [13] learns node embeddings by combining direct and skip similarity between nodes. Setting $P = \{1, 2\}$ in our proposed HOGCN is equivalent to SkipGNN.

5.3 Experimental setup
We split each interaction dataset into training, validation, and testing interactions in a ratio of 7:1:2, as shown in Table II. Since the available interactions are positive samples, negative samples are generated by randomly sampling from the complement set of the positive examples. Five independent runs of the experiments with different random splits of the dataset are conducted to report the prediction performance. We use (1) area under the precision-recall curve (AUPRC) and (2) area under the receiver operating characteristic curve (AUROC) as evaluation metrics. With these metrics, we expect positive interactions to have higher interaction probabilities than negative interactions, so higher values of AUPRC and AUROC indicate better performance.
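AUROC, for instance, can be computed directly from its rank-statistic interpretation. The sketch below is an illustrative numpy implementation (practical evaluations typically use a library such as scikit-learn), with hypothetical toy scores:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney formulation: the probability that a randomly
    chosen positive pair is scored above a randomly chosen negative pair,
    with ties counted as one half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]                  # scores of true interactions
    neg = scores[labels == 0]                  # scores of non-interactions
    diffs = pos[:, None] - neg[None, :]        # all positive-negative pairs
    return float((diffs > 0).mean() + 0.5 * (diffs == 0).mean())
```

A perfect ranking of positives above negatives yields 1.0, a reversed ranking 0.0, and uninformative (tied) scores 0.5.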
We implement HOGCN using PyTorch [26] and perform all experiments on a single NVIDIA GeForce RTX 2080Ti GPU. We construct a HOGC network with the same set of adjacency powers $P$ for each layer, so that at each HOGC layer a node mixes feature representations from neighbors at distances of up to three hops. All weight matrices in the HOGC layers share the same dimension and are initialized using Xavier initialization [11]. We train our model using minibatch gradient descent with the Adam optimizer [15] for a maximum of 50 epochs with a fixed learning rate. We set the minibatch size to 256 and the dropout probability [33] to 0.1 for all layers. Early stopping is adopted to stop training if the validation performance does not improve for 10 epochs. The edge features from the bilinear layer are passed through linear layers that project them to edge probabilities. For baseline methods, we follow the same experimental settings discussed in [13].
6 Results
In this section, we investigate the performance and flexibility of HOGCN on interaction prediction using four different datasets. We further explore the robustness of HOGCN to sparse networks. Finally, we demonstrate the ability of HOGCN to make novel predictions with literature-based case studies.
6.1 Biomedical interaction prediction
We compare HOGCN against various baselines on biomedical interaction prediction tasks using four different types of interaction datasets: protein-protein interactions (PPIs), drug-target interactions (DTIs), drug-drug interactions (DDIs), and gene-disease associations (GDIs).
We randomly mask 20% of the interactions from the network as a test set and 10% as a validation set. We train all models on the remaining 70% of the interactions and evaluate their performance on the test sets. The best set of hyperparameters is selected based on performance on the validation set. Finally, the experiment is repeated for five independent random splits of the interaction dataset, and results with one standard deviation are reported in Table III. All models used for the reported results have the same capacity.

Dataset  Method  AUPRC  AUROC

DTI  DeepWalk  0.753 ± 0.008  0.735 ± 0.009
DTI  node2vec  0.771 ± 0.005  0.720 ± 0.010
DTI  L3  0.891 ± 0.004  0.793 ± 0.006
DTI  VGAE  0.853 ± 0.010  0.800 ± 0.010
DTI  GCN  0.904 ± 0.011  0.899 ± 0.010
DTI  SkipGNN  0.928 ± 0.006  0.922 ± 0.004
DTI  HOGCN  0.937 ± 0.001  0.934 ± 0.001
DDI  DeepWalk  0.698 ± 0.012  0.712 ± 0.009
DDI  node2vec  0.801 ± 0.004  0.809 ± 0.002
DDI  L3  0.860 ± 0.004  0.869 ± 0.003
DDI  VGAE  0.844 ± 0.076  0.878 ± 0.008
DDI  GCN  0.856 ± 0.005  0.875 ± 0.004
DDI  SkipGNN  0.866 ± 0.006  0.886 ± 0.003
DDI  HOGCN  0.897 ± 0.003  0.911 ± 0.002
PPI  DeepWalk  0.715 ± 0.008  0.706 ± 0.005
PPI  node2vec  0.773 ± 0.010  0.766 ± 0.005
PPI  L3  0.899 ± 0.003  0.861 ± 0.003
PPI  VGAE  0.875 ± 0.004  0.844 ± 0.006
PPI  GCN  0.909 ± 0.002  0.907 ± 0.006
PPI  SkipGNN  0.921 ± 0.003  0.917 ± 0.004
PPI  HOGCN  0.930 ± 0.002  0.922 ± 0.001
GDI  DeepWalk  0.827 ± 0.007  0.832 ± 0.003
GDI  node2vec  0.828 ± 0.006  0.834 ± 0.003
GDI  L3  0.899 ± 0.001  0.832 ± 0.001
GDI  VGAE  0.902 ± 0.006  0.873 ± 0.009
GDI  GCN  0.909 ± 0.002  0.906 ± 0.006
GDI  SkipGNN  0.915 ± 0.003  0.912 ± 0.004
GDI  HOGCN  0.941 ± 0.001  0.936 ± 0.001
Table III shows that HOGCN achieves a large improvement over network embedding methods such as DeepWalk and node2vec across all datasets. Specifically, HOGCN outperforms DeepWalk on AUPRC by 24.44% on DTI, 28.51% on DDI, 30.07% on PPI, and 13.79% on GDI. Although node2vec achieves better performance than DeepWalk by adopting a biased random walk, HOGCN still outperforms node2vec by a significant margin. DeepWalk and node2vec consider different orders of neighborhood defined by the window size and learn similar representations for the nodes in that window. In contrast, HOGCN learns feature differences between neighbors at various distances to obtain a feature representation for each node and thus achieves superior performance. The improved performance suggests that feature differences between neighbors of different orders provide important information for interaction prediction.
The network similarity-based method L3 [19] outperforms network embedding methods across all four datasets but is limited to a single aspect of network similarity, i.e. the number of paths of length three connecting two nodes. L3 therefore cannot be applied when other similarities between nodes, such as similarity in features or common neighbors at various distances, need to be considered. HOGCN overcomes these limitations and outperforms L3 across all interaction datasets by a large margin. In particular, HOGCN gains 3.5% AUPRC and 7.09% AUROC on PPI over L3 [19], which recently outperformed 20 network science methods on the PPI prediction problem.
Graph convolution-based methods such as GCN and VGAE achieve significant improvements over network embedding approaches but perform comparably to L3. SkipGNN improves over all other baselines by incorporating skip similarity to aggregate information from second-order neighbors. Moreover, HOGCN with higher-order neighborhood mixing achieves an improvement over all graph convolution-based methods. Specifically, on the DDI dataset, HOGCN improves AUPRC over VGAE [17] by 6.3%, over GCN [16] by 4.8%, and over SkipGNN [13] by 3.6%.
As HOGCN can learn linear combinations of node features at multiple distances, it can extract meaningful representations from the interaction networks. The results in Table III demonstrate that our approach with higher-order neighborhood mixing outperforms the state-of-the-art methods on real interaction datasets.
6.2 Exploration of HOGCN’s drug representations
Next, we evaluate whether HOGCN learns meaningful representations when feature representations of higher-order neighbors are aggregated. To this aim, we train the GCN, SkipGNN, and HOGCN models on the DDI network to obtain drug representations $Z$. The learned drug representations are mapped to 2D space using t-SNE [22] and visualized in Fig. 3.
DrugBank [35] provides information about drugs and their categories based on different characteristics, such as the metabolic enzymes involved, the class of drugs, the side effects of drugs, and the like. For this experiment, we collect drug categories from DrugBank and limit the selection of drug categories such that the training set does not contain any interactions between drugs in the same category. The selected drug categories are ACE Inhibitors and Diuretics (DBCAT002175), Penicillins (DBCAT000702), and Antineoplastic Agents (DBCAT000592), with 10, 24, and 16 drugs, respectively. Although these drugs have no direct interactions in the training set, we assume that they share neighborhoods at various distances and can be explored accordingly with HOGCN.
Fig. 3 shows the clustering structure in the drugs’ representations as neighborhood information at multiple distances is considered. Examining the figure, we observe that drugs in the same category are embedded close to each other in the 2D space when the model aggregates information from farther neighbors. For example, the 24 drugs in the Penicillins (DBCAT000702) category (marked with blue triangles in Fig. 3) are scattered in the representation space learned by GCN, which only considers feature aggregation from immediate neighbors (Fig. 3(a)). Note that these drugs have no direct interactions among themselves in the training set. Since GCN-based models can only average the representations from immediate neighbors, these drugs are mapped relatively far from each other and close to other interacting drugs. SkipGNN considers skip similarity to aggregate features from second-order neighbors and shows relatively compact clusters compared to GCN (Fig. 3(b)). On the other hand, HOGCN considers the higher-order neighborhood and learns similar representations for drugs that belong to the same category, as demonstrated by the compact clustering structure in Fig. 3(c), even though no information about categorical similarity is provided to the model. This analysis demonstrates that HOGCN learns meaningful representations for drugs by aggregating feature representations from neighborhoods at various distances.
Next, we test whether the clustering pattern in Fig. 3 holds across many drug categories. With this aim, we consider all drug categories in DrugBank and compute the average Euclidean distance between each drug’s representation and the representations of the other drugs within the same category. We then perform 2-sample Kolmogorov–Smirnov tests and find that HOGCN learns significantly more similar representations for drugs in the same category than expected by chance, and than GCN and SkipGNN. Thus, this analysis indicates that HOGCN learns meaningful representations for drugs by aggregating neighborhood information at various distances.
6.3 Robustness to network sparsity
We next explore the robustness of network-based interaction prediction models to network sparsity. To this aim, we evaluate performance with the percentage of training edges varying from 10% to 70% and make predictions on the remaining interactions. We further use 10% of the test edges for validation to select the best hyperparameter settings. For a fair comparison, we compare against graph convolution-based methods that aggregate information from direct and/or second-order neighbors.
Fig. 4 shows the robustness of HOGCN to network sparsity. HOGCN achieves strong performance in all tasks at different levels of network sparsity, and its performance steadily improves with the increase in training edges. The mixing of features from higher-order neighborhoods in HOGCN and SkipGNN shows improvement over GCN and VGAE, which only consider direct neighbors. Since HOGCN can learn linear combinations of features from a 3-hop neighborhood for this experiment, it improves over SkipGNN in almost all cases. This demonstrates that features from farther distances are informative for interaction prediction in sparse networks.
6.4 Calibrating model’s prediction
All graph convolution-based models proposed for biomedical link prediction output a confidence estimate $p_{ij}$ for the interaction between two biomedical entities $v_i$ and $v_j$. We thus test whether a predicted confidence represents the likelihood of a true interaction. In other words, we expect the confidence estimate to be calibrated; i.e. $p_{ij}$ should represent the true interaction probability [24]. For example, given 100 predictions, each with a confidence of 0.9, we expect 90 of the interactions to be correctly classified as true interactions.
To evaluate calibration performance, we use reliability diagrams [24] and the Brier score [5]. The reliability diagram provides a visual representation of model calibration by plotting the observed fraction of positives as a function of the predicted probability [24]; a perfectly calibrated model is represented by the diagonal in Fig. 5. In addition to reliability diagrams, it is convenient to have a scalar summary statistic of calibration. The Brier score [5] is a proper scoring rule for measuring the accuracy of predicted probabilities; a lower Brier score indicates better calibration of a set of predictions. It is computed as the mean squared error between the predicted probability $p_{ij}$ and the ground-truth interaction label $y_{ij}$:
\[ \text{Brier score} = \frac{1}{N}\sum_{i=1}^{N} \left(p_i - y_i\right)^2 \tag{7} \]
where $N$ denotes the number of test edges, and $p_i$ and $y_i$ denote the predicted probability and the ground-truth label of the $i$-th test edge.
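As a concrete illustration, the Brier score and the binned statistics behind a reliability diagram can be computed in a few lines. This is a generic sketch rather than the paper's evaluation code, and the function names are our own:

```python
import numpy as np

def brier_score(y_true, y_prob):
    """Mean squared error between predicted probabilities and 0/1 labels."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    return float(np.mean((y_prob - y_true) ** 2))

def reliability_curve(y_true, y_prob, n_bins=10):
    """Mean confidence and fraction of positives per bin (reliability diagram)."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    # Bin index for each prediction; clip so prob = 1.0 falls in the last bin.
    idx = np.clip(np.digitize(y_prob, bins) - 1, 0, n_bins - 1)
    mean_pred, frac_pos = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            mean_pred.append(y_prob[mask].mean())
            frac_pos.append(y_true[mask].mean())
    return np.array(mean_pred), np.array(frac_pos)

# A perfectly confident, perfectly correct predictor has Brier score 0.
print(brier_score([1, 0, 1], [1.0, 0.0, 1.0]))  # 0.0
```

Plotting `frac_pos` against `mean_pred` and comparing against the diagonal reproduces the reliability diagrams described above.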
Fig. 5 shows the calibration plots for GCN, SkipGNN, and HOGCN. For the DTI dataset, SkipGNN shows better calibration than GCN and HOGCN (Fig. 5a), indicating that second-order neighborhood information is sufficient there and that aggregating features from farther away makes the model overconfident. For the other datasets, GCN is relatively overconfident across all predicted confidences; for example, in the PPI (Fig. 5c) and GDI (Fig. 5d) datasets, the fraction of true positives among interactions predicted with high confidence falls well below the predicted confidence. In contrast, HOGCN achieves a lower Brier score than both GCN and SkipGNN across the DDI, PPI, and GDI datasets, alluding to the benefits of aggregating higher-order neighborhood features for calibrated prediction. This analysis demonstrates that HOGCN with higher-order neighborhood mixing makes accurate and calibrated predictions for biomedical interactions.
6.5 Impact of higher-order neighborhood mixing
In Section 6.3, we contrasted HOGCN’s performance with that of alternative graph convolution-based methods under varying fractions of training edges. In this experiment, we observe the performance of HOGCN as the neighborhood order is increased, allowing the model to aggregate information from farther away. We follow a setup similar to that in Section 6.3.
Fig. 6 compares HOGCN under different amounts of higher-order neighborhood mixing. The prediction performance of HOGCN improves with the number of training interactions in all cases. The results show that HOGCN’s performance is not sensitive to the choice of the neighborhood order: larger settings achieve comparable performance across the datasets. This analysis indicates that a 3-hop neighborhood provides sufficient information for interaction prediction across all datasets, and that performance remains stable even for larger orders.
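To make the role of the neighborhood order concrete, the following sketch shows a MixHop-style aggregation [1], in which successive powers of the normalized adjacency matrix propagate features before a per-order linear transform and the results are concatenated. The function and variable names are illustrative assumptions; the actual HOGCN implementation may differ in details:

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def hogc_layer(A_norm, X, weights):
    """Concatenate (A^j X) W_j for j = 0..k, mixing neighborhoods at each hop."""
    outputs, prop = [], X
    for W in weights:            # weights[j] transforms the j-hop features
        outputs.append(prop @ W)
        prop = A_norm @ prop     # one more hop of propagation
    return np.concatenate(outputs, axis=1)

# Toy example: 4-node path graph, 2-d features, orders 0..3 (k = 3)
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 2))
weights = [rng.standard_normal((2, 3)) for _ in range(4)]  # k + 1 matrices
H = hogc_layer(normalize_adj(A), X, weights)
print(H.shape)  # (4, 12): 4 nodes, (k + 1) * 3 hidden dims
```

Increasing the number of weight matrices extends the receptive field by one hop each, which is the knob being varied in this experiment.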
6.6 Investigation of novel predictions
Next, we perform a literature-based validation of novel predictions. Our goal is to evaluate the quality of HOGCN’s novel predictions against those of GCN and SkipGNN and to show that HOGCN predicts novel interactions with higher confidence. We consider GDIs and DDIs for this evaluation.
We first evaluate the potential of HOGCN to make novel GDI predictions. We collect 1,134,942 GDIs and their scores from DisGeNET [29]; the score reflects the number and types of sources and the number of publications supporting each association. After thresholding on this score, we obtain 17,893 new GDIs that are not in the training set. We make predictions on these 17,893 GDIs with GCN, SkipGNN, and HOGCN. Out of the 17,893 GDIs, HOGCN predicts a higher probability than GCN for 17,356 (96.99%) and a higher probability than SkipGNN for 11,418 (63.8%). Table IV shows the top 5 GDIs with a significant increase in interaction probability when higher-order neighborhood mixing is considered, along with the number of pieces of evidence from DisGeNET [29] supporting these predictions. The improvement in predicted probabilities shows that aggregating feature representations from higher-order neighbors makes HOGCN more confident about potential interactions, as discussed in Section 6.4.
Gene    Disease               Probability (GCN)  Probability (SkipGNN)  Probability (HOGCN)  No. of evidence
PTGER1  Gastric ulcer         0.087              0.519                  0.721                1
ANGPT1  Gastric ulcer         0.173              0.583                  0.657                2
ABO     Pancreatic carcinoma  0.265              0.615                  0.770                26
VCAM1   Endotoxemia           0.294              0.529                  0.639                5
GPC3    Hepatoblastoma        0.307              0.540                  0.598                17
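Comparisons like the one above score every candidate pair with each model's decoder and count the pairs for which one model assigns higher probability. Assuming a bilinear decoder of the form sigmoid(z_i^T W z_j), as described for HOGCN's encoder-decoder design, a minimal sketch looks as follows; the embeddings, the identity bilinear form, and the candidate list are toy placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score_pair(Z, W, i, j):
    """Bilinear decoder: interaction probability sigmoid(z_i^T W z_j)."""
    return float(sigmoid(Z[i] @ W @ Z[j]))

# Toy node embeddings from two hypothetical models
rng = np.random.default_rng(1)
Z_gcn = rng.standard_normal((6, 4))
Z_hogcn = rng.standard_normal((6, 4))
W = np.eye(4)  # identity bilinear form for the sketch

# Candidate pairs not seen during training
candidates = [(0, 3), (1, 4), (2, 5)]

# Count pairs where the second model assigns a higher probability
higher = sum(
    score_pair(Z_hogcn, W, i, j) > score_pair(Z_gcn, W, i, j)
    for i, j in candidates
)
print(higher, "of", len(candidates))
```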
We select two predicted GDIs with a large number of supporting pieces of evidence and investigate the reason for the improvement in predicted confidence with HOGCN. Specifically, we choose the gene-disease pairs (a) ABO and pancreatic carcinoma (26 pieces of evidence) and (b) GPC3 and hepatoblastoma (17 pieces of evidence). To explain each prediction, we select the subnetwork containing all shortest paths between the pair. In particular, there are 49 shortest paths of length 3 between ABO and pancreatic carcinoma, covering 6 diseases and 15 genes (Fig. 7a). Similarly, there are 20 shortest paths of length 3 between GPC3 and hepatoblastoma, covering 6 diseases and 9 genes (Fig. 7b). Since the nodes in each pair are 3 hops away from each other and GCNs only consider immediate neighbors, GCNs assign low confidence to these interactions.
Examining the subnetwork in Fig. 7a, we find that most of the diseases are related to cancerous tumors of the pancreas and the prostate. Furthermore, pancreatic carcinoma is associated with other diseases such as pancreatic neoplasm, malignant neoplasm of the pancreas, and malignant neoplasm of the prostate [29]. Since ABO is linked with diseases related to pancreatic carcinoma, and other genes are related to these diseases as well, HOGCN captures this association (Fig. 7a) even though the entities are farther apart in the network. Similarly, HOGCN predicts the association between GPC3 and hepatoblastoma (Fig. 7b).
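The explanatory subnetworks above are built from all shortest paths between a gene and a disease. A pure-Python BFS sketch that enumerates every shortest path in an unweighted interaction graph (toy adjacency list with hypothetical node names):

```python
from collections import deque

def all_shortest_paths(adj, src, dst):
    """Enumerate every shortest path from src to dst in an unweighted graph."""
    dist, parents = {src: 0}, {src: []}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in dist:                 # first visit: record distance
                dist[v] = dist[u] + 1
                parents[v] = [u]
                q.append(v)
            elif dist[v] == dist[u] + 1:      # another shortest predecessor
                parents[v].append(u)
    if dst not in dist:
        return []
    def backtrack(v):
        if v == src:
            return [[src]]
        return [p + [v] for u in parents[v] for p in backtrack(u)]
    return backtrack(dst)

# Toy network: two length-3 routes from a gene "G" to a disease "D"
adj = {"G": ["x", "y"], "x": ["G", "m"], "y": ["G", "m"],
       "m": ["x", "y", "D"], "D": ["m"]}
paths = all_shortest_paths(adj, "G", "D")
print(paths)  # [['G', 'x', 'm', 'D'], ['G', 'y', 'm', 'D']]
```

Collecting the nodes on these paths gives exactly the kind of subnetwork visualized for the ABO and GPC3 case studies.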
Next, we perform a similar case study for DDIs and evaluate the predictions against DrugBank [35]. For this experiment, we make a prediction for every drug pair with GCN, SkipGNN, and HOGCN, and exclude the interactions already in the training set. Table V shows the top 5 interactions with an increase in interaction probability when higher orders of the neighborhood are considered. As discussed in Section 6.4, HOGCN makes predictions with higher confidence than GCN and SkipGNN for interactions that are likely to be true positives.
Drug 1        Drug 2         Probability (GCN)  Probability (SkipGNN)  Probability (HOGCN)  Evidence
Nelfinavir    Acenocoumarol  0.192              0.318                  0.417                [9]
Praziquantel  Itraconazole   0.609              0.721                  0.811                [28]
Cisapride     Droperidol     0.618              0.725                  0.823                [23]
Dapsone       Warfarin       0.632              0.720                  0.885                [34]
Levofloxacin  Tobramycin     0.663              0.760                  0.823                [25]
Moreover, we examine false positive DDI predictions of GCN and investigate the subnetworks of the corresponding drugs in the DDI network to explain the predictions. Table VI shows the top 5 interactions with a significant decrease in predicted confidence compared to the GCN-based models. Since these DDIs are false positives [35], the GCN-based models make overconfident predictions for them. In contrast, HOGCN significantly reduces the predicted confidence that these DDIs are true positives, indicating that the higher-order neighborhood allows HOGCN to identify false positive predictions. In particular, HOGCN can identify the false positive DDI between Belimumab and Estazolam even though the two drugs are 3 hops away from each other.
Drug 1             Drug 2        Probability (GCN)  Probability (SkipGNN)  Probability (HOGCN)
Tranylcypromine    Melphalan     0.925              0.478                  0.065
Belimumab          Estazolam     0.912              0.477                  0.178
Methotrimeprazine  Cloxacillin   0.907              0.406                  0.065
Hydrocodone        Melphalan     0.905              0.193                  0.012
Ibrutinib          Mecamylamine  0.899              0.398                  0.353
We select the subnetworks involving these drugs to investigate the reason for such predictions. Fig. 8 shows the subnetworks with all shortest paths between the drug pairs in Table VI. Examining the figure, we observe that the drugs in these false positive DDIs share common immediate neighbors in all cases. GCN makes wrong predictions for these DDIs with high confidence. SkipGNN, by considering skip similarity, becomes less confident that the interactions are true positives. HOGCN further reduces the predicted confidence, e.g., to 0.065 for Tranylcypromine and Melphalan, indicating that there is no association between these drugs.
These case studies show that higher-order neighborhood mixing not only provides information for the identification of novel interactions but also helps HOGCN reduce false positive predictions.
7 Conclusion
We present HOGCN, a novel deep graph convolutional network for biomedical interaction prediction. Our model adopts a higher-order graph convolutional layer that learns to mix the feature representations of neighbors at various scales. Experimental results on four interaction datasets demonstrate the superior and robust performance of the proposed model. Furthermore, we show that HOGCN makes accurate and calibrated predictions by considering higher-order neighborhood information.
There are several directions for future study. Our approach only considers known interactions to flag potential interactions. Other sources of biomedical information, such as the physicochemical and biological properties of biomedical entities, can provide additional signal about interactions, and we plan to investigate the integration of such features into the model. Moreover, since HOGCN aggregates neighborhood information at various distances and can flag novel interactions, it would be interesting to provide interpretable explanations for its predictions in the form of a small subgraph of the input interaction network that is most influential for each prediction [38].
Acknowledgments
This work was supported by the NSF [NSF1062422 to A.H.], [NSF1850492 to R.L.] and the NIH [GM116102 to F.C.].
References

[1] (2019) MixHop: higher-order graph convolutional architectures via sparsified neighborhood mixing. In International Conference on Machine Learning, pp. 21–29. Cited by: §1, §2, §3.1, §4.1.
 [2] (2018) Large-scale analysis of disease pathways in the human interactome. In PSB, pp. 111–122. Cited by: §1.
 [3] (2020) Missing link prediction using common neighbor and centrality based parameterized algorithm. Scientific Reports 10 (1), pp. 1–9. Cited by: §2, §2.
 [4] (2018) Simplicial closure and higher-order link prediction. Proceedings of the National Academy of Sciences 115 (48), pp. E11221–E11230. Cited by: §3.1.
 [5] (1950) Verification of forecasts expressed in terms of probability. Monthly weather review 78 (1), pp. 1–3. Cited by: §6.4.
 [6] (2017) Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine 34 (4), pp. 18–42. Cited by: §1, §3.2.
 [7] (2015) Grarep: learning graph representations with global structural information. In Proceedings of the 24th ACM international on conference on information and knowledge management, pp. 891–900. Cited by: §3.1.
 [8] (2017) Network propagation: a universal amplifier of genetic associations. Nature Reviews Genetics 18 (9), pp. 551. Cited by: §1.
 [9] (1999) Sequential interaction of ritonavir and nelfinavir with acenocoumarol (abstract 1069). In Seventh European Conference on Clinical Aspects and Treatment of HIV Infection. Lisbon, Portugal, Cited by: TABLE V.
 [10] (2017) Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine LearningVolume 70, pp. 1263–1272. Cited by: §1, §3.1.

[11] (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256. Cited by: §5.3.
 [12] (2016) Node2vec: scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 855–864. Cited by: §1, 2nd item.
 [13] (2020) SkipGNN: predicting molecular interactions with skip-graph networks. arXiv preprint arXiv:2004.14949. Cited by: §1, §1, §2, §4.1, 3rd item, §5.3, §6.1.
 [14] (2016) Predicting protein–protein interactions from the molecular to the proteome level. Chemical reviews 116 (8), pp. 4884–4909. Cited by: §2, §2.
 [15] (2015) Adam: a method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR). Cited by: §5.3.
 [16] (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Cited by: §1, §2, §4.1, 2nd item, §6.1.
 [17] (2016) Variational graph autoencoders. arXiv preprint arXiv:1611.07308. Cited by: §1, §1, §2, 1st item, §6.1.
 [18] (2019) GNE: a deep learning framework for gene network inference by aggregating biological information. BMC systems biology 13 (2), pp. 38. Cited by: §2.
 [19] (2019) Networkbased prediction of protein interactions. Nature communications 10 (1), pp. 1–8. Cited by: §1, §2, §2, 1st item, §6.1.
 [20] (2020) A reference map of the human binary protein interactome. Nature 580 (7803), pp. 402–408. Cited by: §1, 3rd item.
 [21] (2017) A network integration approach for drug-target interaction prediction and computational drug repositioning from heterogeneous information. Nature communications 8 (1), pp. 1–13. Cited by: §1.
 [22] (2008) Visualizing data using t-SNE. Journal of machine learning research 9 (Nov), pp. 2579–2605. Cited by: Fig. 3, §6.2.
 [23] (2000) Drug interactions with cisapride. Clinical pharmacokinetics 39 (1), pp. 49–75. Cited by: TABLE V.

[24] (2005) Predicting good probabilities with supervised learning. In Proceedings of the 22nd international conference on Machine learning, pp. 625–632. Cited by: §6.4.
 [25] (2009) Post-antibiotic effect of levofloxacin and tobramycin alone or in combination with cefepime against Pseudomonas aeruginosa. Chemotherapy 55 (6), pp. 446–450. Cited by: TABLE V.
 [26] (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in neural information processing systems, pp. 8026–8037. Cited by: §5.3.
 [27] (2014) Deepwalk: online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 701–710. Cited by: §1, §2, 1st item.
 [28] (2006) Clinically relevant drug interactions with antiepileptic drugs. British journal of clinical pharmacology 61 (3), pp. 246–255. Cited by: TABLE V.
 [29] (2020) The DisGeNET knowledge platform for disease genomics: 2019 update. Nucleic acids research 48 (D1), pp. D845–D855. Cited by: 4th item, §6.6, TABLE IV.
 [30] (2018) DeepInf: social influence prediction with deep learning. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2110–2119. External Links: ISBN 9781450355520 Cited by: §1.
 [31] (2018) HONE: higher-order network embeddings. arXiv preprint arXiv:1801.09303. Cited by: §3.1.
 [32] (2018) Higher-order network representation learning. In Companion Proceedings of The Web Conference 2018, WWW ’18, Republic and Canton of Geneva, CHE, pp. 3–4. External Links: ISBN 9781450356404, Link, Document. Cited by: §3.1.
 [33] (2014) Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research 15 (1), pp. 1929–1958. Cited by: §5.3.
 [34] (2012) Probable warfarin and dapsone interaction. Clinical and Applied Thrombosis/Hemostasis 18 (1), pp. 107–109. Cited by: TABLE V.
 [35] (2006) DrugBank: a comprehensive resource for in silico drug discovery and exploration. Nucleic acids research 34 (suppl_1), pp. D668–D672. Cited by: §6.2, §6.6, §6.6.
 [36] (2020) A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems. Cited by: §3.2.

[37] (2018) Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 974–983. External Links: ISBN 9781450355520. Cited by: §1.
 [38] (2019) GNNExplainer: generating explanations for graph neural networks. In Advances in neural information processing systems, pp. 9244–9255. Cited by: §7.
 [39] (2020) Graph embedding on biomedical networks: Methods, applications and evaluations. Bioinformatics 36 (4), pp. 1241–1251. External Links: Document, 1906.05017, ISSN 14602059 Cited by: §1.
 [40] (2018) High-order proximity preserved embedding for dynamic networks. IEEE Transactions on Knowledge and Data Engineering 30 (11), pp. 2134–2144. Cited by: §3.1.
 [41] (2018) Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics 34 (13), pp. i457–i466. Cited by: §1, §1.
 [42] (2018) BioSNAP datasets: stanford biomedical network dataset collection. http://snap.stanford.edu/biodata. Cited by: 1st item, 2nd item.