Hierarchical Protein Function Prediction with Tail-GNNs

07/24/2020 · by Stefan Spalević et al.

Protein function prediction may be framed as predicting subgraphs (with certain closure properties) of a directed acyclic graph describing the hierarchy of protein functions. Graph neural networks (GNNs), with their built-in inductive bias for relational data, are hence naturally suited for this task. However, in contrast with most GNN applications, the graph is related not to the input but to the label space. Accordingly, we propose Tail-GNNs, neural networks which naturally compose with the output space of any neural network for multi-task prediction, to provide relationally-reinforced labels. For protein function prediction, we combine a Tail-GNN with a dilated convolutional network which learns representations of the protein sequence, achieving a significant improvement in F_1 score and demonstrating the ability of Tail-GNNs to learn useful representations of labels and exploit them in real-world problem solving.


1 Introduction

Knowing the function of a protein informs us of its biological role in the organism. With large numbers of genomes being sequenced every year, the number of newly discovered proteins is growing rapidly. Protein function is most reliably determined in wet-lab experiments, but current experimental methods are too slow to keep pace with this influx of novel proteins. The development of tools for automated prediction of protein functions is therefore necessary. Fast and accurate prediction of protein function is especially important in the context of human diseases, since many of them are associated with specific protein functions.

The space of all known protein functions is defined by a directed acyclic graph known as the Gene Ontology (GO) (Ashburner et al., 2000), where each node represents one function and each edge encodes a hierarchical relationship between two functions, such as is-a or part-of (refer to Figure 2 for a visualisation). For every protein, its functions constitute a subgraph of GO, consistent in the sense that it is closed with respect to the predecessor relationship. GO contains thousands of nodes, with function subgraphs usually having dozens of nodes for each protein. Hence, the output of the protein function prediction problem is a subgraph of a hierarchically-structured graph.
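
For illustration, a minimal sketch of this closure property, assuming the ontology is stored as a networkx DiGraph with an edge (child, parent) for every is-a or part-of relation (the helper name function_subgraph is hypothetical, not code from this work):

```python
import networkx as nx

def function_subgraph(go: nx.DiGraph, annotated_terms):
    """Return the closure of a protein's annotated GO terms under the predecessor relation."""
    closed = set(annotated_terms)
    for term in annotated_terms:
        # With (child, parent) edges, the nodes reachable from `term` are exactly
        # its ancestor functions, up to the ontology root.
        closed |= nx.descendants(go, term)
    return closed
```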

This opens up a clear path of application for graph representation learning (Bronstein et al., 2017; Hamilton et al., 2017b; Battaglia et al., 2018), especially graph neural networks (GNNs) (Kipf and Welling, 2016; Veličković et al., 2017; Gilmer et al., 2017; Corso et al., 2020), given their natural inductive bias towards processing relational data.

One key aspect in which the protein function prediction task differs from most applications of graph representation learning, however, is the fact that the graph is specified in the label space—that is, we are given a multi-label classification task in which we have known relational inductive biases over the individual labels (e.g. if a protein has function v, it must also have all predecessor functions of v, under the closure constraint).

Driven by the requirement for a GNN to operate in the label space, we propose Tail-GNN, a graph neural network which learns representations of labels, introducing relational inductive biases into the flat label predictions of a feedforward neural network. Our results demonstrate that introducing this inductive bias provides significant gains on the protein function prediction task, paving the way to many other possible applications in the sciences (e.g., prediction of spatial phenomena over several correlated locations (Radosavljevic et al., 2010; Djuric et al., 2015), traffic state estimation (Djuric et al., 2011), and polypharmacy side effect prediction (Zitnik et al., 2018; Deac et al., 2019a)).

2 Tail-GNNs

Figure 1: A high-level overview of the protein function modelling setup in this paper. Proteins are represented using their amino acid sequences (x) and are passed through the labelling network (f), which computes latent vectors for each label (h_i). These latent vectors are passed to the Tail-GNN (g), which repeatedly aggregates their information along the edges of the gene ontology graph, computing an updated latent representation of each label (h'_i). Finally, a linear layer predicts the probability of the protein having the corresponding functions (y_i). The labelling network relies on dilated convolutions followed by global average pooling and reshaping. Note how dilated convolutions allow for an exponentially increasing receptive field at each amino acid.

In this section, we describe an abstract model which takes advantage of a Tail-GNN, followed by an overview of, and intuition for, the specific architectural choices we used for the protein function prediction task. The entire setup from this section is visualised in Figure 1.

Generally, we have a multi-label prediction task, from inputs x to outputs y_i, for each label i ∈ {1, ..., k}. We are also aware that there exist relations between labels, which we explicitly encode using a binary adjacency matrix A, such that A_ij = 1 implies that the prediction for label i can be related with the prediction for label j (note that different kinds of entries in A are also allowed, in case we would like to explicitly account for edge features).

Our setup consists of a labeller network

$$h_i = f(x)_i \qquad (1)$$

which attaches a latent vector h_i to each label i, for a given input x. Typically, these will be m-dimensional real-valued vectors, i.e. h_i ∈ ℝ^m.

These latent vectors are then provided to the Tail-GNN layer g, which is a node-level predictor: treating each label i as a node in a graph, h_i as its corresponding node features, and A as its corresponding adjacency matrix, it produces a prediction for each node:

$$y_i = g(H, A)_i \qquad (2)$$

That is, g(H, A), where H = (h_1, ..., h_k), provides the final predictions of the model for each label. As implied, the Tail-GNN is typically implemented within the graph neural network (Scarselli et al., 2008) framework, explicitly including the relational information.

Assuming f and g are differentiable w.r.t. their parameters, the entire system can be optimised end-to-end via gradient descent on the label errors w.r.t. the ground-truth values.
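
As a rough PyTorch-style sketch of this composition (class and attribute names are placeholders, not taken from the original implementation):

```python
import torch
import torch.nn as nn

class LabelSpaceModel(nn.Module):
    def __init__(self, labeller: nn.Module, tail_gnn: nn.Module):
        super().__init__()
        self.labeller = labeller   # f: x -> H, one latent vector per label, shape (k, m)
        self.tail_gnn = tail_gnn   # g: (H, A) -> y, one prediction (logit) per label

    def forward(self, x, adj):
        H = self.labeller(x)            # (batch, k, m)
        return self.tail_gnn(H, adj)    # (batch, k) label logits

# Since both f and g are differentiable, training reduces to gradient descent on, e.g.:
# loss = nn.functional.binary_cross_entropy_with_logits(model(x, adj), y_true)
```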

In our specific case, the inputs x are protein sequences of one-hot encoded amino acids, and the outputs y_i are binary labels indicating the presence or absence of individual functions for those proteins.

Echoing the protein modelling results of Fast-Parapred (Deac et al., 2019b), we have used a deep dilated convolutional neural network for f (similarly to ByteNet (Kalchbrenner et al., 2016) and WaveNet (Oord et al., 2016)). This architecture provides a parallelisable way of modelling amino-acid sequences without sacrificing performance compared to RNN encoders. The labelling network is fully convolutional (Springenberg et al., 2014): it predicts latent features for each amino acid, followed by global average pooling and reshaping the output to obtain a length-m vector for each label.
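
The sketch below illustrates such a fully convolutional labelling network, with exponentially increasing dilation, global average pooling and a reshape to one latent vector per label; the channel counts, latent size and amino-acid alphabet size are illustrative assumptions rather than the exact configuration used here.

```python
import torch
import torch.nn as nn

class DilatedLabeller(nn.Module):
    def __init__(self, n_amino=21, channels=64, n_layers=6, n_labels=123, m=8):
        super().__init__()
        self.embed = nn.Conv1d(n_amino, channels, kernel_size=1)
        self.convs = nn.ModuleList([
            nn.Conv1d(channels, channels, kernel_size=3,
                      dilation=2 ** i, padding=2 ** i)  # receptive field doubles per layer
            for i in range(n_layers)
        ])
        self.readout = nn.Conv1d(channels, n_labels * m, kernel_size=1)
        self.n_labels, self.m = n_labels, m

    def forward(self, x):                # x: (batch, n_amino, seq_len), one-hot amino acids
        h = torch.relu(self.embed(x))
        for conv in self.convs:
            h = torch.relu(conv(h))
        h = self.readout(h)              # (batch, n_labels * m, seq_len)
        h = h.mean(dim=-1)               # global average pooling over sequence positions
        return h.view(-1, self.n_labels, self.m)   # one m-dimensional latent per label
```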

Figure 2: Representation of a function subgraph on a small subset of the ontology we leveraged. Assume that the input protein has three functions: RNA binding, signaling receptor binding and protein kinase binding. Its function subgraph contains all predecessors of these functions (e.g. nucleic acid binding, enzyme binding, binding). Note that, as we go deeper in the ontology, the functions associated with the nodes become more specialized.

Since the gene ontology edges encode explicit containment relations between function labels, our Tail-GNN is closely related to the GCN model (Kipf and Welling, 2016). At each step, we update the latent features of each label by aggregating neighbourhood features across edges:

$$h'_i = \sigma\Big(\sum_{j \in N_i} \alpha_{ij} W h_j\Big) \qquad (3)$$

where N_i is the one-hop neighbourhood of label i in the GO, W is a shared weight matrix parametrising a linear transformation in each node, and α_ij is a coefficient of interaction from node j to node i, for which we attempt several variants: sum-pooling (Xu et al., 2018) (α_ij = 1), mean-pooling (Hamilton et al., 2017a) (α_ij = 1/|N_i|), and graph attention (α_ij = a(h_i, h_j), where a is an attention function producing scalar coefficients). We use the same attention mechanism as in GAT (Veličković et al., 2017).

Lastly, we also attempt to explicitly align with the containment inductive bias by leveraging max-pooling:

$$h'_i = \sigma\Big(\max_{j \in N_i} W h_j\Big) \qquad (4)$$

where the max is performed elementwise.
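
A single Tail-GNN layer along these lines might be sketched as follows (a PyTorch sketch covering the sum, mean and elementwise-max aggregators; the GAT-style attention variant is omitted for brevity, and adj is the binary label adjacency matrix as a float tensor):

```python
import torch
import torch.nn as nn

class TailGNNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, agg="sum"):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared weight matrix
        self.agg = agg

    def forward(self, H, adj):           # H: (k, in_dim), adj: (k, k)
        msgs = self.W(H)                 # linearly transform every label's features
        if self.agg == "sum":
            out = adj @ msgs
        elif self.agg == "mean":
            deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
            out = (adj @ msgs) / deg
        elif self.agg == "max":
            # elementwise max over each label's one-hop neighbourhood
            neg_inf = torch.full_like(msgs, float("-inf")).unsqueeze(0)
            stacked = torch.where(adj.bool().unsqueeze(-1),   # (k, k, 1) mask
                                  msgs.unsqueeze(0),          # (1, k, out_dim)
                                  neg_inf)
            out = stacked.max(dim=1).values
        return torch.relu(out)
```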

The final layer of our network is a shared linear layer, followed by a logistic sigmoid activation. It takes the latent label representations produced by Tail-GNN and predicts a scalar value for each label, indicating the probability of the protein having the corresponding function. We optimise the entire network end-to-end using binary cross-entropy on the ground-truth functions.
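
A minimal sketch of this readout (the latent dimensionality and label count shown are assumed values, not necessarily those used here):

```python
import torch
import torch.nn as nn

m, k = 8, 123                                        # assumed latent size and label count
readout = nn.Linear(m, 1)                            # shared across all labels
H_prime = torch.randn(k, m)                          # latents produced by the Tail-GNN
probs = torch.sigmoid(readout(H_prime)).squeeze(-1)  # (k,) per-function probabilities
```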

It is interesting to note that, by performing constrained relational computations in the label space, the operation of the Tail-GNN is closely related to conditional random fields (CRFs) (Lafferty et al., 2001; Krähenbühl and Koltun, 2011; Cuong et al., 2014; Belanger and McCallum, 2016; Arnab et al., 2018). CRFs have been combined with GNNs in prior work (Ma et al., 2018; Gao et al., 2019), primarily as a means of strengthening the GNN prediction; in our work, we express all computations using GNNs alone, relying on the fact that, if optimal, Tail-GNNs could learn to specialise to the computations of a CRF through neural execution (Veličković et al., 2019), while in principle having the opportunity to learn more data-driven rules for message passing between different labels.

Further, Tail-GNNs share some similarities with gated propagation networks (GPNs) (Liu et al., 2019), which leverage class relations to compute class prototypes for meta-learning (Snell et al., 2017). While both GPNs and Tail-GNNs perform GNN computations over a graph in the label space, the aim of GPNs is to compute structure-informed prototypes for a 1-NN classifier, whereas here we focus on multi-task predictions and directly produce outputs in an end-to-end differentiable fashion.

Beyond operating in the label space, GNNs have seen prior applications to protein function modelling by explicitly taking into account either the protein's residue contact map (Gligorijevic et al., 2019) or existing protein-protein interaction (PPI) networks. In particular, Hamilton et al. (2017a) provide the first study of explicitly running GNNs over PPI graphs in order to predict gene ontology signatures (Zitnik and Leskovec, 2017). However, as these models rely on the existence of either a reliable contact map or a PPI graph, they cannot be reliably used to predict functions for novel proteins (for which these may not yet be known). Such information, if assumed available, may be explicitly included as a relational component within the labeller network.

3 Experimental Evaluation

3.1 Dataset

We used training sequences and functional annotations from CAFA3, a protein function prediction challenge (Zhou et al., 2019). The functional annotations were represented by functional terms of the hierarchical structure of the Gene Ontology (GO) (Ashburner et al., 2000)—the version released in April 2020. Out of the three large groups of functions represented in GO, we used the Molecular Function Ontology (MFO), which contains 11,113 terms. Function subgraphs for each protein were obtained by propagating functional annotations to the root. We discarded obsolete nodes and functions occurring in fewer than 500 proteins in the original dataset, obtaining a reduced ontology with 123 nodes and 145 edges. Next, we eliminated proteins whose function subgraph contained only the root node (which is always active), as well as proteins longer than 1,000 amino acids.
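
A hypothetical preprocessing sketch applying these filters (variable names and data layout are assumptions: annotations maps each protein to its GO term set already propagated to the root, sequences maps each protein to its amino-acid string):

```python
from collections import Counter

def filter_dataset(sequences, annotations, root, min_proteins=500, max_len=1000):
    # Keep only functions annotated in at least `min_proteins` proteins.
    term_counts = Counter(t for terms in annotations.values() for t in terms)
    kept_terms = {t for t, c in term_counts.items() if c >= min_proteins}

    kept = {}
    for pid, terms in annotations.items():
        terms = terms & kept_terms
        if terms <= {root}:                 # only the (always active) root remains
            continue
        if len(sequences[pid]) > max_len:   # drop proteins longer than 1,000 amino acids
            continue
        kept[pid] = terms
    return kept
```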

All of the above constraints were devised with the aim of keeping the downstream task relevant, while at the same time making it simpler for the dilated convolutions to model—delegating most of the subsequent representational effort to the Tail-GNN. The final dataset contains 31,243 proteins, with an average sequence length of 431 amino acids. The average number of functions per protein is 7.

3.2 Training specifics

The dataset was randomly split into training/validation/test sets, with a rough proportion of 68:17:15 percent. We counted the individual label occurrences within these splits, observing that the split was appropriately stratified across all of them. The time of characterisation of protein function was not taken into account, since the aim was to examine whether a GNN-based method is able to cope with structured labels.

The architectural hyperparameters were determined based on validation set performance, using the F_1 score—a suitable measure for imbalanced label problems, which is also commonly used for evaluating models in CAFA challenges (Zhou et al., 2019). Via thorough hyperparameter sweeps, we decided on a labelling network of six dilated convolutional layers with exponentially increasing dilation rate. Initially, the individual amino acids are embedded into 16 features, and the individual layers each compute a fixed number of features, mirroring the results of Deac et al. (2019b).

For predicting functions directly from the labelling network, we follow it with a linear layer (one output per function) and global average pooling across amino acid positions, predicting the probability of each function occurring.

When pairing with the Tail-GNN, however, the linear layer computes k · m features, with m being the number of latent features computed per label (i.e. the dimensionality of the h_i vectors). We swept various small values of m, selecting the best-performing one on the validation set; further increasing m quickly leads to an increase in parameter count, causing overfitting and memory issues.

In addition, we concatenate five spectral features to each input node of the Tail-GNN, in the form of the five eigenvectors corresponding to the five largest eigenvalues of the graph Laplacian—inspired by the Graph Fourier Transform of Bruna et al. (2013).
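
Such spectral features can be computed along the following lines (a sketch, not necessarily the exact procedure used here):

```python
import numpy as np

def spectral_features(adj: np.ndarray, k: int = 5) -> np.ndarray:
    """Eigenvectors of the graph Laplacian for its k largest eigenvalues."""
    adj_sym = np.maximum(adj, adj.T)              # symmetrise the directed label graph
    laplacian = np.diag(adj_sym.sum(axis=1)) - adj_sym
    eigvals, eigvecs = np.linalg.eigh(laplacian)  # eigenvalues in ascending order
    return eigvecs[:, -k:]                        # columns for the k largest eigenvalues

# Each of the five columns is concatenated to the corresponding label's input features.
```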

For each choice of Tail-GNN aggregation, we evaluated configurations with one and two GNN layers, followed by a linear classifier for protein functions. We also assessed performance without incorporating the spectral features.

All models optimise the binary cross-entropy on the function predictions using the Adam SGD optimiser (Kingma and Ba, 2014), with a fixed learning rate and batch size, incorporating class weights to account for label imbalance. We train with early stopping on the validation F_1 score.
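
As a sketch of the class-weighted loss (the inverse-frequency weighting shown is an assumption, not necessarily the exact scheme used here):

```python
import torch
import torch.nn.functional as F

def weighted_bce(logits, targets, pos_weight):
    # pos_weight: (k,) tensor, e.g. n_negatives / n_positives per label,
    # estimated on the training split to counter label imbalance.
    return F.binary_cross_entropy_with_logits(logits, targets, pos_weight=pos_weight)
```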

3.3 Results

We evaluate the recovered optimised models across five random seeds. Results are given in Table 1; the labelling network is the baseline dilated convolutional network without leveraging GNNs. Additionally, we provide results across a variety of Tail-GNN configurations. Our results are consistent with the top-10 performance metrics in the CAFA3 challenge (Zhou et al., 2019), but a direct comparison is not possible since we use a reduced ontology.

Our results demonstrate a significant performance gain associated with appending a Tail-GNN to the labelling network, specifically when using the sum aggregator. While less aligned with the containment relation than maximisation, summation is also more "forgiving" with respect to any labelling mistakes: if Tail-GNN-max had learnt to perfectly implement containment, any mistakenly labelled leaves would cause large chunks of the ontology to be misclassified.

Further, we observe a performance gain associated with including the Laplacian eigenvectors: as node features, they act as a low-frequency indicator of global graph structure, further improving the results of Tail-GNN-sum.

While much of our analysis was centered around the protein function prediction task, we conclude by noting that the way Tail-GNNs are defined is task-agnostic, and could easily see application in other areas of the sciences (as discussed in the Introduction), with minimal modification to the setup.

Model                               Validation F_1    Test F_1
Labelling network
Tail-GNN-mean
Tail-GNN-GAT
Tail-GNN-max
Tail-GNN-sum
Tail-GNN-sum (no spectral fts.)
Table 1: Values of the F_1 score on our validation and test datasets for all considered architectures, aggregated over five random seeds.

References

  • A. Arnab, S. Zheng, S. Jayasumana, B. Romera-Paredes, M. Larsson, A. Kirillov, B. Savchynskyy, C. Rother, F. Kahl, and P. H. S. Torr (2018) Conditional random fields meet deep neural networks for semantic segmentation: combining probabilistic graphical models with deep learning for structured prediction. IEEE Signal Processing Magazine 35 (1), pp. 37–52. Cited by: §2.
  • M. Ashburner, C. A. Ball, J. A. Blake, D. Botstein, H. Butler, J. M. Cherry, A. P. Davis, K. Dolinski, S. S. Dwight, J. T. Eppig, et al. (2000) Gene ontology: tool for the unification of biology. Nature genetics 25 (1), pp. 25–29. Cited by: §1, §3.1.
  • P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al. (2018) Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261. Cited by: §1.
  • D. Belanger and A. McCallum (2016) Structured prediction energy networks. In ICML, Cited by: §2.
  • M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst (2017) Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine 34 (4), pp. 18–42. Cited by: §1.
  • J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun (2013) Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203. Cited by: §3.2.
  • G. Corso, L. Cavalleri, D. Beaini, P. Liò, and P. Veličković (2020) Principal neighbourhood aggregation for graph nets. arXiv preprint arXiv:2004.05718. Cited by: §1.
  • N. V. Cuong, N. Ye, W. S. Lee, and H. L. Chieu (2014) Conditional random field with high-order dependencies for sequence labeling and segmentation. Journal of Machine Learning Research 15, pp. 981–1009. Cited by: §2.
  • A. Deac, Y. Huang, P. Veličković, P. Liò, and J. Tang (2019a) Drug-drug adverse effect prediction with graph co-attention. arXiv preprint arXiv:1905.00534. Cited by: §1.
  • A. Deac, P. Veličković, and P. Sormanni (2019b) Attentive cross-modal paratope prediction. Journal of Computational Biology 26 (6), pp. 536–545. Cited by: §2, §3.2.
  • N. Djuric, V. Radosavljevic, Z. Obradovic, and S. Vucetic (2015) Gaussian conditional random fields for aggregation of operational aerosol retrievals. IEEE Geoscience and Remote Sensing Letters 12 (4), pp. 761–765. Cited by: §1.
  • N. Djuric, V. Radosavljevic, V. Coric, and S. Vucetic (2011) Travel speed forecasting by means of continuous conditional random fields. Transportation Research Record: Journal of the Transportation Research Board 2263, pp. 131–139. External Links: Document Cited by: §1.
  • H. Gao, J. Pei, and H. Huang (2019) Conditional random field enhanced graph convolutional neural networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 276–284. Cited by: §2.
  • J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl (2017) Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1263–1272. Cited by: §1.
  • V. Gligorijevic, P. D. Renfrew, T. Kosciolek, J. K. Leman, K. Cho, T. Vatanen, D. Berenberg, B. C. Taylor, I. M. Fisk, R. J. Xavier, et al. (2019) Structure-based function prediction using graph convolutional networks. bioRxiv, pp. 786236. Cited by: §2.
  • W. Hamilton, Z. Ying, and J. Leskovec (2017a) Inductive representation learning on large graphs. In Advances in neural information processing systems, pp. 1024–1034. Cited by: §2, §2.
  • W. L. Hamilton, R. Ying, and J. Leskovec (2017b) Representation learning on graphs: methods and applications. arXiv preprint arXiv:1709.05584. Cited by: §1.
  • N. Kalchbrenner, L. Espeholt, K. Simonyan, A. v. d. Oord, A. Graves, and K. Kavukcuoglu (2016) Neural machine translation in linear time. arXiv preprint arXiv:1610.10099. Cited by: §2.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §3.2.
  • T. N. Kipf and M. Welling (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Cited by: §1, §2.
  • P. Krähenbühl and V. Koltun (2011) Efficient inference in fully connected crfs with gaussian edge potentials. In NIPS, Cited by: §2.
  • J. Lafferty, A. McCallum, and F. Pereira (2001) Conditional random fields: probabilistic models for segmenting and labeling sequence data. In ICML, pp. 282–289. Cited by: §2.
  • L. Liu, T. Zhou, G. Long, J. Jiang, and C. Zhang (2019) Learning to propagate for graph meta-learning. In Advances in Neural Information Processing Systems, pp. 1037–1048. Cited by: §2.
  • T. Ma, C. Xiao, J. Shang, and J. Sun (2018) CGNF: conditional graph neural fields. Cited by: §2.
  • A. v. d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu (2016) Wavenet: a generative model for raw audio. arXiv preprint arXiv:1609.03499. Cited by: §2.
  • V. Radosavljevic, S. Vucetic, and Z. Obradovic (2010) Continuous conditional random fields for regression in remote sensing. In ECAI, Vol. 215, pp. 809–814. External Links: Document Cited by: §1.
  • F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini (2008) The graph neural network model. IEEE Transactions on Neural Networks 20 (1), pp. 61–80. Cited by: §2.
  • J. Snell, K. Swersky, and R. Zemel (2017) Prototypical networks for few-shot learning. In Advances in neural information processing systems, pp. 4077–4087. Cited by: §2.
  • J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller (2014) Striving for simplicity: the all convolutional net. arXiv preprint arXiv:1412.6806. Cited by: §2.
  • P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio (2017) Graph attention networks. arXiv preprint arXiv:1710.10903. Cited by: §1, §2.
  • P. Veličković, R. Ying, M. Padovano, R. Hadsell, and C. Blundell (2019) Neural execution of graph algorithms. arXiv preprint arXiv:1910.10593. Cited by: §2.
  • K. Xu, W. Hu, J. Leskovec, and S. Jegelka (2018) How powerful are graph neural networks?. arXiv preprint arXiv:1810.00826. Cited by: §2.
  • N. Zhou, Y. Jiang, T. R. Bergquist, A. J. Lee, B. Z. Kacsoh, A. W. Crocker, K. A. Lewis, G. Georghiou, H. N. Nguyen, M. N. Hamid, et al. (2019) The cafa challenge reports improved protein function prediction and new functional annotations for hundreds of genes through experimental screens. Genome biology 20 (1), pp. 1–23. Cited by: §3.1, §3.2, §3.3.
  • M. Zitnik, M. Agrawal, and J. Leskovec (2018) Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics 34 (13), pp. i457–i466. Cited by: §1.
  • M. Zitnik and J. Leskovec (2017) Predicting multicellular function through multi-layer tissue networks. Bioinformatics 33 (14), pp. i190–i198. Cited by: §2.