Ranking-based Convolutional Neural Network Models for Peptide-MHC Binding Prediction

12/04/2020 ∙ by Ziqi Chen, et al.

T-cell receptors can recognize foreign peptides bound to major histocompatibility complex (MHC) class-I proteins, and thus trigger the adaptive immune response. Therefore, identifying peptides that can bind to MHC class-I molecules plays a vital role in the design of peptide vaccines. Many computational methods, for example, the state-of-the-art allele-specific method MHCflurry, have been developed to predict the binding affinities between peptides and MHC molecules. In this manuscript, we develop two allele-specific Convolutional Neural Network (CNN)-based methods named ConvM and SpConvM to tackle the binding prediction problem. Specifically, we formulate the problem as optimizing the rankings of peptide-MHC bindings via ranking-based learning objectives. Such optimization is more robust and tolerant to measurement inaccuracies in binding affinities, and therefore enables more accurate prioritization of binding peptides. In addition, we develop a new position encoding method in ConvM and SpConvM to better identify the most important amino acids for binding events. Our experimental results demonstrate that our models significantly outperform the state-of-the-art methods including MHCflurry, with an average percentage improvement of 6.70% on AUC and 17.10% on ROC5 across 128 alleles.


1 Keywords:

deep learning, prioritization, peptide vaccine design, convolutional neural networks, attention

2 Introduction

Immunotherapy, an important treatment of cancers, treats the disease by boosting patients’ immune systems to kill cancer cells Waldman2020; Esfahani2020; Couzin-Frankel1432; Mellman2011. To trigger patients’ adaptive immune responses, cytotoxic T cells, also known as CD8+ T-cells, have to recognize peptides presented on the cancer cell surface Blum2013; valitutti1995serial. These peptides are fragments derived from self-proteins or pathogens by proteasomal proteolysis within the cell. For the peptides to be presented on the cell surface and recognized by CD8+ T-cell receptors, they need to be brought from inside the cells to the cell surface, typically by binding to and being transported by major histocompatibility complex (MHC) class-I molecules. To mimic naturally occurring peptides from pathogens, synthetic peptide vaccines have been developed for therapeutic purposes Purcell2007. Therefore, to design successful peptide vaccines, it is critical to identify and study peptides that can bind with MHC class-I molecules.

Many computational methods have been developed to predict the binding affinities between peptides and MHC class-I molecules Donnell2018; Han2017. These existing computational methods can be categorized into two types: allele-specific methods and pan methods. Allele-specific methods train one model per allele so that the model can capture binding patterns specific to that allele, and thus is better customized to it Lundegaard2008; Donnell2018. Pan methods train one model for all alleles at the same time, so that information across different alleles can be shared and integrated into a general model Jurtz2017; Hu2018. These existing methods achieve strong performance on the prediction of binding affinities. However, most existing methods formulate the prediction problem as predicting the exact binding affinity values (e.g., IC50 values) via regression. Such formulations may suffer from two potential issues. First, they tend to be sensitive to measurement errors when the measured IC50 values are not accurate. In addition, many of these methods use ranking-based measurements such as Kendall’s Tau correlation to evaluate the performance of regression-based methods Bhattacharya2017; Odonnell2020. This can lead to sub-optimal solutions, as small regression errors do not necessarily correspond to high Kendall’s Tau. Therefore, these methods are limited in their capability of prioritizing the peptide-MHC pairs most likely to have high binding affinities.

In this study, we formulate the problem as prioritizing the most probable peptide-MHC binding pairs via ranking-based learning. We propose three ranking-based learning objectives such that, by optimizing these objectives, peptide-MHC pairs of high binding affinities are ranked higher than those of low binding affinities. Coupled with these objectives, we develop two allele-specific Convolutional Neural Network (CNN)-based methods with attention mechanisms, denoted as ConvM and SpConvM. ConvM extracts local features of peptide sequences using 1D convolutional layers, and learns the importance of different positions in peptides using a self-attention mechanism. In addition to the local features used in ConvM, SpConvM represents the peptide sequences at different granularity levels by leveraging both global and local features of peptide sequences. We also develop a new position encoding method together with the self-attention mechanism so as to differentiate amino acids at different positions. We compare various combinations of model architectures and objective functions of our methods with the state-of-the-art baseline MHCflurry Donnell2018 on IEDB datasets Vita2018. Our experimental results demonstrate that our models significantly outperform the state-of-the-art methods with an average percentage improvement of 6.70% on AUC and 17.10% on ROC5 across 128 alleles.

We summarize our contributions below:

  • We formulate the problem as optimizing the rankings of peptide-MHC pairs instead of predicting the exact binding affinity values. Our experimental results demonstrate that our ranking-based learning significantly improves the performance of identifying the most probable peptide-MHC binding pairs.

  • We develop two allele-specific methods, ConvM and SpConvM, with position encoding and self-attention, which enable better learning of the importance of amino acids at different positions in determining peptide-MHC binding.

  • We incorporate both global and local features in SpConvM to better capture and learn from different granularities of peptide sequence information.

  • Our methods outperform the state-of-the-art baseline MHCflurry Donnell2018 on IEDB datasets in prioritizing the most probable peptide-MHC binding pairs.

3 Literature Review

The existing computational methods for peptide-MHC binding prediction can be generally classified into two categories: linear regression-based methods and deep learning (DL)-based methods. Below, we present a literature review for each of the categories, including the key ideas and the representative work.

3.1 Peptide Binding Prediction via Linear Regression

Many early methods for peptide-MHC binding prediction are based on linear regression. For example, Peters et al. Peters2005 proposed the Stabilized Matrix Method (SMM), which applies linear regression to predict binding affinities from one-hot encoded vector representations of peptide sequences. Kim et al. Kim2009 derived a novel amino acid similarity matrix named the Peptide:MHC Binding Energy Covariance (PMBEC) matrix and incorporated it into SMM to improve its performance. In PMBEC, each amino acid is represented by the covariance of its relative binding energy contributions with all other amino acids. Recent work Bonsack2019; Zhao2018 demonstrates that these linear regression-based methods are inferior to DL-based methods, and therefore, in our work, we focus on DL-based methods.

3.2 Peptide Binding Prediction via Deep Learning

The DL-based models can be categorized into allele-specific methods and pan methods. Allele-specific methods train a model for each allele and learn the binding patterns of each allele separately. In contrast, pan methods train one model for all alleles to learn all the binding patterns together within one model. Both types of methods use similar encoding schemes, such as one-hot encoding, BLOSUM62 encoding and word2vec Goldberg2014.

3.2.1 Allele-Specific Deep Learning Methods

Among these allele-specific methods, Lundegaard et al. Lundegaard2008 proposed NetMHC, which takes the embeddings of peptide sequences as input and applies neural networks with one hidden layer to predict peptide-MHC binding for peptides of fixed length. In NetMHC, the hidden layer is a fully-connected (FC) layer, which learns the global features of peptide sequences such as the positions and types of specific amino acids. Andreatta et al. Andreatta2015 extended NetMHC with padding so that the model can handle peptides of variable length. Kuksa et al. Kuksa2015 developed two nonlinear high-order methods: high-order neural networks pre-trained with high-order semi-restricted Boltzmann machines (RBMs), and high-order kernel support vector machines. Both the high-order RBMs and the high-order kernels are designed to capture direct, strong high-order interactions between features. Bhattacharya et al. Bhattacharya2017 developed a deep recurrent neural network based on gated recurrent units (GRUs) to capture sequential features from peptides of various lengths. Vang et al. Vang2017 applied two layers of 1D convolution on the embeddings of peptide sequences to learn local binding patterns within each k-mer of amino acids. O’Donnell et al. Donnell2018 designed a deep model named MHCflurry with locally-connected layers, which are used to learn position-specific local features from the peptide sequences. MHCflurry has been demonstrated to achieve better or similar performance compared with most of the other prediction methods Boehm2019; Zhao2018.

3.2.2 Pan Deep Learning Methods

Nielsen et al. Nielsen2016 developed a DL-based pan method named NetMHCpan3.0. This method takes the embeddings of pseudo MHC sequences and peptide sequences as input, and then applies an ensemble of neural networks to predict the binding affinities of peptide-MHC pairs. Jurtz et al. Jurtz2017 extended NetMHCpan3.0 to NetMHCpan4.0 by training the model on both binding affinity data and eluted ligand data. Their model shares a hidden layer between the two kinds of data and applies two different output layers to predict binding affinities and eluted ligands, respectively, for peptide-MHC pairs. Phloyphisut et al. Phloyphisut2019 developed a deep learning model that uses GRUs to learn the embeddings of peptides and an FC layer to learn the embeddings of alleles. The two types of embeddings are then concatenated to predict peptide-MHC binding probabilities. Han et al. Han2017 encoded peptide-MHC pairs into image-like array (ILA) data and applied deep 2D convolutional neural networks to extract possible peptide-MHC interactions from the ILA data. Hu et al. Hu2018 combined a deep convolutional neural network with an attention module. They applied multiple convolutional layers to extract features at different levels. The extracted features are integrated with the features learned from the attention mechanism and fed into the output layer to predict binding affinities of peptide-MHC pairs. O’Donnell et al. Odonnell2020 developed a pan-allele binding affinity predictor and an allele-independent antigen presentation predictor to calculate presentation scores of peptide-MHC pairs. Their binding affinity predictor includes upstream and downstream residues of peptides from their source proteins to improve model performance. Note that this method is a pan method and requires the source proteins of peptides; therefore, we do not compare our methods with it.

4 Materials

4.1 Peptide-MHC Binding Data

The dataset is collected from the Immune Epitope Database (IEDB) Vita2018. Each peptide-MHC entry in the dataset measures the binding affinity between a peptide and an allele. These binding affinity entries may have either quantitative values (e.g., IC50 values in nM) or qualitative levels indicating binding strength. The mapping between quantitative values and qualitative levels is shown in Table 1. Note that higher IC50 values indicate lower binding affinities.

  qualitative quantitative level
  negative >5,000nM 1
  positive-low 1,000-5,000nM 2
  positive-intermediate 500-1,000nM 3
  positive 100-500nM 4
  positive-high 0-100nM 5
Table 1: Binding Affinity Measurement Mapping
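As an illustration, the Table 1 mapping and the majority-voting rule used to resolve duplicated or conflicting entries (Section 4.1) can be sketched as follows; the helper names (`affinity_to_level`, `resolve_conflicts`) are ours, not from the original pipeline:

```python
from collections import Counter
from statistics import mean

def affinity_to_level(ic50_nm):
    """Map a quantitative IC50 value (nM) to a qualitative level (Table 1)."""
    if ic50_nm > 5000: return 1   # negative
    if ic50_nm > 1000: return 2   # positive-low
    if ic50_nm > 500:  return 3   # positive-intermediate
    if ic50_nm > 100:  return 4   # positive
    return 5                      # positive-high

def resolve_conflicts(measurements):
    """measurements: IC50 values (nM) for one peptide-MHC pair.
    Keep the pair only if a unique most-frequent level exists; then assign
    the average value within that level. Returns None if no majority."""
    levels = [affinity_to_level(m) for m in measurements]
    (top_level, top_count), *rest = Counter(levels).most_common()
    if rest and rest[0][1] == top_count:   # tied levels -> no consensus
        return None
    return mean(m for m, l in zip(measurements, levels) if l == top_level)
```

For example, two measurements in the positive-high level plus one negative measurement resolve to the average of the two positive-high values, while one positive-high plus one negative measurement have no majority level and the pair is dropped.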

We combined the widely used IEDB benchmark dataset curated by Kim et al. Kim2014 and the latest data added to IEDB (downloaded from the IEDB website on Jun. 24, 2019). The benchmark dataset contains two datasets, BD2009 and BD2013, compiled in 2009 and 2013, respectively. BD2009 consists of 137,654 entries, and BD2013 consists of 179,692 entries. The latest dataset consists of 189,063 peptide-MHC entries. Specifically, we excluded entries with non-specific, mutant or unparseable allele names such as HLA-A2. We then combined the datasets by processing duplicated entries and entries with conflicting affinities as follows. We first mapped the quantitative values of all duplicated or conflicting entries into qualitative levels based on Table 1, and used majority voting to identify the major binding level of each peptide-MHC pair. If such a binding level could not be identified, we simply removed all the conflicting entries; otherwise, we assigned the average quantitative value within the identified major binding level to the peptide-MHC pair. The combined dataset consists of 202,510 entries across 128 alleles and 53,253 peptides, as in Table 2. We further normalized the binding affinity values from the range [0, 50,000nM] into [0, 1] via the formula

ŷ = clamp(1 − log(y) / log(50,000), 0, 1)        (1)

where y is the measured binding affinity value (in nM), and clamp(x, 0, 1) denotes that x is clamped into the range [0, 1]. By using the above clamp function, smaller/larger binding affinity values, corresponding to higher/lower binding affinities, are converted to higher/lower normalized values.
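A minimal sketch of this normalization, assuming the conventional 50,000 nM cap (the function name is ours):

```python
import math

def normalize_affinity(ic50_nm, cap=50_000.0):
    """Map a measured IC50 (nM) into [0, 1]; lower IC50 -> higher score.
    The 50,000 nM cap is the common convention (an assumption here)."""
    x = 1.0 - math.log(max(ic50_nm, 1e-12)) / math.log(cap)
    return min(max(x, 0.0), 1.0)   # clamp into [0, 1]
```

With this mapping, 1 nM (a very strong binder) normalizes to 1.0, the 50,000 nM cap normalizes to 0.0, and the function is monotonically decreasing in between.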

   variables count
   entries 202,510
   alleles 128
   peptides 53,253
Table 2: Data Statistics

5 Definitions and Notations

All the key definitions and notations are listed in Table 3.

notation meaning
 peptides & alleles
  p          a peptide
  a_i        an amino acid of a peptide sequence
  P          a set of peptides
  m          an allele
 binding
  y / ŷ      original/normalized binding affinity for a peptide-MHC pair
  l(p)       binding level for a peptide-MHC pair
 embeddings
  e_i        encoding vector of amino acid type
  v_i        embedding vector of each amino acid
  c_i, n_i   position embeddings of each position (w.r.t. C-end and N-end)
  F          feature matrix for a peptide sequence
  F_g        feature matrix for a padded peptide sequence
             (i.e., input of global kernels in SpConvM)
 parameters
  f          a scoring function
  d          dimension of amino acid embedding
  n_f        number of filters in convolutional layer
  d_p        dimension of position embedding
  n_g        number of global kernels in SpConvM
  d_h        dimension of hidden units in fully-connected layer
  s          kernel size in convolutional layer
  α_i        attention value learned in attention layer
Table 3: Notations

6 Methods

We developed two new models, ConvM and SpConvM (discussed in Section 6.1 and Section 6.2, respectively), and compare them with MHCflurry Donnell2018, the state-of-the-art method used as the baseline. In terms of the embeddings of amino acids, we compare the performance of the models with three embedding methods for amino acids and their combinations. In terms of the loss functions, we developed three pair-wise hinge loss functions, and compare them with the conventional mean-square loss function used in MHCflurry.

6.1 Convolutional Neural Networks with Attention Layers (ConvM)

In this section, we introduce our new model ConvM, a convolutional neural network with attention layers. Figure 1 presents the architecture of ConvM.

6.1.1 Peptide Representation in ConvM

In ConvM, we first represent each amino acid, denoted as a_i, in a peptide sequence, denoted as p = (a_1, a_2, ..., a_L) (started from the C-end), using two types of information. The first encodes the type of the amino acid using the BLOSUM62 matrix or one-hot encoding. The details about the encoding of amino acids are described in Section 7.1.1. The second encodes the position of each amino acid in the peptide sequence, as it has been demonstrated Donnell2018 that different positions of peptides contribute differently to their binding to alleles. In particular, each position of the peptide sequence, regardless of which amino acid is at that position, has two position vectors: for the i-th position from the C-end, we use c_i and n_i to represent the position information with respect to the C-end and the N-end, respectively. The two position vectors together accommodate the variation of peptide lengths. Thus, each amino acid is represented as a feature vector v_i = [e_i; c_i; n_i], where e_i is a_i’s embedding with an encoding method in Section 7.1.1, and d_p is the dimension of the position embedding vectors; a peptide of L amino acids is represented as a feature matrix F of size L × (d + 2d_p). The position vectors are learned in order to optimize the peptide representations.
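The peptide representation above can be sketched as follows; the dimensions, the random initialization, and the names are illustrative (in a real model the lookup tables would be trainable parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_pos, max_len = 20, 8, 15        # illustrative dimensions

# Lookup tables (trainable in the real model; random here):
aa_embed  = {aa: rng.normal(size=d) for aa in "ACDEFGHIKLMNPQRSTVWY"}
pos_c_end = rng.normal(size=(max_len, d_pos))   # position from the C-end
pos_n_end = rng.normal(size=(max_len, d_pos))   # position from the N-end

def peptide_features(peptide):
    """Each residue -> [type embedding ; C-end position ; N-end position]."""
    L = len(peptide)
    rows = [np.concatenate([aa_embed[aa], pos_c_end[i], pos_n_end[L - 1 - i]])
            for i, aa in enumerate(peptide)]
    return np.stack(rows)            # shape (L, d + 2 * d_pos)
```

Because each residue carries both its distance from the C-end and from the N-end, peptides of different lengths (8 to 15 residues) share consistent positional information at both termini.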

Figure 1: Architectures of ConvM and SpConvM

6.1.2 Model Architecture of ConvM

The ConvM model consists of a 1D convolutional layer, a self-attention layer and a fully-connected layer, as demonstrated in Figure 1. The 1D convolutional layer takes the peptide feature matrix F as input, and extracts local feature patterns from the peptide sequences via 1D convolution using n_f kernels of size s. The output of the 1D convolutional layer is an embedding matrix H, in which each column h_i represents the embedding of the i-th s-mer produced by the n_f kernels. Batch normalization is applied to fix the mean and variance of the embedding matrix H in each batch. After batch normalization, a rectified linear unit (ReLU) is applied as the non-linear activation function on the embedding matrix. Then, we apply the self-attention mechanism Chorowski2015 to convert the embedding matrix into an embedding vector for the input peptide as follows. First, the weight α_i for the i-th s-mer is calculated as follows,

α_i = exp(w^T tanh(W h_i + b)) / Σ_j exp(w^T tanh(W h_j + b))        (2)

where W, b and w are the parameters of the self-attention layer, and d_h is the number of hidden units in the self-attention layer. With the weight on each s-mer, the embedding of the whole sequence is calculated as the weighted sum of all s-mer embeddings, that is,

h_p = Σ_i α_i h_i        (3)

The embedding vector h_p of the input peptide is then fed into the fully-connected layer to predict peptide binding at the output layer. We will discuss the loss functions used at the output layer in Section 6.3.
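The attention pooling step can be sketched with NumPy as follows, using a common additive (Chorowski-style) form; the shapes and parameter names are illustrative assumptions:

```python
import numpy as np

def attention_pool(H, W, b, w):
    """H: (L, n_f) matrix of k-mer embeddings from the convolutional layer.
    Returns the pooled peptide embedding (n_f,) and attention weights (L,)."""
    scores = np.tanh(H @ W.T + b) @ w        # unnormalized weights, shape (L,)
    alpha = np.exp(scores - scores.max())    # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ H, alpha                  # weighted sum over positions

rng = np.random.default_rng(1)
L, n_f, d_h = 7, 16, 32                      # illustrative sizes
H = rng.normal(size=(L, n_f))
W = rng.normal(size=(d_h, n_f))
b = rng.normal(size=d_h)
w = rng.normal(size=d_h)
emb, alpha = attention_pool(H, W, b, w)
```

The attention weights sum to one over positions, so the pooled vector is a convex combination of the per-position embeddings, and the learned weights indicate which positions matter most for binding.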

6.2 Convolutional Neural Networks with Global Kernels and Attention Layers (SpConvM)

We further develop ConvM into a new model with global kernels, denoted as SpConvM, as in Figure 1. The use of global kernels is inspired by Bhattacharya et al. Bhattacharya2017, who demonstrate that global kernels within CNN models can significantly improve the performance of peptide binding prediction. As ConvM primarily extracts and utilizes local features, the additional global kernels extract global features from the entire peptide sequences that could be useful for prediction but cannot be captured by local convolution. In order to use global kernels, we pad the peptide sequences of various lengths to length 15, inserting 0 vectors in the middle of the peptide representations in the same way as in MHCflurry. More details about padding are available in Section 7.1.2. The padded peptide sequences are encoded into a feature matrix F_g in the same way as in ConvM, except that the position embeddings are not included, because the global kernels will overwrite the local information after the convolution.

Given the input F_g, the convolution using the n_g global kernels generates a vector g. We concatenate g and h_p (i.e., the embedding vector calculated from the local kernels) to construct a local-global embedding vector for the input peptide sequence, and feed it into the fully-connected layer to predict peptide binding as in Figure 1.

6.3 Loss Functions

We propose three pair-wise hinge loss functions, denoted as L_v, L_l and L_cl (for value-based, level-based and constrained level-based), respectively. We will compare these loss functions with the widely used mean-square loss function Donnell2018, denoted as MS, in learning peptide bindings.

6.3.1 Hinge Loss Functions for Peptide Binding Ranking

We first evaluate hinge losses as the loss function in conjunction with various model architectures. The use of hinge loss is inspired by the following observation. In the literature, peptide-MHC binding prediction is often formulated either as a regression problem, in which the binding affinities between peptides and alleles are predicted, or as a classification problem, in which the target is to predict whether a peptide will bind to an allele. However, in practice, it is also important to first prioritize the most promising peptides with acceptable binding affinities for further assessment, and regression and classification are not optimal for prioritization. Besides, recent work has already employed several evaluation metrics on top-ranked peptides; for example, Zeng2019 evaluated performance via the true positive rate at a 5% false positive rate, which suggests the importance of top-ranked peptides in addition to accurate affinity prediction. All of these inspire us to consider a ranking-based formulation for peptide prioritization.

Given two normalized binding affinity values ŷ_i and ŷ_j of any two peptides p_i and p_j with respect to an allele, the allele-specific pair-wise ranking problem can be considered as learning a scoring function f, such that

f(p_i) > f(p_j),  if ŷ_i > ŷ_j        (4)

Please note that f(p_i) is a score for peptide p_i, which is not necessarily close to the binding affinity ŷ_i, as long as it reconstructs the ranking structure among all peptides. This gives the ranking-based formulation more flexibility to identify the most promising peptides without accurately estimating their binding affinities. To learn such scoring functions, hinge loss is widely used, and thus we develop three hinge loss functions to emphasize different aspects during peptide ranking.

Value-based Hinge Loss Function

The first hinge loss function, denoted as L_v, aims to correctly rank peptides with significantly different binding affinities. Given two peptides p_i and p_j, this hinge loss function is defined as follows:

L_v(p_i, p_j) = max(0, (ŷ_i − ŷ_j + c) − (f(p_i) − f(p_j))),  if l(p_i) > l(p_j)        (5)

where l(p_i) denotes the binding level of peptide p_i according to Table 1; l(p_i) > l(p_j) denotes that the binding level of peptide p_i is higher than that of peptide p_j; ŷ_i and ŷ_j are the ground-truth normalized binding affinities of p_i and p_j, respectively; and c is a pre-specified constant to increase the difference between the two predicted scores. L_v learns from two peptides of different binding levels and defines a margin between the two peptides as the difference of their ground-truth binding affinities plus the constant c. If two peptides p_i and p_j are on different binding levels, and the difference of their predicted scores is smaller than the margin, this pair of peptides contributes to the overall loss; otherwise, the loss of this pair is 0. Note that L_v is only defined on peptides of different binding levels. For peptides with the same or similar binding affinities, L_v allows incorrect rankings among them.
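A minimal sketch of this pair-wise loss (the constant c = 0.1 and the function name are illustrative):

```python
def value_hinge_loss(f_i, f_j, y_i, y_j, c=0.1):
    """Pair loss for peptides i, j where i has the HIGHER binding level.
    Margin = (y_i - y_j) + c on the normalized ground-truth affinities."""
    margin = (y_i - y_j) + c
    return max(0.0, margin - (f_i - f_j))
```

If the predicted gap already exceeds the margin the pair contributes zero loss; otherwise the shortfall is the loss.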

Level-based Hinge Loss Function

Instead of ranking with respect to the margin as in L_v, we relax the ranking criterion and use a margin based on the difference of binding levels (Table 1). Thus, the second hinge loss, denoted as L_l, is defined as follows:

L_l(p_i, p_j) = max(0, δ(l(p_i) − l(p_j)) − (f(p_i) − f(p_j))),  if l(p_i) > l(p_j)        (6)

where δ is a constant. Given a pair of peptides in two different binding levels, similar to L_v, L_l requires that if the difference of their predicted scores is smaller than a margin, this pair of peptides contributes to the overall loss; otherwise, the loss of these two peptides is 0. However, unlike L_v, the margin defined in L_l depends on the difference of binding levels between two peptides (i.e., δ(l(p_i) − l(p_j))). Therefore, in L_l, the margin from any peptide on one level to any peptide on another level is the same. Note that L_l is defined on peptides of different binding levels, and thus also allows incorrect rankings among peptides of the same binding level, as in L_v; the difference from L_v is in how the margin is calculated.

Constrained Level-based Hinge Loss Function

The third hinge loss function, denoted as L_cl, extends L_l by adding a constraint that two peptides of a same binding level should have similar predicted scores. This hinge loss is defined as follows:

L_cl(p_i, p_j) = max(0, |f(p_i) − f(p_j)| − δ)                        if l(p_i) = l(p_j)        (7)
              = max(0, δ(l(p_i) − l(p_j)) − (f(p_i) − f(p_j)))        if l(p_i) > l(p_j)

Given a pair of peptides on a same binding level, the added constraint (the case l(p_i) = l(p_j)) requires that if the absolute difference of their predicted scores is smaller than the pre-specified margin δ, the loss is zero; otherwise, this pair has non-zero loss. The constraint on the absolute difference allows incorrect rankings among peptides on a same binding level as long as their predicted scores are similar.

6.3.2 Mean-Square Loss

We also compare with a mean-square loss function, denoted as MS, proposed in Donnell2018; Paul2019, which can fit entries without exact binding affinity values, as below:

L_MS(p) = (f(p) − ŷ)²              if p is quantitative        (8)
        = max(0, f(p) − ŷ)²        if p is qualitative with “>”
        = max(0, ŷ − f(p))²        if p is qualitative with “<”

where “p is quantitative" denotes that the peptide-MHC entry is associated with an exact binding affinity value y. In this case, the MS loss is calculated as the squared difference between the predicted score f(p) and ŷ (ŷ is normalized from y as in Equation 1). In Equation 8, “p is qualitative" denotes that p is associated with a binding level instead of a binding affinity value (Table 1). In this case, ŷ is normalized from the binding level thresholds (i.e., used as y in Equation 1). When a qualitative entry has “>”, that is, the peptide does not bind to the allele and the binding affinity is low (i.e., a large binding affinity value), the predicted score should be small enough compared to ŷ in order not to increase the loss. When a qualitative entry has “<”, that is, the peptide binds to the allele with a reasonably high binding affinity (i.e., a small binding affinity value), the predicted score should be large enough compared to ŷ in order not to increase the loss.

Note that in MS, the predicted score f(p) needs to be normalized into the range [0, 1]. This is because ŷ is in the range [0, 1] (Equation 1), so f(p) needs to be in the same range, and thus neither f(p) nor ŷ will dominate the squared errors due to substantially large or small values. However, in the three hinge loss functions (Equations 5, 6 and 7), a potential difference in range between f(p) and ŷ or l(p) can be accommodated by the constant c (Equation 5) or δ (Equations 6 and 7), respectively. In MS, we use the sigmoid function to normalize f(p).
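The per-entry MS loss with inequality-qualified measurements can be sketched as follows; the qualifier convention is an assumption based on the description above:

```python
def ms_loss(pred, y_norm, qualifier="="):
    """MHCflurry-style mean-square loss for one entry.
    qualifier "=" : exact measured value -> plain squared error.
    qualifier ">" : measured affinity above a threshold (non-binder);
                    only over-prediction is penalized.
    qualifier "<" : measured affinity below a threshold (binder);
                    only under-prediction is penalized."""
    if qualifier == "=":
        err = pred - y_norm
    elif qualifier == ">":
        err = max(0.0, pred - y_norm)
    else:  # "<"
        err = max(0.0, y_norm - pred)
    return err * err
```

Both `pred` and `y_norm` are assumed to already be normalized into [0, 1].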

7 Experimental Settings

7.1 Baseline methods

7.1.1 Encoding Methods

Encoding methods represent each amino acid with a vector. Popular encoding methods used in previous work include BLOSUM62 encoding Donnell2018; Jurtz2017; Nielsen2016, one-hot encoding Phloyphisut2019; Bhattacharya2017 and the word2vec embedding method Vang2017. BLOSUM62 encoding utilizes the BLOSUM62 matrix Henikoff1992, which measures the evolutionary divergence information among amino acids. We use the i-th row of the matrix as the feature of the i-th amino acid. One-hot encoding represents the i-th natural amino acid with a one-hot vector, in which all elements are ‘0’ except the i-th position, which is ‘1’. word2vec learns the embeddings of amino acids from their contexts in protein or peptide sequences. This embedding method requires learning on a large corpus of amino acid sequences, and is much more complicated than the other encoding methods. However, it has been demonstrated Phloyphisut2019 that the word2vec embedding method is only comparable to the simpler encoding methods, and therefore, we use BLOSUM62 encoding and one-hot encoding, but not word2vec. Besides the above encoding methods, we also evaluate another deep encoding method, denoted as Deep, in which the encoding of each amino acid is learned during the training process. Deep encoding is not deterministic, and the dimension of its embedding vectors needs to be specified as a predefined hyper-parameter. We also combine the different representations of amino acids generated by the above three encoding methods. These combinations include BLOSUM62+one-hot, BLOSUM62+Deep, one-hot+Deep and BLOSUM62+one-hot+Deep, where “+" represents concatenation of the amino acid embeddings from the different encoding methods.
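One-hot encoding and the “+” concatenation of encodings can be sketched as follows; the Deep table below is randomly initialized for illustration, whereas in the model it would be learned:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 natural amino acids

def onehot(peptide):
    """One-hot encoding: row i has a single 1 at that residue's index."""
    idx = [AMINO_ACIDS.index(aa) for aa in peptide]
    X = np.zeros((len(peptide), len(AMINO_ACIDS)))
    X[np.arange(len(peptide)), idx] = 1.0
    return X

# A "Deep" encoding is a trainable per-amino-acid table (random here):
rng = np.random.default_rng(0)
deep_table = rng.normal(size=(len(AMINO_ACIDS), 20))

def onehot_plus_deep(peptide):
    """"+" concatenation: stack per-residue vectors of two schemes."""
    idx = [AMINO_ACIDS.index(aa) for aa in peptide]
    return np.hstack([onehot(peptide), deep_table[idx]])
```

BLOSUM62 encoding works the same way, with the matrix row for each amino acid taking the place of the one-hot row.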

7.1.2 Baseline Method - Local Connected Neural Networks

MHCflurry Donnell2018 is a state-of-the-art deep model with locally-connected layers for peptide binding prediction. In MHCflurry, all peptides of length 8 to 15 are padded to length 15 by keeping the first and last four residues and inserting padding elements in the middle (e.g., "GGFVPNMLSV" is padded to "GGFVXXPNXXXMLSV"). The padded sequences are encoded into a feature matrix using BLOSUM62 encoding. MHCflurry employs locally-connected layers to extract local feature patterns for each k-mer in the peptide sequences. Unlike CNNs, which share common filters across all k-mers in peptides, locally-connected layers apply separate local filters to each k-mer to encode position-specific features. The encoded feature embeddings for all k-mers are then concatenated into a vector and fed into the fully-connected layer for binding prediction. To the best of our knowledge, MHCflurry is one of the best neural network models for the allele-specific peptide binding prediction problem.
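One way to reproduce the middle-padding example above (a sketch consistent with the "GGFVXXPNXXXMLSV" example, not necessarily MHCflurry's exact implementation):

```python
def middle_pad(peptide, target_len=15, pad="X"):
    """Pad to target_len by keeping the first and last four residues
    and inserting padding around the middle segment."""
    n_pad = target_len - len(peptide)
    if n_pad <= 0:
        return peptide
    middle = peptide[4:-4]            # residues between the two anchors
    half = n_pad // 2                 # split the padding around the middle
    return peptide[:4] + pad * half + middle + pad * (n_pad - half) + peptide[-4:]
```

In the encoded feature matrix, padding positions would correspond to zero vectors rather than the literal character "X".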

7.2 Batch Generation

For models with MS as the loss function, we randomly sample a batch of peptides as the training batch. For models with the proposed pair-wise hinge loss functions (L_v, L_l, L_cl), to reduce computational costs, we construct pairs of peptides for each training batch from a sampled batch of peptides. Specifically, for L_v and L_l, each pair consists of two peptides from different binding levels; for L_cl, the constructed pairs can consist of two peptides from the same or different binding levels.
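Pair construction for a training batch can be sketched as follows (function and argument names are ours):

```python
import random

def sample_pairs(batch, n_pairs, allow_same_level=False, seed=0):
    """batch: list of (peptide_id, level) tuples.
    Returns index pairs (i, j) with level_i > level_j; when
    allow_same_level is True (as for the constrained loss), same-level
    pairs are also admitted."""
    rng, pairs = random.Random(seed), []
    while len(pairs) < n_pairs:
        i, j = rng.sample(range(len(batch)), 2)
        if batch[i][1] > batch[j][1]:
            pairs.append((i, j))
        elif allow_same_level and batch[i][1] == batch[j][1]:
            pairs.append((i, j))
    return pairs
```

Sampling pairs from an already-sampled batch keeps the number of pair evaluations linear in the batch size rather than quadratic in the dataset size.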

7.3 Model Training

We use 5-fold cross validation to tune the hyper-parameters of all methods through grid search. For each allele, we run the grid search to find the optimal hyper-parameters for the allele-specific model. We apply stochastic gradient descent (SGD) to optimize the loss functions. We use 10% of the training set as a validation set, which is used to adjust the learning rate dynamically and to determine early stopping of the training process. We set the dimension of the Deep encoding method to 20, equal to the dimension of the BLOSUM62 and one-hot encodings. We also set both the constant c in L_v and the constant δ in L_l and L_cl to the same pre-specified value.

7.4 Evaluation Metrics

We use 4 types of evaluation metrics, including average rank (AR), hit rate (HR), area under the ROC curve (AUC), and ROC_t (e.g., ROC5), to evaluate the performance of the various model architectures and loss functions. The AR and HR metrics measure the effectiveness of our models in prioritizing promising peptides. Specifically, AR measures the average overall ranking of promising peptides; HR measures the ratio of promising peptides ranked at the top; AUC measures the probability that positive peptides are ranked higher than negative peptides; ROC_t measures the ratio of positive peptides that are prioritized above the top-t false positive peptides. We denote rank(p) as the rank of peptide p based on its predicted score, and P_B as the set of peptides with binding affinity values smaller than a threshold B. Then AR_B is defined as follows,

AR_B = (1 / |P_B|) Σ_{p ∈ P_B} rank(p)        (9)

where |P_B| is the size of P_B. Smaller values of AR_B indicate that promising peptides are ranked higher in general, and thus better model performance.
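A small sketch of the average-rank computation (rank 1 = highest predicted score; the function name is ours):

```python
def average_rank(scores, is_promising):
    """scores: predicted scores; is_promising: bool flags for peptides
    whose measured affinity falls below the chosen threshold B.
    Returns the mean rank of the promising peptides."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    rank = {idx: r + 1 for r, idx in enumerate(order)}
    hits = [rank[i] for i in range(len(scores)) if is_promising[i]]
    return sum(hits) / len(hits)
```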

The hit rate HR_B is defined as follows,

HR_B = |P_B ∩ T_k| / |P_B|        (10)

where T_k denotes the set of peptides whose predicted scores are ranked in the top k. Larger values of HR_B indicate that more promising peptides are prioritized into the top k by the model, and thus better performance.

We use 500nM as the binding affinity threshold to distinguish positive peptides from negative peptides, and apply two classification metrics to evaluate model performance. The first classification metric, AUC, is calculated as below,

AUC = (1 / (n⁺(n − n⁺))) Σ_{p ∈ P⁺} Σ_{q ∈ P∖P⁺} 1(f(p) > f(q))        (11)

where P is the set of all peptides and n is the number of peptides in the dataset; P⁺ is the set of all positive peptides and n⁺ is the number of positive peptides; 1(x) is an indicator function (1(x) = 1 if x is true, 0 otherwise). Larger values of AUC indicate that positive peptides are more likely to be ranked higher than negative peptides. The ROC_t score (e.g., ROC5) is the area under the ROC curve up to the first t false positives. ROC_t is calculated as below.

ROC_t = (1 / (t n⁺)) Σ_{p ∈ P⁺} Σ_{q ∈ F_t} 1(f(p) > f(q))        (12)

where F_t is the set of the t top-ranked negative peptides.

Larger values of ROC_t indicate that the model prioritizes more positive peptides above the first t false positive peptides. We use 7 metrics constructed from the above 4 types of metrics to evaluate model performance.
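Both classification metrics can be computed by pairwise counting; the sketch below follows that reading (the paper's exact normalization of ROC_t may differ):

```python
def auc(scores, labels):
    """Pairwise-counting AUC: fraction of (positive, negative) pairs
    ranked correctly; ties count half."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def roc_t(scores, labels, t):
    """AUC restricted to the t highest-scoring negatives
    (i.e., the ROC curve up to the first t false positives)."""
    neg_top = sorted((s for s, y in zip(scores, labels) if not y),
                     reverse=True)[:t]
    pos = [s for s, y in zip(scores, labels) if y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg_top)
    return wins / (len(pos) * len(neg_top))
```

Because only the highest-scoring negatives enter ROC_t, it rewards models that keep false positives out of the very top of the ranking, which is exactly the prioritization setting targeted here.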

In order to compare the models with respect to one single metric in a holistic way, we define a hybrid metric that combines all the evaluation metrics. Given a model trained with a set of hyper-parameters $h$, we denote its performance on metric $m$ ($m$ = AR, HR, AUC, ROCn) as $\mathrm{perf}_m(h)$, and the best value of metric $m$ over all the hyper-parameter sets as $\mathrm{best}_m$. Then the hybrid metric for a model with hyper-parameters $h$ is defined as below,

$$\mathrm{hm}(h) = \sum_{m} I_m\!\left(\frac{\mathrm{perf}_m(h)}{\mathrm{best}_m}\right) \qquad (13)$$

where $I_m(x) = 1/x$ if smaller values on metric $m$ indicate better performance, and $I_m(x) = x$ otherwise. Among our metrics, a smaller value represents better model performance for AR.
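A minimal sketch of this hybrid selection score, assuming the ratio-and-invert scheme described above (the dictionary interface and function name are our own illustration):

```python
def hybrid_metric(perf, best, smaller_is_better=("AR",)):
    """Combine normalized per-metric performances into one score.

    perf: metric name -> value for one hyper-parameter setting.
    best: metric name -> best value of that metric over all settings.
    Metrics where smaller is better (e.g., AR) are inverted, so a larger
    hybrid value always means a better model.
    """
    total = 0.0
    for name, value in perf.items():
        ratio = value / best[name]
        total += 1.0 / ratio if name in smaller_is_better else ratio
    return total
```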

8 Experimental Results

We present the experimental results in this section. All the parameters used in the experiments are reported in the Appendix.

8.1 Model Architecture Comparison

We evaluate all 12 possible combinations of the 3 model architectures (the baseline MHCflurry architecture, ConvM, and SpConvM) and the 4 loss functions (the three pairwise hinge losses and the MS loss) with all the encoding methods through 5-fold cross validation. Table 4 presents the overall performance comparison with the combined encoding method (the encoding method comparison is presented later in Section 8.3). We apply grid search to determine the optimal hyperparameters of each method on each allele with respect to the hybrid metric (Appendix Section A1.1), and report the best performance in Table 4. We use the MHCflurry architecture with the MS loss Donnell2018 as the baseline, and calculate the percentage improvement of our methods over this baseline across 128 alleles. In Table 4, the best model for each allele is selected with respect to the hybrid metric; the model performance is further evaluated using the 7 evaluation metrics.

 Model      loss                          (percentage improvement on the 7 evaluation metrics)
 ConvM      hinge (value margin, Eq. 5)    7.93  4.71  2.80  5.48  5.13  8.43  7.26
 ConvM      hinge (level margin, Eq. 6)    5.63  5.47  1.66  3.59  4.56  7.11  4.65
 ConvM      hinge (level margin, Eq. 7)    6.35  5.70  0.99  2.59  4.16  4.69  4.42
 ConvM      MS                            -6.26  0.02 -7.87 -3.98  0.16 -3.34 -3.94
 SpConvM    hinge (value margin, Eq. 5)   11.58 10.47  7.28  8.28  6.70 17.10 14.42
 SpConvM    hinge (level margin, Eq. 6)    8.97  8.64  6.57  7.36  6.04 12.89 10.85
 SpConvM    hinge (level margin, Eq. 7)   10.01  8.87  4.73  6.00  6.00 14.01 11.36
 SpConvM    MS                             8.66  8.14  2.77  4.28  3.93 13.54  9.68
 MHCflurry  hinge (value margin, Eq. 5)   11.06  8.93  5.60  5.20  4.42 11.10  9.51
 MHCflurry  hinge (level margin, Eq. 6)    9.45  5.77  5.09  4.43  4.72  8.05  6.95
 MHCflurry  hinge (level margin, Eq. 7)    8.83  6.35  4.54  5.73  4.52  7.10  5.88
 MHCflurry  MS                             0.00  0.00  0.00  0.00  0.00  0.00  0.00
   • The values in the table are percentage improvements compared with the baseline MHCflurry with the MS loss. Models are trained using the combined (evolutionary + one-hot + embedding) encoding, selected with respect to the hybrid metric, and evaluated using the 7 evaluation metrics. The best improvement with respect to each metric is in bold.

Table 4: Overall Performance Comparison (selection with respect to the hybrid metric; combined encoding)

Table 4 shows that, among the model architectures, SpConvM achieves the best performance overall on average (e.g., SpConvM with the MS loss has 8.66% and 8.14% improvement on the first two evaluation metrics over the baseline architecture with MS). Please note that when we calculate the improvement, we exclude alleles on which our models achieve more than 150% improvement (typically no more than 15 such alleles under different metrics). This removes potential bias due to a few alleles on which the improvement is extremely substantial. ConvM performs better than the baseline architecture on average. ConvM extends the baseline architecture with global kernels that extract global features from the entire peptide sequences. The better performance of ConvM indicates that global features can capture useful information from entire peptide sequences, which are typically short, for binding prediction. In addition, SpConvM outperforms ConvM on average. The difference between the two is that SpConvM learns position-specific features via position-specific kernels, whereas ConvM learns local features via kernels that are shared across all positions. As demonstrated in other studies Donnell2018, certain positions of peptides are more critical for their binding to alleles, so the better performance of SpConvM over ConvM could be attributed to its position-specific feature learning capability. Moreover, since the peptide sequences are usually short (8-15 amino acids long), it is very likely that these short sequences do not have strong local patterns, and thus shared local kernels cannot capture much useful local information. Compared with the baseline architecture, ConvM integrates both local features, via its shared convolutional kernels, and global features, via its global kernels. Such integration enables ConvM to capture global information that compensates for weak local features, and thus improves model performance.

In addition to using the hybrid metric to determine the optimal hyperparameters, we also apply the four individual metric types (AR, HR, AUC, and ROCn) to select the hyperparameters. The results of the best models in terms of these four metrics are presented in Tables A1, A2, A3 and A4 in the Appendix, respectively. The results show the same trend as in Table 4, that is, on average, SpConvM outperforms ConvM and the baseline architecture on all 7 metrics, and the value-margin hinge loss is the best among all loss functions.

Figure 2: Performance Improvement among All Alleles (ConvM with the value-margin hinge loss)
Figure 3: Performance Improvement among All Alleles (SpConvM with the value-margin hinge loss)
Figure 4: Performance Improvement among All Alleles (the baseline architecture with the value-margin hinge loss)

Figures 2, 3 and 4 show the distributions of performance improvement among all the alleles achieved by ConvM, SpConvM and the baseline architecture with the value-margin hinge loss, respectively, in comparison with the baseline architecture with MS. All the methods use the combined encoding, and the performance is evaluated in the same way as in Table 4. Figure 2 shows that with ConvM, about half of the alleles have performance improvement over the baseline architecture with MS; overall, there is an average 4.71% improvement among all the alleles. Figure 3 shows that more alleles have performance improvement with SpConvM than with ConvM or the baseline with MS, and more alleles have substantial improvement, indicating the strong performance of SpConvM. Figure 4 shows that the baseline architecture itself also improves when the value-margin hinge loss replaces MS (average improvement 8.93%).

8.2 Loss Function Comparison

Table 4 also demonstrates that the value-margin hinge loss is the most effective loss function in combination with each of the learning architectures, and all three hinge loss functions outperform the MS loss function. For each architecture, the performance improvement over the baseline follows the order: the value-margin hinge loss first, the two level-margin hinge losses next, and MS last. This trend is consistent across ConvM, SpConvM and the baseline architecture. The better performance of the value-margin hinge loss may be due to the use of a margin that is determined by the binding affinity values (Equation 5). This value-based margin enforces granular ranking among peptides even when they belong to the same binding level. In the two level-margin hinge losses (Equations 6 and 7), the margins are determined by the levels of the binding affinities. While these losses can still produce ranking structures of peptides according to their binding levels, they may fall short in differentiating peptides within the same binding level.

All three hinge loss functions outperform MS across all the model architectures. This might be due to two reasons. First, the pairwise hinge loss functions are less sensitive to the imbalance between the numbers of strongly binding and weakly/non-binding peptides, because they sample and construct pairs from the respective groups. Thus, the learning is not biased toward one type of peptide, the models better learn the differences among the peptide types, and accordingly produce better ranking orders. Second, the pairwise hinge loss functions can tolerate insignificant measurement errors to some extent: all three ignore pairs of peptides with similar binding affinities. This makes our models more robust and tolerant to noise caused by the measurement inaccuracy of binding affinities.
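The two properties above, a value-based margin and the skipping of near-tied pairs, can be sketched as follows. The margin function, the `min_gap` threshold, and the O(n²) pair enumeration are illustrative assumptions for exposition, not the paper's exact formulation.

```python
def pairwise_hinge_loss(scores, affinities, min_gap=0.1):
    """Pairwise ranking hinge loss with an affinity-dependent margin.

    scores: predicted scores; affinities: (transformed) measured binding
    affinities, larger = stronger binder.  Pairs whose affinities differ
    by less than `min_gap` are skipped, so small measurement errors do
    not generate training signal.  The margin grows with the affinity
    gap (a value-based margin), enforcing granular ranking.
    """
    total, n_pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            gap = affinities[i] - affinities[j]
            if gap < min_gap:            # similar or wrongly ordered pair
                continue
            margin = gap                 # illustrative value-based margin
            total += max(0.0, margin - (scores[i] - scores[j]))
            n_pairs += 1
    return total / max(n_pairs, 1)
```

A correctly ordered pair whose score gap exceeds the margin contributes zero loss; an inverted pair is penalized in proportion to both the margin and the inversion.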

8.3 Encoding Method Comparison

We evaluate three encoding methods, an evolutionary encoding, a one-hot encoding, and a learned embedding, and their combinations (the three pairwise combinations and the combination of all three) over SpConvM with the value-margin hinge loss (the best loss function overall) through 5-fold cross validation. We report the results of the best models across all 128 alleles in the same way as in Section 8.1 (i.e., model selection with respect to the hybrid metric, evaluated using the 7 metrics). Table 5 presents the average percentage improvement of the 7 encoding methods over the baseline encoding on the 7 metrics. The reported results in Table 5 are from the models with the optimal hyperparameters selected according to the hybrid metric. Table 5 shows that the combined encoding of all three methods achieves the best performance in general. It encodes the amino acids using their inherent evolutionary information (via the evolutionary encoding) and their identities (via the one-hot encoding), both of which are deterministic and not specific to the learning problem, together with allele-specific information (via the embedding), which is learned in the model and thus specific to the learning problem. The combination of deterministic, identity-based and learned features yields very rich information content in the resulting representations, which could be the reason why it outperforms the others. With a similar rationale, the combination of the evolutionary encoding with the embedding achieves the second-best performance in general. The evolutionary encoding on its own outperforms the one-hot encoding and the embedding, respectively, indicating that it is rich in representing amino acid information. Combining it with the one-hot encoding or the embedding introduces further improvement over the evolutionary encoding alone, indicating that these combinations represent complementary information. The one-hot encoding alone performs the worst, primarily due to its very limited information content. Combining the one-hot encoding with the embedding improves over the one-hot encoding alone but does not perform well compared with the evolutionary encoding alone; this may be because amino acid identity information still plays a substantial role in that combination, so the embedding does not supply sufficient additional information.

 encoding                             (percentage improvement on the 7 evaluation metrics)
 evolutionary                          0.00  0.00  0.00  0.00  0.00  0.00  0.00
 one-hot                              -6.92 -4.38 -4.97 -2.37 -0.73 -5.91 -4.63
 embedding                            -3.33  0.69 -2.76 -0.56 -0.15 -1.22 -1.63
 evolutionary+one-hot                 -0.96  0.95  0.21  0.79  0.30  1.57  0.89
 evolutionary+embedding                1.37  1.79  0.69  1.49  0.61  3.39  2.49
 one-hot+embedding                    -4.72 -1.65 -3.37 -1.26 -0.32 -3.25 -2.67
 evolutionary+one-hot+embedding       -0.36  0.11  1.17  2.12  0.69  4.58  3.46
   • The values in the table are percentage improvements compared with SpConvM using the value-margin hinge loss with the evolutionary encoding alone. Models are selected with respect to the hybrid metric and evaluated using the 7 evaluation metrics. The best improvement with respect to each metric is in bold.

Table 5: Encoding Performance Comparison on SpConvM with the value-margin hinge loss (selection with respect to the hybrid metric)

We also select the optimal set of hyperparameters with respect to AR, HR, AUC and ROCn, and report the corresponding results in Tables A7, A6, A8 and A5 in the Appendix, respectively. With different model selection metrics, the encoding methods perform differently. However, in general, the combined encoding of all three methods achieves better performance than the other encoding methods over all the metrics.

8.4 Attention Weights

Figure 5 shows the attention weights on the HLA-A-0201 data learned by the attention layer of SpConvM (with the combined encoding and the value-margin hinge loss). In Figure 5, each column represents the weight of a 1-mer embedding, that is, the embedding of a single amino acid, because the best kernel size for the HLA-A-0201 data in SpConvM is 1; each row represents the attention weights learned for a specific peptide. Figure 5 shows that the amino acids at the second position and the last position contribute most to the binding events. This is consistent with the conserved motif calculated by the SMM matrix Peters2005, and indicates that SpConvM with the attention layer is able to accurately learn the importance of different peptide positions in predicting binding activities.

Figure 5: Attention Weights for HLA-A-0201 Learned from SpConvM

We do not present the attention weights learned by ConvM, as they do not show binding patterns as clear as those in SpConvM. This is likely because ConvM incorporates both local and global features; the global features might contribute significantly to the prediction, which reduces the contribution, and hence the interpretability, of the local features.

9 Conclusions

Our methods contribute to the study of the peptide-MHC binding prediction problem in two ways. First, instead of predicting the exact binding affinity values as in existing methods, we formulate the problem as prioritizing the most probable peptide-MHC binding pairs via ranking-based learning. We developed three pairwise ranking-based learning objectives for such prioritization, and the corresponding learning methods, which impose that peptide-MHC pairs with higher binding affinities are ranked above those with lower binding affinities by a certain margin. Our experimental results, in comparison with state-of-the-art regression-based methods, demonstrate the superior performance of our methods in prioritizing and identifying the most likely binding peptides. In addition to the learning objectives, we also developed two convolutional neural network-based model architectures, ConvM and SpConvM, which incorporate a new position encoding method and an attention mechanism that differentiates the importance of amino acids at different positions in determining peptide-MHC binding. Our experiments show that the learned important positions and amino acids for allele HLA-A-0201 conform to the biological understanding of the allele. Our experimental results also demonstrate that our model architectures achieve superior or at least comparable performance relative to the state-of-the-art allele-specific baseline MHCflurry.

10 Acknowledgments

This project was made possible, in part, by support from the National Science Foundation under Grant Number IIS-1855501 and IIS-1827472, and the National Institute of General Medical Sciences under Grant Number 2R01GM118470-05. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies.

References

Appendix

Appendix A1 Additional Experimental Results

A1.1 Hyperparameter Search Space

The search space of hyperparameters includes batch size {32, 128}, number of layers {1, 2}, filter size {1, 3, 5}, number of filters {8, 16, 64}.
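The grid above can be enumerated exhaustively with itertools.product; the dictionary keys below are descriptive names of ours, and each resulting configuration corresponds to one model trained per allele.

```python
import itertools

# Hyper-parameter search space from Appendix A1.1.
grid = {
    "batch_size": [32, 128],
    "num_layers": [1, 2],
    "filter_size": [1, 3, 5],
    "num_filters": [8, 16, 64],
}

# One dict per combination; the best is kept per allele
# according to the chosen selection metric.
configs = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]
# 2 * 2 * 3 * 3 = 36 configurations
```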

A1.2 Model Architecture Comparison

Tables A1, A2, A3 and A4 present the average percentage improvement of our models over the baseline architecture with the MS loss. Models are trained with the combined encoding and selected with respect to 4 different metrics (AR, HR, AUC and ROCn, respectively).

A1.3 Encoding Method Comparison

Tables A5, A6, A7, and A8 present the average percentage improvement of the 7 encoding methods over the baseline evolutionary encoding. We used SpConvM with the value-margin hinge loss as the model to evaluate the encoding methods and their combinations. We trained the models and selected the hyper-parameters with respect to 4 different metrics (AR, HR, AUC and ROCn, respectively).

 Model      loss                          (percentage improvement on the 7 evaluation metrics)
 ConvM      hinge (value margin, Eq. 5)    8.08  4.79  3.56  5.93  6.36  9.92 10.01
 ConvM      hinge (level margin, Eq. 6)    6.36  4.50  1.33  4.50  5.21  6.57  5.92
 ConvM      hinge (level margin, Eq. 7)    6.58  3.49  1.36  3.26  5.63  7.25  6.05
 ConvM      MS                            -5.17 -7.93 -6.69 -1.93  1.24 -3.51 -3.36
 SpConvM    hinge (value margin, Eq. 5)   10.75  7.58  8.06  9.61  8.68 18.84 16.29
 SpConvM    hinge (level margin, Eq. 6)   10.10  5.39  5.87  6.69  7.39 14.09 10.97
 SpConvM    hinge (level margin, Eq. 7)   10.59  8.28  5.32  7.47  7.35 14.66 11.56
 SpConvM    MS                             7.36  7.54  3.27  5.74  5.16 15.66 11.68
 MHCflurry  hinge (value margin, Eq. 5)   10.95  6.74  6.62  7.95  6.45 15.87 11.45
 MHCflurry  hinge (level margin, Eq. 6)    8.61  7.32  5.37  6.08  5.71 10.75  9.39
 MHCflurry  hinge (level margin, Eq. 7)    8.20  4.19  4.87  5.87  4.92 11.79  7.99
 MHCflurry  MS                             0.00  0.00  0.00  0.00  0.00  0.00  0.00
   • The values in the table are percentage improvements compared with the baseline MHCflurry with the MS loss. Models are trained using the combined encoding, selected with respect to the corresponding selection metric, and evaluated using the 7 evaluation metrics. The best improvement with respect to each metric is in bold.

Table A1: Performance Comparison over the Baseline with MS (combined encoding)
 Model      loss                          (percentage improvement on the 7 evaluation metrics)
 ConvM      hinge (value margin, Eq. 5)    5.72  6.89  3.15  5.66  7.11 12.78 10.81
 ConvM      hinge (level margin, Eq. 6)    4.89  5.54  2.47  3.50  6.27  7.34  8.35
 ConvM      hinge (level margin, Eq. 7)    4.75  5.88  0.93  3.62  6.24  7.21  7.29
 ConvM      MS                            -8.57  0.21 -8.00 -1.23  1.65 -2.45 -2.47
 SpConvM    hinge (value margin, Eq. 5)    9.86  8.85  7.89  8.59  9.12 19.29 18.11
 SpConvM    hinge (level margin, Eq. 6)    8.18  7.64  6.46  6.64  8.44 15.59 14.08
 SpConvM    hinge (level margin, Eq. 7)    7.35  8.50  5.02  6.41  7.96 13.21 14.04
 SpConvM    MS                             7.10  8.05  2.49  4.95  6.30 15.17 12.50
 MHCflurry  hinge (value margin, Eq. 5)   10.11  8.54  7.14  9.21  6.09 16.00 14.65
 MHCflurry  hinge (level margin, Eq. 6)    8.78  6.60  5.55  5.95  5.50 13.19 11.30
 MHCflurry  hinge (level margin, Eq. 7)    6.50  6.20  5.08  5.47  4.69 12.19  8.12
 MHCflurry  MS                             0.00  0.00  0.00  0.00  0.00  0.00  0.00
   • The values in the table are percentage improvements compared with the baseline MHCflurry with the MS loss. Models are trained using the combined encoding, selected with respect to the corresponding selection metric, and evaluated using the 7 evaluation metrics. The best improvement with respect to each metric is in bold.

Table A2: Performance Comparison over the Baseline with MS (combined encoding)
 Model      loss                          (percentage improvement on the 7 evaluation metrics)
 ConvM      hinge (value margin, Eq. 5)    6.83  2.28  5.03  7.34  3.44  9.69  8.55
 ConvM      hinge (level margin, Eq. 6)    3.00  3.12  3.88  6.88  3.16  5.41  5.81
 ConvM      hinge (level margin, Eq. 7)    3.66 -0.63  3.01  5.83  2.81  4.43  3.58
 ConvM      MS                            -9.12 -8.45 -5.46 -2.31 -0.52 -5.62 -5.45
 SpConvM    hinge (value margin, Eq. 5)   10.27  4.42  9.24 10.08  5.09 17.20 15.06
 SpConvM    hinge (level margin, Eq. 6)    7.18  2.84  8.12  8.31  4.42 12.36 10.63
 SpConvM    hinge (level margin, Eq. 7)    7.54  2.98  7.69  7.25  4.44 11.57 10.04
 SpConvM    MS                             4.31  2.49  5.28  4.78  2.86 11.68  9.47
 MHCflurry  hinge (value margin, Eq. 5)    8.08  1.61  7.45  8.35  4.30 11.02 11.05
 MHCflurry  hinge (level margin, Eq. 6)    7.64  3.96  6.31  7.61  3.56  8.63  7.63
 MHCflurry  hinge (level margin, Eq. 7)    7.39  1.64  5.81  6.74  3.41  9.32  7.68
 MHCflurry  MS                             0.00  0.00  0.00  0.00  0.00  0.00  0.00
   • The values in the table are percentage improvements compared with the baseline MHCflurry with the MS loss. Models are trained using the combined encoding, selected with respect to the corresponding selection metric, and evaluated using the 7 evaluation metrics. The best improvement with respect to each metric is in bold.

Table A3: Performance Comparison over the Baseline with MS (combined encoding)