λ-Scaled-Attention: A Novel Fast Attention Mechanism for Efficient Modeling of Protein Sequences

01/09/2022
by   Ashish Ranjan, et al.

Attention-based deep networks have been successfully applied to textual data in the field of NLP. However, their application to protein sequences poses additional challenges due to the weak semantics of protein words, unlike plain text words. These unexplored challenges faced by the standard attention technique include (i) the vanishing attention score problem and (ii) high variations in the attention distribution. In this regard, we introduce a novel λ-scaled attention technique for fast and efficient modeling of protein sequences that addresses both of the above problems. This is used to develop the λ-scaled attention network, which is evaluated for the task of protein function prediction implemented at the protein sub-sequence level. Experiments on the datasets for biological process (BP) and molecular function (MF) showed significant improvements in the F1-score values for the proposed λ-scaled attention technique over its counterpart approach based on the standard attention technique (+2.01% for BP and +4.67% for MF) and over the state-of-the-art ProtVecGen-Plus approach (+2.61% for BP and +4.20% for MF). Further, fast convergence (converging in half the number of epochs) and efficient learning (in terms of a very low difference between the training and validation losses) were also observed during the training process.

1 Introduction

Automatic functional characterization of proteins is one of the key challenges in computational biology. Proteins are fundamental to biological interactions; among other things, they are useful in studying attacks by viruses [1, 2, 3] and harmful bacteria [3] on an organism. Proteins, in their elementary form, are linear chains of amino acids called "protein sequences", commonly also referred to as "the language of life" [4]. In the past few years, protein sequences have undoubtedly become a cheap and reliable source for protein studies. The surplus of protein sequences, generated by the boom in sequencing technologies [5], has marked a transitional shift toward data-driven machine learning approaches [6, 7] for the characterization of proteins.

The evolution of deep learning has taken the automated characterization process to the next level by significantly reducing the effort needed to process protein sequences. In particular, deep recurrent neural networks (DRNNs), e.g., long short-term memory (LSTM) networks, have shown great potential with protein sequences for the function prediction task [8, 9, 10]. In protein sequences, which are also represented as strings of n-mers [8, 11], the order of amino acids/n-mers has a significant effect in determining their function(s). These orderly arrangements are believed to create orderly dependencies/patterns that are useful in determining function(s). A DRNN helps capture such orderly dependencies between the amino acids/n-mers of a protein sequence.

However, it should be noted that the protein words (either amino acids or n-mers) that make up a protein sequence differ in their significance for uncovering the underlying functional context of the sequence. The set of key significant protein words, when combined, provides a clear and strong indication of the functional context. The functional context refers to a "small pattern" common among protein sequences of the same functional family; such patterns are created by the orderly dependencies between a sub-set of the protein words making up a protein sequence, e.g., sequence motifs [12]. Thus, there are two important aspects to efficient processing of protein sequences: (i) learning the orderly dependencies between the protein words, and (ii) capturing the importance of individual protein words toward revealing the underlying functional context.

While an LSTM can capture the orderly dependencies between protein words, it has no mechanism to highlight the significance of individual words. This can mask or lose critical information (possibly containing the functional context), resulting in a sub-optimal intermediate representation. Therefore, the attention mechanism [13], which builds an intermediate representation by prioritizing the key informative words regardless of their position in the sentence, is useful in such cases and is explored in this work. Attention-based DRNNs have already become a standard solution for a wide variety of tasks, such as document classification [14] and sentiment classification [15, 16], and have a profound effect on the produced intermediate representations.

However, the application of the attention technique to protein sequences has its own set of challenges. Unlike plain text words, which have strong context-specific semantics, protein words (i.e., amino acids/n-mers) have very weak context-specific semantics. Thus, while a small sub-set of words is sufficient to identify the context of a plain text sentence, a comparatively large sub-set of protein words is needed to identify a functional context in a protein sequence; it is this large sub-set of protein words that makes a strong case for the underlying functional context. The challenges faced by the attention technique are as follows:

  • Vanishing attention score problem: The attention technique [13], in general, produces a distribution that helps identify the significance of words in a sentence. While standard attention [13] performs reasonably well for plain text sentences, where only a few significant words need focus, it struggles to reproduce the same performance for protein sequences. The reason is the vanishing of the attention scores caused by the large sub-set of significant protein words that form a functional context (a small numeric sketch is given after Figure 2).

    For example, consider the distributions of attention scores shown in Figures 1 and 2. In Figure 1, where there are only a few significant words in the sentence, the attention scores for those words are high. However, when the number of significant words in the sentence is increased, the attention scores become small (see Figure 2). A further increase in the number of significant words soon starts to produce diminished attention scores.

    In the latter case (Figure 2), when the attention scores become too small, they may restrict the flow of information down the network. We call this the "vanishing attention score" problem.

  • High variations in the attention distributions: The attention score distributions for different protein sequences, in general, vary significantly. This makes it difficult to capture the high variations among the allocated scores, making the training process less efficient.

Fig. 1: Case I: Distribution of attention scores with only a small number of significant words (two in this example).
Fig. 2: Case II: Distribution of attention scores with a relatively large number of significant words (four in this example).
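To make the vanishing effect concrete, the short NumPy sketch below uses made-up raw scores (not taken from the paper) to contrast a case with two significant words against a case with twelve, and shows how dividing by the maximum score, as the proposed technique does, restores the magnitudes.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Case I: 2 significant words (raw score 5.0) among 20 words.
few = softmax(np.array([5.0] * 2 + [0.0] * 18))
# Case II: 12 significant words among 20, mimicking a protein functional context.
many = softmax(np.array([5.0] * 12 + [0.0] * 8))

print(few[0])                 # ~0.47 -- each significant word still stands out
print(many[0])                # ~0.08 -- scores shrink as the significant words multiply
print(many[0] / many.max())   # 1.0  -- the scale-up of Section 2.5 restores the magnitude
```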

To address the issues discussed above, this work proposes a novel scaled-attention technique that produces attention scores via a scale-up operation, eliminating the "vanishing attention score" problem irrespective of the number of significant protein words in the protein sequence. The proposed solution also reduces the high variation in the attention distribution. In light of the above discussion, the major contributions of the paper are:

  • A novel λ-scaled attention technique for the fast and efficient modeling of protein sequences. The parameter λ is useful in controlling the convergence of the model during training: a high value of λ pushes for fast convergence, while a low value of λ causes slow convergence.

  • The proposed λ-scaled attention technique is then used to develop a general deep neural network architecture for protein sequences, called the "λ-scaled attention network", which is evaluated for the protein function prediction task implemented at the protein sub-sequence level.

  • Experiments on the datasets for biological process (BP) and molecular function (MF) demonstrate that the proposed λ-scaled attention technique has a major impact on the overall performance. Compared to its counterpart approach based on standard attention [13] with segment size = 100, it improves the F1-score by margins of +2.01% (BP) and +4.67% (MF), respectively. The corresponding improvements over the state-of-the-art multi-segment based ProtVecGen-Plus approach [8] are +2.61% (BP) and +4.20% (MF), respectively.

  • Further, during training, the proposed λ-scaled attention technique converges faster (taking around half the time taken by the baseline methods) and learns efficiently (in terms of a very low difference between the training and validation losses) when compared to both the simple baseline and the baseline+attention based classifiers (see Section 4.3.2 for more details).

The rest of the paper is organized as follows: Section 2 outlines the proposed λ-scaled attention network, following which Section 3 discusses the use of the λ-scaled attention network for protein function prediction. Results and discussion follow in Section 4. Finally, Section 5 concludes the paper.

2 λ-Scaled Attention Network

In this section, we introduce the novel λ-scaled attention network, which can be applied efficiently to a wide range of classification tasks on protein sequences; in this paper, the proposed network is evaluated for the task of protein function prediction. The proposed architecture is composed of the following components: (1) protein word embedding layer, (2) deep recurrent processing layer, (3) protein word-level λ-scaled attention layer, and (4) dense output layer. Moreover, dropout layers [17] are added before the deep recurrent processing and dense output layers to help avoid over-fitting; the corresponding dropout rates were taken as 0.3 and 0.2, respectively. These components are described in Sections 2.3 to 2.6. Before their description, the preliminary notations and the protein sequence tokenization (the pre-processing step) are discussed in Sections 2.1 and 2.2.

2.1 Notations

Let GO = {GO_1, GO_2, …, GO_K} denote the set of K GO-terms (either for biological process or molecular function). Let D = {(P_i, Y_i)} denote a database of labeled protein sequences, where P_i denotes a protein sequence and Y_i = [y_{i1}, y_{i2}, …, y_{iK}] denotes the one-hot encoding representing the set of GO-term annotations corresponding to the protein sequence P_i, such that y_{ij} = 1 if P_i exhibits GO_j and 0 otherwise.

2.2 Protein Sequence Tokenization

The tokenization of protein sequences involves decomposing a protein sequence into a string of amino acids or n-mers, which are taken to be the protein words of the sequence. Since the length of protein sequences is highly variable, for simplification we assume a maximum permissible length, max_seq_len, for protein sequences. Protein sequences longer than max_seq_len are truncated, while shorter protein sequences are padded. The maximum number of possible protein words in a sequence is:

T = ⌊ max_seq_len / n ⌋    (1)

where n is the size of the n-mers / protein words, and T denotes the maximum number of protein words (time-steps) per sequence.
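A minimal sketch of this tokenization step is given below; it assumes non-overlapping n-mers and an illustrative 'PAD' token, details that the text above does not fix.

```python
def tokenize(sequence, n=3, max_seq_len=300, pad_token="PAD"):
    """Decompose a protein sequence into n-mer words, truncating or padding to a fixed length."""
    sequence = sequence[:max_seq_len]                       # truncate sequences longer than max_seq_len
    words = [sequence[i:i + n] for i in range(0, len(sequence), n)]
    T = max_seq_len // n                                    # maximum number of protein words (Eq. 1)
    words += [pad_token] * (T - len(words))                 # pad shorter sequences
    return words

# 33-residue toy sequence, n = 3, max_seq_len = 30 -> 10 trigram words, no padding needed.
print(tokenize("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", n=3, max_seq_len=30))
```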

2.3 Protein Word Embedding Layer

Tokenized protein sequences are passed through the embedding layer, which creates meaningful dense representations for the protein words in a sequence. The output of the embedding layer is a matrix of size [T × d], where T denotes the maximum number of possible protein words in a sequence (Equation 1) and d denotes the embedding dimension.

2.4 Deep Recurrent Processing Layer

To learn the orderly dependencies between the protein words, deep recurrent neural networks (DRNNs), such as GRU [18] or LSTM [19] networks, are used to process the protein sequences. These networks are linear chains of recurrent units (e.g., GRU cells or LSTM cells) that process a sequence word-by-word and produce a hidden state vector h_t corresponding to every protein word.

At any time-step t, let x_t represent the current input and h_{t-1} the previous hidden state. The recurrent unit accepts x_t and h_{t-1} as input and produces h_t as the corresponding output. Note that x_t is the embedding of the protein word occurring at time-step t. A brief description of both GRU and LSTM cells is given below.

a) Gated Recurrent Unit (GRU). The major components of a GRU cell are the update gate and the reset gate. The update gate measures the usefulness of the past information (h_{t-1}) for the current time-step t; its output, denoted z_t, is described in Eq. 2. The reset gate, on the other hand, helps to forget past information that may be irrelevant in the future; its output, denoted r_t, is described in Eq. 3.

z_t = σ(W_z x_t + U_z h_{t-1} + b_z)    (2)
r_t = σ(W_r x_t + U_r h_{t-1} + b_r)    (3)

where W_z, U_z, W_r, and U_r denote weight matrices, and b_z and b_r denote bias vectors.

The outputs z_t and r_t help generate the hidden state vector h_t as follows:

h_t = z_t ⊙ h_{t-1} + (1 − z_t) ⊙ h̃_t    (4)

where h̃_t denotes the current memory content and is computed as follows:

h̃_t = tanh(W_h x_t + U_h (r_t ⊙ h_{t-1}) + b_h)    (5)

where W_h and U_h denote weight matrices, b_h denotes the bias vector, and ⊙ denotes element-wise multiplication.
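A small NumPy sketch of a single GRU step, following Eqs. 2 to 5 as written above, is given below; the weight shapes and the random initialization are illustrative placeholders only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, p):
    """One GRU time-step; p holds the weight matrices W_*, U_* and bias vectors b_*."""
    z = sigmoid(p["W_z"] @ x_t + p["U_z"] @ h_prev + p["b_z"])               # update gate (Eq. 2)
    r = sigmoid(p["W_r"] @ x_t + p["U_r"] @ h_prev + p["b_r"])               # reset gate (Eq. 3)
    h_tilde = np.tanh(p["W_h"] @ x_t + p["U_h"] @ (r * h_prev) + p["b_h"])   # memory content (Eq. 5)
    return z * h_prev + (1.0 - z) * h_tilde                                  # new hidden state (Eq. 4)

d, m = 32, 70   # embedding dimension and hidden dimension used in the paper
rng = np.random.default_rng(0)
shapes = {"W": (m, d), "U": (m, m), "b": (m,)}
p = {f"{k}_{g}": 0.1 * rng.standard_normal(shapes[k]) for g in "zrh" for k in ("W", "U", "b")}
h_t = gru_step(rng.standard_normal(d), np.zeros(m), p)
print(h_t.shape)   # (70,)
```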

Fig. 3: λ-Scaled Attention Network: λ-scaled-attention based deep recurrent network for protein sequences.

b) Long Short-Term Memory (LSTM). The key component of an LSTM cell is the memory element, called the cell state, that maintains the global information. The other major components of an LSTM cell are the forget gate and the input gate, which assist the cell state in maintaining this global information. The forget gate helps the cell state forget irrelevant past information; its output, denoted f_t, is described in Eq. 6. The input gate assists in adding new information to the cell state; its outputs, denoted i_t and c̃_t, are described in Eq. 7.

f_t = σ(W_f [h_{t-1}, x_t] + b_f)    (6)
i_t = σ(W_i [h_{t-1}, x_t] + b_i),  c̃_t = tanh(W_c [h_{t-1}, x_t] + b_c)    (7)

where W_f, W_i, and W_c denote weight matrices, b_f, b_i, and b_c denote bias vectors, and [h_{t-1}, x_t] denotes the concatenation of h_{t-1} and x_t.

The cell state c_t is updated as follows:

c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t    (8)

where ⊙ represents element-wise multiplication.

The hidden state h_t is obtained from the element-wise product of the output gate activation o_t and the (tanh-transformed) updated cell state c_t. They are described as:

o_t = σ(W_o [h_{t-1}, x_t] + b_o)    (9)
h_t = o_t ⊙ tanh(c_t)    (10)

where W_o and b_o are the weight matrix and bias vector, respectively, of the output gate.

Bi-directional recurrent processing: In this work, the deep recurrent processing layer is implemented in the bi-directional configuration [20], comprising a forward and a backward DRNN. The bi-directional property allows the hidden state of a protein word to be computed from both the preceding and the following words in the protein sequence. Let (x_1, x_2, …, x_T) represent the ordered protein words in a protein sequence, where T is the total number of time-steps, equal to the maximum possible number of protein words in a sequence (Eq. 1).

The forward DRNN reads the protein sequence from x_1 to x_T, and the backward DRNN reads it backwards from x_T to x_1. Let h_t^{fwd} and h_t^{bwd} represent the hidden states for word x_t obtained from the forward and backward DRNNs, respectively. The final hidden state for the protein word x_t is the concatenation of the two, i.e., h_t = [h_t^{fwd}, h_t^{bwd}]. The dimension of the hidden states from both the forward and backward DRNNs is taken as 70.

2.5 λ-Scaled Attention Layer

Here we introduce the novel λ-scaled attention technique to address both the vanishing attention score problem and the high variations in the attention score distribution. The proposed attention technique finds a good representation of a protein sequence by aggregating the weighted representations of its protein words.

Let (h_1, h_2, …, h_T) represent the list of hidden state vectors for the protein sequence, where h_t is the hidden state vector corresponding to the protein word occurring at time-step t. Formally, the proposed λ-scaled attention technique consists of the following steps:

  1. Each hidden state vector h_t, where t ∈ {1, 2, …, T}, is fed forward through a single-layer perceptron to obtain the corresponding hidden representation u_t as:

    u_t = tanh(W_a h_t + b_a)    (11)

    where W_a and b_a represent the weight matrix and the bias element, respectively.

  2. Next, the scaled attention score s_t for the protein word occurring at time-step t is calculated as follows:

    s_t = α_t / α_max    (12)

    where,

    α_t = exp(u_t) / Σ_{j=1}^{T} exp(u_j)    (13)
    α_max = max_{1 ≤ j ≤ T} α_j    (14)

    Equation 13 represents the standard softmax operation, which is used to find the distribution of attention scores α_t. Following this, in Eq. 14, the value of the maximum attention score α_max is computed from the distribution. These are used in Eq. 12 to compute the scaled-up attention scores; the maximum possible scaled-up attention score is 1.

    The scaling operation in Eq. 12 amplifies the attention scores, thereby eliminating the vanishing attention score problem. Moreover, scaling also ensures that the attention scores for each time-step are on the same scale, which reduces the variations in the attention score distribution.

  3. Optionally, a scalar parameter λ can also be multiplied with the scaled attention scores, s_t ← λ · s_t, which limits the maximum value a scaled attention score can take. Such a restriction on the maximum possible scaled attention score is useful in controlling the training rate: a higher value of λ pushes for faster training, while a lower value of λ slows down the training.

  4. The final vector v for the protein sequence is the aggregation of the weighted hidden states, where the weighted hidden states are computed by multiplying the scaled attention scores s_t with the corresponding hidden states h_t:

    v = Σ_{t=1}^{T} s_t h_t    (15)
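A hedged, Keras-style sketch of steps 1 to 4 is given below. The layer and weight names, and the small epsilon added for numerical safety, are our own illustrative choices; only the computation (Eqs. 11 to 15 plus the optional λ scaling) follows the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers

class LambdaScaledAttention(layers.Layer):
    """Illustrative implementation of the lambda-scaled attention (Eqs. 11-15)."""

    def __init__(self, lam=1.0, **kwargs):
        super().__init__(**kwargs)
        self.lam = lam                                    # the scalar parameter lambda of step 3

    def build(self, input_shape):
        d = int(input_shape[-1])
        # Single-layer perceptron that scores each hidden state (Eq. 11).
        self.w_a = self.add_weight(name="w_a", shape=(d, 1), initializer="glorot_uniform", trainable=True)
        self.b_a = self.add_weight(name="b_a", shape=(1,), initializer="zeros", trainable=True)
        super().build(input_shape)

    def call(self, h):                                    # h: (batch, T, hidden_dim)
        u = tf.tanh(tf.matmul(h, self.w_a) + self.b_a)    # hidden representation u_t (Eq. 11)
        alpha = tf.nn.softmax(u, axis=1)                  # standard softmax over time-steps (Eq. 13)
        alpha_max = tf.reduce_max(alpha, axis=1, keepdims=True)   # maximum attention score (Eq. 14)
        s = self.lam * alpha / (alpha_max + 1e-9)         # scaled-up scores (Eq. 12) with lambda (step 3)
        return tf.reduce_sum(s * h, axis=1)               # weighted aggregation of hidden states (Eq. 15)
```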

The benefits of the proposed λ-scaled attention are as follows:

  • It helps overcome the vanishing attention score problem, which otherwise causes a restricted flow of information.

  • It further helps in reducing the high variations in the attention distribution, making the training procedure more efficient.

2.6 Dense Output Layer

Following the attention layer is a fully-connected dense output layer, where the number of neurons is equal to the number of GO-terms (i.e., K). The output neurons use sigmoid activations in order to deal with the multi-label scenario; note that protein function prediction is a multi-label classification problem. Other parameters include "binary cross-entropy" as the loss function and "adam" [21] as the gradient optimizer.
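Putting Sections 2.3 to 2.6 together, a minimal Keras sketch of the full network is shown below. It reuses the LambdaScaledAttention layer sketched in Section 2.5; vocab_size, T, and K are placeholders for the protein-word vocabulary size, the number of time-steps (Eq. 1), and the number of GO-terms, while the dropout rates and hidden dimension follow the values stated in this section.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_lambda_scaled_attention_network(vocab_size, T, K, d=32, lam=1.0):
    inputs = keras.Input(shape=(T,), dtype="int32")                        # tokenized protein segment
    x = layers.Embedding(vocab_size, d)(inputs)                            # protein word embedding (Sec. 2.3)
    x = layers.Dropout(0.3)(x)                                             # dropout before the recurrent layer
    x = layers.Bidirectional(layers.LSTM(70, return_sequences=True))(x)    # deep recurrent processing (Sec. 2.4)
    v = LambdaScaledAttention(lam=lam)(x)                                  # lambda-scaled attention (Sec. 2.5)
    v = layers.Dropout(0.2)(v)                                             # dropout before the output layer
    outputs = layers.Dense(K, activation="sigmoid")(v)                     # multi-label dense output (Sec. 2.6)
    model = keras.Model(inputs, outputs)
    model.compile(loss="binary_crossentropy", optimizer="adam")
    return model
```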

3 Protein Function Prediction

In this section, we discuss how the proposed λ-scaled attention network is utilized for the task of protein function prediction. We choose to apply the λ-scaled attention network at the protein sub-sequence level, i.e., for classifying protein sub-sequences. This is because, in general, the protein words forming the functional context(s) within a protein sequence do not share long-distance dependencies. The advantages of using protein sub-sequences are also discussed in our earlier work [8]. The classified protein sub-sequences are eventually used to determine the function(s) of their parent sequence.

The proposed solution has the following steps: (1) Global protein sequence representation: an approach to generate a global representation for protein sequences based on small segments of the protein sequences and the proposed λ-scaled attention network, and (2) Classification model: a multi-layer perceptron (MLP) based classifier to predict the function(s) of unlabeled protein sequences. These are described next.

3.1 Global Protein Sequence Representation

This is a two-step procedure: (1) training a λ-scaled attention network using the protein segments, and (2) utilizing the trained λ-scaled attention network for producing the global representation for the protein sequences.

3.1.1 Training a λ-scaled attention network using the protein segments

A segmented dataset, constructed from the training dataset as follows, is used to train the proposed λ-scaled attention network:

  • As shown in Figure 4, each protein sequence P_i with label-set Y_i is decomposed into equal-sized protein segments. The set of such protein segments corresponding to protein P_i is denoted as S_i = {Sg_1^i, Sg_2^i, …}, where Sg_j^i represents the j-th segment of protein P_i. The decomposition is done in an overlapping manner, and the last protein segment is padded if it is smaller than the segment size.

  • Any two adjacent segments, say Sg_j^i and Sg_{j+1}^i, are chosen such that they have at least 50% of their region in common.

  • Each segment Sg_j^i is assigned the same label-set Y_i as its parent protein sequence P_i. This constitutes the new segmented training dataset D_seg = {(Sg_j^i, Y_i)}.

Next, the newly constructed segmented dataset is used to train the λ-scaled attention network as discussed in Section 2. A sketch of the segmentation step is given below.
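The following minimal sketch illustrates the segmentation; the 50% overlap follows the description above, while the pad character 'X' and the helper name are illustrative assumptions.

```python
def segment_sequence(sequence, segment_size=100, overlap=0.5, pad_char="X"):
    """Decompose a protein sequence into equal-sized, overlapping segments."""
    step = int(segment_size * (1 - overlap))               # adjacent segments share 50% of their region
    segments = []
    for start in range(0, max(len(sequence) - segment_size, 0) + step, step):
        seg = sequence[start:start + segment_size]
        if len(seg) < segment_size:                        # pad the last, shorter segment
            seg += pad_char * (segment_size - len(seg))
        segments.append(seg)
        if start + segment_size >= len(sequence):
            break
    return segments

segs = segment_sequence("A" * 230, segment_size=100)
print(len(segs), {len(s) for s in segs})                   # 4 segments, each of length 100
```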

Fig. 4: Protein sequence segmentation.

3.1.2 Utilizing the trained λ-scaled attention network for producing the representations for the protein sequences

The trained λ-scaled attention network based segment classifier from Section 3.1.1 is used to produce the global representation for the protein sequences in both the training and testing datasets. This involves the following steps:

  • The protein sequences are decomposed into protein segments as discussed in Section 3.1.1. Let S_i = {Sg_1^i, Sg_2^i, …, Sg_m^i} represent the segments for the i-th protein sequence.

  • The decomposed protein segments from the set S_i are classified using the pre-trained segment classifier. The output of the segment classifier is a K-dimensional vector that represents the corresponding protein segment Sg_j^i.

  • Next, the average of the segment representations is computed to formulate the intermediate sequence representation F_i as:

    F_i = (1/m) Σ_{j=1}^{m} f(Sg_j^i)    (16)

    where f(Sg_j^i) denotes the K-dimensional output of the segment classifier for segment Sg_j^i. For a large number of long protein sequences, only a few segments are found to support the true output classes, and the averaging operation may therefore lower the support for the true output classes. Thus, the values of F_i are scaled up as follows:

    (17)
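The averaging of Eq. 16 can be sketched as below on a toy array of segment predictions; the scale-up of Eq. 17 is left out of the sketch, and the toy numbers only illustrate why averaging can dilute the support of the true classes when few segments support them.

```python
import numpy as np

def intermediate_representation(segment_predictions):
    """segment_predictions: (num_segments, K) outputs of the trained segment classifier.
    Returns the K-dimensional intermediate sequence representation of Eq. 16
    (the subsequent scale-up of Eq. 17 is not reproduced in this sketch)."""
    return np.asarray(segment_predictions).mean(axis=0)

# Toy example: 3 segments, K = 4 GO-terms; only two segments support the first GO-term.
preds = np.array([[0.9, 0.1, 0.2, 0.0],
                  [0.8, 0.2, 0.1, 0.1],
                  [0.1, 0.1, 0.1, 0.0]])
print(intermediate_representation(preds))   # [0.6, 0.133, 0.133, 0.033] -- support for class 0 is diluted
```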

3.2 Classification Model

The problem of protein function prediction is modeled as a multi-label classification problem using a multi-layer perceptron (MLP) network. The MLP network consists of a single hidden layer followed by the output layer. The output neurons use sigmoid activations to handle the multi-label scenario. The K-dimensional feature representations for the protein sequences, obtained as in Section 3.1, are used to train the perceptron model. Other parameters include "binary cross-entropy" as the loss function and "adam" as the gradient optimizer.
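A minimal Keras sketch of this classifier is given below; the hidden-layer width and activation are illustrative assumptions, since only the single-hidden-layer structure, the sigmoid outputs, the loss, and the optimizer are specified above.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_mlp_classifier(K, hidden_units=256):
    model = keras.Sequential([
        keras.Input(shape=(K,)),                        # K-dimensional global sequence representation
        layers.Dense(hidden_units, activation="relu"),  # single hidden layer (width/activation assumed)
        layers.Dense(K, activation="sigmoid"),          # multi-label output over the GO-terms
    ])
    model.compile(loss="binary_crossentropy", optimizer="adam")
    return model
```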

4 Results and Discussion

This section presents the findings of the proposed system and discusses them in depth. The proposed network architecture was implemented using Keras 2.0.6 (https://github.com/keras-team/keras) with a TensorFlow backend.

4.1 Datasets

For evaluation purposes, we utilized the protein sequences from the UniProtKB database [22], where Gene Ontology (GO) [23] based terminology is used to indicate the output labels (i.e., protein functions) for the protein sequences. Protein sequence datasets with GO-terms defined across two different domains, Molecular Function (MF) and Biological Process (BP), were used for experiments. These two datasets are the same as the ones used in [8].

The BP dataset consists of 58310 protein sequences (with 295 GO-terms for the biological process), while the MF dataset comprises 43218 protein sequences (with 135 GO-terms for the molecular function). In both datasets, the number of protein sequences corresponding to each GO-term is 200. The datasets are available for download at https://bit.ly/2RMsyOV. For all the experiments, the datasets were split into training and testing sets, with 75% of the sequences in the training set and the remaining 25% in the testing set.

4.2 Metrics

The proposed solution was evaluated using the average precision, average recall, and average F1-score metrics [24, 25]; each metric is computed by averaging the corresponding per-sample performance over the N test samples in the test dataset. For the i-th test sample, let T_i be the set of true GO-terms and P_i be the set of predicted GO-terms. Then, the metrics are defined as:

  • Average Precision:

    Precision_avg = (1/N) Σ_{i=1}^{N} |T_i ∩ P_i| / |P_i|    (18)

  • Average Recall:

    Recall_avg = (1/N) Σ_{i=1}^{N} |T_i ∩ P_i| / |T_i|    (19)

  • Average F1-Score:

    F1_avg = (1/N) Σ_{i=1}^{N} 2 |T_i ∩ P_i| / (|T_i| + |P_i|)    (20)
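The per-sample averaged metrics of Eqs. 18 to 20 can be computed as in the short sketch below, where each test sample contributes a set of true and a set of predicted GO-terms.

```python
def average_metrics(true_sets, pred_sets):
    """Average precision, recall, and F1-score over N test samples (Eqs. 18-20)."""
    N = len(true_sets)
    prec = rec = f1 = 0.0
    for T_i, P_i in zip(true_sets, pred_sets):
        inter = len(T_i & P_i)
        prec += inter / len(P_i) if P_i else 0.0
        rec += inter / len(T_i) if T_i else 0.0
        f1 += 2 * inter / (len(T_i) + len(P_i)) if (T_i or P_i) else 0.0
    return prec / N, rec / N, f1 / N

# Two toy test samples with their true and predicted GO-term sets.
print(average_metrics([{"GO:1", "GO:2"}, {"GO:3"}],
                      [{"GO:1"}, {"GO:3", "GO:4"}]))   # (0.75, 0.75, ~0.67)
```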
                                        λ-Scaled Attention Network
Seg. size   Base.   Base.+Attn.   λ = 1.0   λ = 0.7   λ = 0.5   λ = 0.3   λ = 0.1
Dataset : Biological Process
80 53.71 55.57 56.83 56.77 56.69 56.86 57.27
100 52.33 54.95 56.96 56.85 57.09 57.01 56.96
120 51.51 55.41 57.15 57.04 57.16 57.04 56.39
Dataset : Molecular Function
80 66.13 67.41 69.89 69.86 69.57 69.02 69.01
100 64.41 65.25 69.92 69.77 69.17 68.99 68.01
120 62.26 65.93 69.80 69.67 69.29 68.93 67.32
TABLE I: Classification Accuracy with LSTM (Base. = Baseline, Attn. = Attention)
                                        λ-Scaled Attention Network
Seg. size   Base.   Base.+Attn.   λ = 1.0   λ = 0.7   λ = 0.5   λ = 0.3   λ = 0.1
Dataset : Biological Process
80 54.21 56.48 57.43 57.14 57.09 56.94 56.35
100 52.66 55.60 57.28 57.16 57.11 57.00 56.28
120 51.95 55.28 57.34 57.33 57.28 56.87 56.33
Dataset : Molecular Function
80 66.72 66.67 69.99 70.09 69.58 69.33 67.70
100 61.75 64.95 70.22 69.81 69.72 69.09 66.73
120 61.59 64.96 69.69 69.68 69.53 69.25 67.81
TABLE II: Classification Accuracy with GRU (Base. = Baseline, Attn. = Attention)
Fig. 5: Biological Process: LSTM-based (a) training and (b) validation loss curves.

4.3 λ-scaled attention network performance analysis

In this section, we comprehensively evaluate the effect of the proposed λ-scaled attention network on the function prediction task by comparing it with two baseline models used at the sub-sequence level. The baseline models are as follows:

  1. Baseline classifier: This model does not use the attention technique. Specifically, it uses only the last hidden state from the deep recurrent processing layer, denoted h_T, as the intermediate representation of the input protein segment.

  2. Baseline + attention based classifier: This model uses the standard attention technique proposed by Bahdanau et al. [13]. The intermediate representation of the input protein segment is the aggregation of the weighted hidden states.

There are two aspects to the evaluation: (i) the performance evaluation based on the overall F1-scores, and (ii) the evaluation of the training process with respect to learning and convergence rate. Learning is assessed by comparing the validation loss with the training loss: the smaller the gap between the training and validation losses, the more effective the learning, and vice versa.

Moreover, we also evaluated the proposed λ-scaled attention network by varying the embedding dimension of the protein words (discussed in Section 4.3.3). Note that the final classification model (an MLP network, already discussed in Section 3.2) is the same in all experiments.

4.3.1 Superior overall performances

Here, the overall performance (with respect to the F1-score) of the proposed λ-scaled attention network is evaluated against both baseline models. Note that both GRU and LSTM networks are explored as the deep recurrent processing layer in the above classifiers. The comparison results are shown in Table I (with LSTM units) and Table II (with GRU units) for both the BP and MF datasets.

The experiments using LSTM as the deep recurrent processing layer (Table I) demonstrate that the proposed λ-scaled attention network based classifiers (with λ = 1.0) are superior to both baseline models. For the BP dataset, with segment sizes 80, 100, and 120, the improvements recorded over the baseline classifiers were +3.12%, +4.63%, and +5.64%, respectively. The corresponding improvements over the baseline+attention based classifiers were +1.26%, +2.01%, and +1.74%, respectively.

Similarly, for the MF dataset (Table I), compared to the baseline classifiers with segment sizes 80, 100, and 120, improvements of +3.76%, +5.51%, and +7.54% were recorded, respectively. The corresponding improvements with respect to the baseline+attention based classifiers are +2.48%, +4.67%, and +3.87%, respectively. Moreover, the above observations also hold for all values of λ with the proposed λ-scaled attention network based classifiers.

Further experiments using GRU as the deep recurrent processing layer (Table II) also show superior results for the proposed λ-scaled attention network based classifiers. These improvements hold irrespective of the value of λ.

4.3.2 Efficient learning and faster convergence

The curves for the training and validation losses are used to study the effect of the proposed λ-scaled attention technique on the training efficiency and the convergence of the sub-sequence classifier. For a comparative analysis, the training and validation loss curves of the different classifiers are shown in Figures 5 (BP) and 6 (MF).

Fig. 6: Molecular Function: LSTM-based (a) training and (b) validation loss curves.

The large gap between the training and validation loss curves for the baseline classifiers (Figures 5 (BP) and 6 (MF)) demonstrates poor learning. The learning improves for the baseline+attention based classifiers, but the losses are still high. A significant reduction in the losses is observed for the proposed λ-scaled attention network based classifiers, demonstrating the best learning among all.

Moreover, Figures 5 (BP) and 6 (MF) also highlight the faster convergence of the proposed λ-scaled attention network based classifier compared to both the baseline and baseline+attention based classifiers. Convergence with the proposed λ-scaled attention network takes roughly half the time taken by the baseline methods. The convergence of the classifiers under the proposed λ-scaled attention network is, however, delayed as the value of λ is decreased.

                               LSTM                                    GRU
Dataset      Seg. size   d = 32   d = 48   d = 64   d = 80   d = 32   d = 48   d = 64   d = 80
Biological 80 56.74 58.93 59.73 60.17 57.43 59.32 59.69 59.96
Process 100 57.26 58.69 59.88 60.09 57.28 58.66 59.35 59.51
120 57.09 58.87 59.75 59.91 57.34 58.57 59.18 59.68
Molecular 80 69.73 71.58 71.61 71.59 69.99 70.63 71.53 71.39
Function 100 70.11 71.09 71.15 71.57 70.22 70.79 71.21 71.25
120 70.08 71.12 70.93 71.15 69.69 70.38 70.72 70.84
TABLE III: Classification Accuracy for the λ-scaled attention network based classifiers (d = embedding dimension)
                                                           Biological Process (%)                              Molecular Function (%)
S. No.   Approach   Recurrent cell / embedding dimension   Average Recall   Average Precision   Average F1-Score   Average Recall   Average Precision   Average F1-Score
1 MLDA [7] 49.42 52.61 49.27 58.29 60.20 57.91
2 ProtVecGen-100 [8] LSTM/32 53.15 54.42 52.11 63.93 65.25 63.39
3 ProtVecGen-Plus [8] LSTM/32 56.42 56.65 54.65 66.93 67.42 65.91
4 ProtVecGen-Plus + MLDA [8] 58.19 58.80 56.68 68.62 68.27 67.12
5 Proposed-100* LSTM/32 58.35 59.96 57.26 70.86 71.95 70.11
6 Proposed-100* GRU/32 58.54 59.95 57.28 71.01 72.12 70.22
7 Proposed-100* LSTM/64 60.59 63.17 59.88 71.85 73.07 71.15
8 Proposed-100* GRU/64 59.95 62.71 59.35 71.95 73.06 71.21
TABLE IV: Overall Classification Accuracy (* mark indicates proposed methods)

4.3.3 With respect to the embedding dimension parameter

Word embedding creates a dense representation for the protein words, where each dimension represents a hidden attribute crucial to the meaning of the word. A higher dimension is often advantageous, creating a more meaningful representation of the word; however, increasing the dimension also causes a significant increase in the number of trainable parameters. Here, the proposed λ-scaled attention network based classifier is evaluated by varying the embedding dimension of the protein words. While the default embedding dimension (d) is 32, the proposed λ-scaled attention network based classifier is also evaluated for other values of d. The results are shown in Table III with both GRU and LSTM as the deep recurrent layer. The number of trainable parameters when d = 32 is 5.20 million.

As shown in Table III, the average F1-score increases significantly when the embedding dimension of the protein words is increased from d = 32 to d = 48. The number of trainable parameters when d = 48 is 7.76 million. With LSTM, for the biological process dataset, the margins of improvement are +2.19%, +1.43%, and +1.78% for segment sizes 80, 100, and 120, respectively. The corresponding improvements for the molecular function dataset are +1.85%, +0.98%, and +1.04%, respectively. Similar performance enhancements were observed with GRU as the deep recurrent layer. Further increasing the embedding dimension to d = 64 results only in small improvements in the average F1-score, while marginal or no improvement is observed when increasing the embedding dimension up to d = 80.

We therefore choose to report the results obtained using d = 64 (with 10.33 million trainable parameters), which has significantly fewer trainable parameters than d = 80 (with 12.90 million parameters).

4.4 Performance comparison with work from the literature

Table IV lists the performance of the proposed method, denoted as Proposed-100* (LSTM/32), along with existing work from the literature on protein function prediction, where 'LSTM' and '32' indicate the recurrent cell and embedding dimension, respectively. The work from the literature includes: (i) the MLDA approach [7], which is based on tf-idf, (ii) ProtVecGen-100 [8], a single-segment based approach, (iii) the state-of-the-art ProtVecGen-Plus [8], a multi-segment based approach, and (iv) ProtVecGen-Plus+MLDA [8], a hybrid approach.

Additionally, the approaches were also evaluated with respect to their ability to deal with protein sequences of different lengths. This is done by splitting the test datasets into the following groups based on sequence length: G1: (0 – 200), G2: (201 – 500), G3: (501 – 1000), and G4: (> 1000), where the range of each group specifies the permissible number of amino acid residues in a sequence. The results are shown in Figures 7(a) (BP) and 7(b) (MF).

As shown in Table IV, compared to the classifier trained on the MLDA features [7], the classifiers trained with the Proposed-100* (LSTM/32) features improve significantly: the margins of improvement in the average F1-scores are +7.99% for the BP dataset and +12.20% for the MF dataset. The Proposed-100* (LSTM/32) features also show large improvements when validated on the protein sequences in groups G2, G3, and G4 (see Figures 7(a) (BP) and 7(b) (MF)).

In addition, the Proposed-100* (LSTM/32) features outperform both the single-segment based ProtVecGen-100 [8] and the multi-segment based ProtVecGen-Plus [8] features, with respective improvements of +5.15% and +2.61% for the BP dataset. The corresponding improvements for the MF dataset are +6.72% and +4.20%, respectively. Moreover, the evaluation of Proposed-100* (LSTM/32) on protein sequences of different lengths, shown in Figures 7(a) (BP) and 7(b) (MF), clearly establishes the superiority of the proposed features.

Finally, comparison with the hybrid ProtVecGen-Plus+MLDA [8] approach also shows better results for the Proposed-100* (LSTM/32) features, by margins of +0.58% (BP) and +2.99% (MF). The recall and precision values exhibit similar behavior. Furthermore, the Proposed-100* (LSTM/32) based classifier again proves superior when validated against protein sequences of different lengths, as shown in Figures 7(a) (BP) and 7(b) (MF). The Proposed-100* (GRU/32) features exhibit similarly improved performance. The best overall results, however, are observed with the Proposed-100* (LSTM/64) and Proposed-100* (GRU/64) features.

Fig. 7: Length-wise evaluation for (a) Biological Process and (b) Molecular Function; for the proposed methods, the configurations are denoted in brackets.

5 Conclusion

In this paper, we explored the attention technique for protein sequences to help recognize key protein words in a sequence. These key words are useful for converting sequences into meaningful representations for the classification task. However, the standard attention technique, in general, suffers from the weak semantics of the protein words, which makes it difficult to construct a good representation for protein sequences. The challenges faced by the standard attention technique that are addressed in this work are (i) the vanishing attention score problem and (ii) the high variations in the attention distribution.

To deal with the above challenges, we introduced a novel λ-scaled attention technique to find a good representation of protein sequences, where λ is a parameter that assists in controlling the training process. The proposed attention technique not only solves the vanishing attention score problem but also reduces the high variations in the attention distribution. This has a significant effect toward making the training more efficient and faster compared to the standard attention technique. The proposed λ-scaled attention technique is further used to build the λ-scaled attention network for the protein function prediction task at the protein sub-sequence level.

Overall, for the protein function prediction task, the proposed λ-scaled attention technique demonstrated a significant effect on the predictive performance, outperforming other existing approaches by substantial margins. More importantly, the performance across protein sequences of different lengths is consistently maintained.

References

  • [1] Li, F., 2016. “Structure, function, and evolution of coronavirus spike proteins”. Annual review of virology, 3, pp.237-261.
  • [2] Walls, A., Park, Y.J., Tortorici, M.A., Wall, A., Mcguire, A. and Veesler, D., 2020. “Structure, Function, and Antigenicity of the SARS-CoV-2 Spike Glycoprotein”. Cell, 181(2), pp.281-292.
  • [3] Durmus Tekir, S., Cakir, T. and Ulgen, K., 2012. “Infection strategies of bacterial and viral pathogens through pathogen–human protein–protein interactions”. Frontiers in Microbiology, 3, p.46.
  • [4] Heinzinger, M., Elnaggar, A., Wang, Y., Dallago, C., Nechaev, D., Matthes, F. and Rost, B., 2019. "Modeling aspects of the language of life through transfer-learning protein sequences". BMC Bioinformatics, 20(1), p.723.
  • [5] Metzker, Michael L. “Sequencing technologies—the next generation.” Nature Reviews Genetics 11, no. 1 (2010): 31.
  • [6] Clark, Wyatt T., and Predrag Radivojac.“Analysis of protein function and its prediction from amino acid sequence.” Proteins: Structure, Function, and Bioinformatics 79.7 (2011): 2086-2096.
  • [7] Wang, H., Yan, L., Huang, H., & Ding, C. (2017). “From Protein Sequence to Protein Function via Multi-Label Linear Discriminant Analysis”. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), 14(3), 503-513.
  • [8] Ranjan, A., Fahad, M.S., Fernandez-Baca, D., Deepak, A. and Tripathi, S., 2019. “Deep Robust Framework for Protein Function Prediction using Variable-Length Protein Sequences”. IEEE/ACM transactions on computational biology and bioinformatics. doi: 10.1109/TCBB.2019.2911609.
  • [9] Hu, S., Ma, R. and Wang, H., 2019. “An improved deep learning method for predicting DNA-binding proteins based on contextual features in amino acid sequences”. PloS one, 14(11), p.e0225317.
  • [10] Yi, H.C., You, Z.H., Zhou, X., Cheng, L., Li, X., Jiang, T.H. and Chen, Z.H., 2019. “ACP-DL: a deep learning long short-term memory model to predict anticancer peptides using high-efficiency feature representation”. Molecular Therapy-Nucleic Acids, 17, pp.1-9.
  • [11] Cao, R., Freitas, C., Chan, L., Sun, M., Jiang, H. and Chen, Z., 2017. "ProLanGO: protein function prediction using neural machine translation based on a recurrent neural network". Molecules, 22(10), p.1732.
  • [12] Ben-Hur, A. and Brutlag, D., 2006. “Sequence motifs: highly predictive features of protein function”. In Feature Extraction . Springer, Berlin, Heidelberg, pp. 625-645.
  • [13] Bahdanau, D., Cho, K. and Bengio, Y., 2014. “Neural machine translation by jointly learning to align and translate”. arXiv preprint arXiv:1409.0473.
  • [14] Yang, Z., Yang, D., Dyer, C., He, X., Smola, A. and Hovy, E., 2016, June. “Hierarchical attention networks for document classification”. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies, pp. 1480-1489.
  • [15] Chen, H., Sun, M., Tu, C., Lin, Y. and Liu, Z., 2016, November. "Neural sentiment classification with user and product attention." In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (pp. 1650-1659).
  • [16] Zhou, X., Wan, X. and Xiao, J., 2016, November. Attention-based LSTM network for cross-lingual sentiment classification. In Proceedings of the 2016 conference on empirical methods in natural language processing (pp. 247-256).
  • [17] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. and Salakhutdinov, R., 2014. “Dropout: a simple way to prevent neural networks from overfitting.” The Journal of Machine Learning Research, 15(1), pp.1929-1958.
  • [18] Cho, K., van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H. and Bengio, Y., 2014, October. “Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation”. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1724-1734).
  • [19] Hochreiter, S. and Schmidhuber, J., 1997.“Long short-term memory”. Neural computation, 9(8), pp.1735-1780.
  • [20] Graves, A. and Schmidhuber, J., 2005. “Framewise phoneme classification with bidirectional LSTM and other neural network architectures.” Neural Networks, 18(5-6), pp.602-610.
  • [21] Kingma, D.P. and Ba, J., 2014. “Adam: A method for stochastic optimization”. arXiv preprint arXiv:1412.6980.
  • [22] UniProt Consortium, 2014. “UniProt: a hub for protein information”. Nucleic Acids Research, 43(D1), pp.D204-D212.
  • [23] Ashburner, M., Ball, C.A., Blake, J.A., Botstein, D., Butler, H., Cherry, J.M., Davis, A.P., Dolinski, K., Dwight, S.S., Eppig, J.T. and Harris, M.A., 2000. “Gene Ontology: tool for the unification of biology”. Nature Genetics, 25(1), p.25.
  • [24] Sorower, M.S., 2010. “A literature survey on algorithms for multi-label learning”. Oregon State University, Corvallis, 18, pp.1-25.
  • [25] Zhang, M.L. and Zhou, Z.H., 2014. “A review on multi-label learning algorithms”. IEEE transactions on Knowledge and Data Engineering, 26(8), pp.1819-1837.