The study of Protein-Protein Interactions (PPIs) is crucial for understanding biological processes such as DNA replication and transcription, metabolic pathways and cellular organization. Owing to this fact, several databases, such as MINT zanzoni2002mint , BIND bader2003bind , and SwissProt bairoch2000swiss , have been manually curated to store protein interaction information in structured, standard formats. However, the rapid growth of the biomedical literature has opened a significant gap between the availability of protein interaction articles and their automatic curation. As a result, the majority of protein interaction information remains hidden in the textual content of the biomedical literature. Moreover, the biomedical literature is growing at an exponential pace: over the last 20 years, the overall size of MEDLINE has increased at a 4.2% compounded annual growth rate, and the number of new entries in the MEDLINE database has grown at a 3.1% compounded annual growth rate. MEDLINE currently contains more than publications, of which more than three million were published in the last 5 years alone hunter2006biomedical . Hence, owing to the exponential rise lu2011pubmed ; khare2014accessing
and complexity of biological information, intelligent information extraction techniques to assist biologists in detecting, curating and maintaining databases are becoming crucial. This has led to a surge of interest in the Biomedical Natural Language Processing (BioNLP) community in the automatic detection and extraction of PPI information.
Determining PPIs in scientific text is the process of recognizing how two or more proteins in a given biomedical sentence are related. We exemplify interaction types between protein pairs in Table-1, where (Bnrlp-Rho4p) forms an interacting protein pair and (Bnrlp-Rho1p) is a non-interacting protein pair.
|Sentence||Protein Entities||Interacted Protein Pair||Non-interacted Protein Pair|
|–||Bnrlp, Rho4p, Rho1p||Bnrlp-Rho4p||Bnrlp-Rho1p|
|–||Stat3, Stat1, Stat5, IL-10||Stat3-IL-10||–|
The majority of existing systems treat this task as a binary classification problem: identifying whether or not an interaction occurs between a pair of proteins. Among the most explored techniques for the PPI task are kernel-based methods bunescu2005shortest ; airola2008graph . The strength of kernel-based methods lies in a large number of carefully crafted features. However, the extraction of these features relies on other NLP tools such as ABNER settles2005abner , MedT-NER saetre2007akane or PowerBioNE zhou2004recognizing . With the recent success of deep learning techniques yadav2017entity ; yadav2018multi ; yadav2016deep ; ekbal2016deep ; kumar2016recurrent , methods exploring latent features have emerged as strong alternatives to traditional machine learning based techniques. Some of the distinguished studies hua2016shortest ; choi2016extraction
for the PPI extraction task utilize convolutional neural networks (CNNs), which have shown significant performance improvements over the existing state-of-the-art techniques. Some other popular neural network based models for relation extraction have been reported in miwa2016end ; liu2015dependency . However, these systems are mostly designed for identifying relationships in newswire articles, and they fail to produce comparable performance on biomedical literature owing to the complexity of biomedical text. Biomedical named entities do not have a standard nomenclature, and this arbitrariness increases the difficulty of capturing semantic relationships between the entities (proteins). Moreover, different protein entities often have similar names, making it more difficult to capture the contextual information.
Motivated by these observations, in this paper we propose a Shortest Dependency Path based Bi-directional Long Short Term Memory (sdpLSTM) architecture rnn1 to identify PPI pairs from text. The proposed method differs from previous studies in two respects. First, utilizing the dependency relationships between the protein names, we generate the Shortest Dependency Path (SDP) of each sentence. This allows us to draw more syntax-aware inferences about the roles of the proteins in a sentence compared to techniques based on classical kernel methods. Second, we investigate the significance of Part-of-Speech (PoS) and position embedding features in improving the learning of the sdpLSTM. In contrast to the systems proposed by hua2016shortest and choi2016extraction , we employ LSTM based neural network models rnn1 instead of a Convolutional Neural Network (CNN) lecun1995convolutional . In a CNN, features are generated by performing pooling over the entire sentence based on continuous $n$-grams, where $n$ refers to the filter size. This puts constraints on longer sentences where long-term dependencies exist. Our method circumvents this shortcoming of the CNN architecture by utilizing pooling techniques for encoding variable-length features. In general, a Bi-LSTM can keep track of preceding and succeeding words; as such, when the pooling operation is performed on the output of the sdpLSTM, we obtain optimal features from the entire sentence carrying information not just about $n$-grams but about the complete context of the sentence.
In contrast, the existing methods miwa2016end ; peng2017deep generally consider the whole sentence as input. The drawback of these techniques is that such representations fail to describe the relationship between two target entities that appear in the same sentence at a long distance from each other. Considering these problems, in our proposed technique we exploit dependency-parsing features to examine the sentence and capture the Shortest Dependency Path, from which we generate SDP based word embeddings. To further inject explicit linguistic information and boost the performance of the LSTM architecture, we include the PoS information of the SDP words to assist the LSTM based network. The position w.r.t. the proteins and the part-of-speech (PoS) are prominent features for identifying protein interaction information. PoS provides useful evidence that helps to detect important grammatical properties: words assigned the same PoS possess similar syntactic behavior, which provides an important clue to the system for inferring the interaction between a protein pair.
The basic structure of a sentence can be obtained by determining the position of a protein word and the words occurring in its vicinity, which provides pivotal clues for identifying interactions in sentences. The extraction of SDP based word embeddings rather than full sentence embeddings, and their usage as input to an LSTM network in combination with the position and PoS features, is the core contribution of our proposed work.
The key attributes of the proposed work are summarized below:
We propose the shortest dependency path based Bi-LSTM model (sdpLSTM) that provides state-of-the-art performance for relation extraction.
We explore latent features such as Part-of-Speech (PoS) and position w.r.t. the proteins, which were found to be effective in extracting interacting protein pairs.
We demonstrate that word embedding models learned on the PubMed, PMC and Wikipedia corpora are more powerful than an internal embedding model or a model trained on a general corpus such as the news corpus (https://code.google.com/archive/p/word2vec/).
Evaluation on two different benchmark corpora, namely AiMed & BioInfer, establishes that our proposed approach is generic in nature. Please note that these two datasets were created by following different protein annotation guidelines.
2 Related Works
Pattern-based Model: Preliminary studies conducted by blaschke1999automatic and ono2001automated explored pre-specified patterns and rules for the PPI task. However, such systems fail to identify complex cases, such as relationships expressed in various coordinating and relational clauses. For sentences containing complex relations between three or more entities, the approach usually yields erroneous results. For example,
“The gap1 mutant blocked stable association of Ste4p with the plasma membrane, and the ste18 mutant blocked stable association of Ste4p with both plasma membranes and internal membranes.”
In huang2004discovering , the authors proposed a technique based on dynamic programming to automatically discover patterns. The system proposed in bunescu2005comparative also studied the performance of rule-based algorithms: they developed two models, the first making use of the RAPIER rule-based system and the other relying on longest common subsequences.
Using Dependency Parsing: Here we describe works that take a more syntax-aware approach, such as full and partial (shallow) parsing. In partial parsing, the sentence structure is divided partially and dependencies are generated locally within each phrase, while in full parsing the whole sentence is considered to capture dependencies. giuliano2006exploiting developed a system solely based on shallow syntactic information. They further incorporated kernel functions to combine information from the entire sentence and the local contexts around the interacting entities. The work reported in erkan2007semi focused on extracting the SDP between the protein pairs by defining cosine similarity and edit distance functions via semi-supervised learning. Some of the other prominent works include the studies conducted by miyao2009evaluating and Garg . Other popular studies based on full parsing include the works reported in temkin2003extraction ; daraselia2004extracting ; yakushiji2005biomedical .
Kernel-based Model: Bunescu and Mooney bunescu2005shortest first proposed the idea of using kernel methods to extract PPI based on the SDP. Some of the effective kernel-based techniques for PPI task include graph kernel airola2008all , bag-of-word (BoW) kernel saetre2007syntactic , edit-distance kernel erkan2007semi , all-path kernel airola2008graph and tree kernel Ma ; Zhang .
Deep Learning based Model: Recent studies show the applicability of deep learning models for the PPI task hua2016shortest ; choi2016extraction . The work reported in hua2016shortest made use of Convolutional Neural Network (CNN) for developing the PPI based system. choi2016extraction proposed a CNN based model utilizing several handcrafted features exploiting lexical, syntactic and semantic level information in combination with word embeddings.
3 Proposed Methodology
In this study, we present a novel model to predict protein interaction pairs from text. Our model jointly models proteins and relations in a single architecture by exploiting the Bi-directional Long Short Term Memory (Bi-LSTM) technique, resulting in the proposed Shortest Dependency Path Bi-LSTM (sdpLSTM) model. The dependency path between entities captures the information relevant for identifying their relation. Further, the architecture utilizes the positional information of the proteins in the sentence and PoS embeddings as latent features to improve the learning of the sdpLSTM model. We begin by extracting SDP sentences and computing the latent features; embeddings are generated for each feature and passed as input to the Bi-LSTM unit. We describe each phase in the succeeding subsections.
3.1 Shortest Dependency Path (SDP)
The input to the sdpLSTM is the SDP between a protein pair. For this purpose, we exploit the dependency parse tree of the sentence. It describes the syntactic constituent structure of the sentence by annotating edges with dependency types (e.g. subject, auxiliary, modifier) and captures the semantic predicate-argument relationships between the words. bunescu2005shortest first proposed the idea of using the dependency parse tree for relation extraction: they designed a kernel function exploring the shortest path between the entities to capture their relations. The main intuition is that the shortest path reveals non-local dependencies within sentences, which can help in capturing the relevant information. The shortest path between the protein pair generally captures the essential information needed to identify their relationship (aspects of sentence construction such as mood, modality and sometimes negation, which can significantly alter or even reverse the meaning of the sentence). The approach proposed in culotta2004dependency proved to be significantly better than the dependency tree kernel-based model. We follow this idea and use SDPs for extracting interacting protein pairs.
As illustrated in Figure 2, the word ‘bind’ on the SDP carries important information for predicting the interaction between the protein pair. The dependency relation here is bound by the verb argument, and ‘bind’, as an interaction verb, carries essential evidence for PPI. For the PPI task, capturing such dependency relations is important.
To extract dependency relations, we use the Enju Parser (http://www.nactem.ac.uk/enju/), a syntactic parser for English that can effectively analyze the syntactic and semantic structures of biomedical text and provide predicate-argument information. We generate a graph for every sentence containing at least two protein entities, where each word corresponds to a node of the graph and the edges between the nodes (dependency relations) are obtained from the parser. We utilize the Breadth First Search (BFS) algorithm lee1961algorithm to calculate the shortest path between the protein pair. Only the words occurring on the SDP, instead of all the words in the sentence, take part in training to generate the SDP embedding.
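The SDP extraction step can be sketched as a plain BFS over the dependency graph. This is a minimal sketch: the edge list below is a hand-made toy example, not actual Enju parser output, and `shortest_dependency_path` is a hypothetical helper name.

```python
from collections import deque

def shortest_dependency_path(edges, source, target):
    """Breadth-first search over an undirected word graph.

    edges  -- iterable of (head_word, dependent_word) pairs from the parser
    Returns the list of words on the shortest path, or None if disconnected.
    """
    adjacency = {}
    for head, dep in edges:
        adjacency.setdefault(head, []).append(dep)
        adjacency.setdefault(dep, []).append(head)

    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for neighbour in adjacency.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

# Toy dependency edges for "Prot1 binds Prot2 in vivo"
edges = [("binds", "Prot1"), ("binds", "Prot2"), ("binds", "in"), ("in", "vivo")]
print(shortest_dependency_path(edges, "Prot1", "Prot2"))  # ['Prot1', 'binds', 'Prot2']
```

Only the words on the returned path would then be used to build the SDP embedding.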
|SDP Words||PoS||PoS One-hot||PoS Embedding||Dist. to Prot1||Dist. to Prot2||Position One-hot (Prot1)||Position One-hot (Prot2)||Position Embedding (Prot1)||Position Embedding (Prot2)|
|Prot1||NN||10000000||[0.00171600 …0.0033500]||0||-6||0000000000||0000111111||[0.03141600 …0.9035500]||[0.1117600 …0.0223500]|
|regulator||NN||00000000||[0.99121600 …0.0233500]||1||-5||0000000001||0000011111||[0.77171600 …0.4858500]||[0.83191600 …-0.1133500]|
|between||IN||00100000||[0.25191600 …0.1739500]||2||-4||0000000011||0000001111||[0.33171600 …-0.8833500]||[0.58961600 …0.7189200]|
|Interaction||NN||10000000||[0.17171219 …0.7583350]||3||-3||0000000111||0000000111||[0.75171600 …0.5533500]||[0.99171600 …0.7633500]|
|and||CC||00100000||[0.17001600 …0.3030350]||4||-2||0000001111||0000000011||[0.78117600 …-0.033500]||[0.72171600 …0.1233500]|
|repression||NN||10000000||[0.17858500 …0.8835300]||5||-1||0000011111||0000000001||[0.45897600 …-0.0522500]||[0.7800100 …0.3311500]|
|Prot2||NN||10000000||[0.98581600 …0.0263500]||6||0||0000111111||0000000000||[0.77451600 …0.8985500]||[0.1745100 …0.3323500]|
3.2 Latent Feature Encoding Layer
Along with the SDP embedding, we design domain-independent features to assist our model in becoming more generic and adaptable. We explore PoS and position of each word as a feature. An exemplar illustration of latent feature encoding is provided in Table-2.
PoS Feature: This represents the PoS of each word occurring in the vicinity of the SDP. We use the Genia Tagger (http://www.nactem.ac.uk/GENIA/tagger/) to extract the PoS information of each token. Every PoS tag is encoded as a unique eight-dimensional one-hot vector, which is fed to a neural network (NN) based encoder. An auto-encoder vincent2010stacked is employed to transform the sparse PoS features into dense real-valued feature vectors, converting the one-hot representation into a dense feature representation. We use the Adadelta optimizer adadelta with squared error loss to train our auto-encoder model.
Let $x$ represent the one-hot vector of a PoS tag corresponding to a word. The auto-encoder learns transition functions $\phi$ and $\psi$ such that the reconstruction error (squared error) is minimized. The functions $\phi$ and $\psi$ are called the encoder and the decoder function, respectively. Mathematically, it can be written as follows:

$$\phi, \psi = \operatorname*{arg\,min}_{\phi, \psi} \; \lVert x - \psi(\phi(x)) \rVert^2$$

where $\phi : \mathcal{X} \rightarrow \mathcal{F}$, $\psi : \mathcal{F} \rightarrow \mathcal{X}$.
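The PoS auto-encoder can be illustrated with a minimal linear auto-encoder trained by plain gradient descent. This is a sketch only: the paper trains with Adadelta, and the dense code dimension of 3 used here is an illustrative assumption, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Eight PoS tags as one-hot rows; compress to a 3-dimensional dense code
# (the actual dense dimension is not specified in the paper).
X = np.eye(8)
W_enc = rng.normal(scale=0.1, size=(8, 3))   # encoder phi
W_dec = rng.normal(scale=0.1, size=(3, 8))   # decoder psi

def reconstruction_error(X, W_enc, W_dec):
    """Mean squared reconstruction error ||x - psi(phi(x))||^2."""
    return np.mean((X - X @ W_enc @ W_dec) ** 2)

lr = 0.1
initial = reconstruction_error(X, W_enc, W_dec)
for _ in range(500):                          # plain gradient descent steps
    H = X @ W_enc                             # dense PoS codes
    R = H @ W_dec - X                         # reconstruction residual
    W_dec -= lr * H.T @ R / len(X)
    W_enc -= lr * X.T @ (R @ W_dec.T) / len(X)

assert reconstruction_error(X, W_enc, W_dec) < initial
dense_pos = X @ W_enc                         # 8 sparse tags -> 8 dense 3-d vectors
```

After training, `X @ W_enc` plays the role of the dense PoS feature fed to the network.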
Position Feature: This feature helps in identifying the significant interacting tokens between the two target protein entities. The position feature computes the relative distances of a word with respect to the protein mentions, using a binary representation of the distance. We extract this feature on the SDP of the target protein pair. It is a two-dimensional tuple denoting the distances of a token from the two target proteins. For example, consider the sentence ‘Prot1 regulator between interaction and repression Prot2’: the relative distances of the word ‘interaction’ with respect to Prot1 and Prot2 are 3 and -3, respectively (cf. Table-2). Relative distances are then mapped to 10-dimensional binary vectors. From Table-3, we can observe that more attention is given to the words near the protein mentions, particularly to the words occurring in the vicinity of 10 surrounding words. Moreover, words whose relative distances exceed 10 are all treated equally.
Intuitively, the words nearer to the target words are more informative than the farther ones. We performed experiments to determine the optimal dimension by varying the distance (from 5 to 12) of the most informative words with respect to the proteins, as shown in Table 4.3. We notice that the system performs well when the maximum relative distance of the informative word is within the range of 10 w.r.t. the protein term. As we follow a binary representation of the distance, the position feature w.r.t. each protein is therefore represented using a 10-dimensional feature vector.
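Judging from the encoding pattern visible in Table-2, the binary position vector appears to be a thermometer-style code with min(|distance|, 10) ones, right-aligned. A minimal sketch under that assumption:

```python
def position_vector(distance, size=10):
    """Thermometer-style binary encoding of a relative distance, as
    suggested by the example rows in Table-2: min(|distance|, size) ones,
    right-aligned. Distances beyond `size` are all treated identically."""
    ones = min(abs(distance), size)
    return [0] * (size - ones) + [1] * ones

print(position_vector(-6))  # [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
print(position_vector(2))   # [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
print(position_vector(15))  # capped at ten ones
```

Each SDP token then carries one such vector per target protein.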
Similar to the PoS feature, every position feature is encoded as a 10-dimensional vector which is fed into an auto-encoder. Using the learned auto-encoder model, we convert the sparse position feature vector into a dense real-valued feature vector.
3.3 Embedding Layer
Word embedding induces a real-valued latent semantic or syntactic vector for each word from a large unlabeled corpus by using continuous-space language models tang2014evaluating . In the embedding layer we obtain a real-valued vector corresponding to each word of the SDP. Let us assume an SDP sentence of size $n$ and a pre-trained word embedding matrix $W^{word}$. A real-valued vector representation $e_t$ for a given word $w_t$ can be obtained as follows:

$$e_t = W^{word} v_t$$

where $v_t$ is the one-hot vector representation of the word $w_t$. Thereafter, we augment the PoS and position embeddings (obtained from the previous layer) to this vector representation:

$$x_t = e_t \oplus p_t \oplus d_t$$

where $p_t$ and $d_t$ are the PoS embedding and the position embedding, respectively, and $\oplus$ denotes the concatenation operator. In our work, we use publicly available word embeddings (http://bio.nlplab.org/, dimensions) pre-trained on a combination of PubMed and PMC articles together with text extracted from a recent English Wikipedia dump. The performance of word embeddings depends on various hyperparameter settings such as the vector dimension, context window size, learning rate, sample size, etc. Pyysalo et al. moen2013distributional released this pre-trained biomedical embedding after a deep analysis of various hyperparameter settings to obtain an optimal embedding. Utilizing the pre-trained word embedding not only minimizes the time cost but also helps in obtaining the best parameters.
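The embedding lookup and feature concatenation can be sketched as follows. The dimensions (200-d word vectors, 3-d PoS codes, 10-d position codes) and the helper `token_representation` are illustrative assumptions, not values fixed by the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = {"Prot1": 0, "regulator": 1, "between": 2, "interaction": 3}
W = rng.normal(size=(len(vocab), 200))   # stand-in for pre-trained word embeddings

def token_representation(word, pos_emb, position_emb):
    """x_t = word embedding (+) PoS embedding (+) position embedding."""
    return np.concatenate([W[vocab[word]], pos_emb, position_emb])

# One SDP token with dummy 3-d PoS and 10-d position embeddings
x = token_representation("interaction", np.zeros(3), np.zeros(10))
assert x.shape == (213,)   # 200 + 3 + 10
```

The resulting per-token vectors form the input sequence to the Bi-LSTM unit.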
3.4 Bi-LSTM unit
Bi-directional LSTM consists of three layers as discussed below:
3.4.1 Sequence Layer
The sequence layer takes the input from the embedding layer and outputs a linear sequence. A recurrent neural network (RNN) is a powerful technique for encoding a sentence by capturing long-term dependencies. However, for long sequences it often suffers from the vanishing or exploding gradient problems rnn1 ; Pascanu2012UnderstandingTE . This problem can be overcome by the gating and memory mechanisms introduced in the LSTM hochreiter1997long , which provides a different way to compute the hidden states.
The feature word sequence is represented by a bidirectional LSTM-RNN rnn1 . The LSTM unit at the $t$-th word consists of an input gate $i_t$, a forget gate $f_t$, an output gate $o_t$, a memory cell $c_t$ and a hidden state $h_t$. The input to this unit is a $k$-dimensional input vector $x_t$, the previous hidden state $h_{t-1}$, and the previous memory cell $c_{t-1}$; the unit computes the new states as follows:

$$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$$
$$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$$
$$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c)$$
$$h_t = o_t \odot \tanh(c_t)$$

Here, $\sigma$ and $\odot$ denote the sigmoid function and element-wise multiplication, respectively, and the $W$, $U$ and $b$ terms are the weight matrices and bias vectors. The LSTM unit at the $t$-th word takes as input the concatenation of the word embedding, PoS embedding and position embedding obtained from the auto-encoder: $x_t = e_t \oplus p_t \oplus d_t$. We calculate the forward hidden state $\overrightarrow{h_t}$ and the backward hidden state $\overleftarrow{h_t}$; the final hidden state is computed by concatenating both: $h_t = [\overrightarrow{h_t} ; \overleftarrow{h_t}]$.
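The standard LSTM gate computations and the bidirectional pass can be sketched directly in NumPy. This is a didactic forward pass with randomly initialized parameters, not the trained Keras model used in the paper.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, params):
    """One LSTM unit update: input, forget and output gates plus memory cell."""
    Wi, Ui, bi, Wf, Uf, bf, Wo, Uo, bo, Wc, Uc, bc = params
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    i = sigmoid(Wi @ x + Ui @ h_prev + bi)        # input gate
    f = sigmoid(Wf @ x + Uf @ h_prev + bf)        # forget gate
    o = sigmoid(Wo @ x + Uo @ h_prev + bo)        # output gate
    c = f * c_prev + i * np.tanh(Wc @ x + Uc @ h_prev + bc)
    h = o * np.tanh(c)
    return h, c

def bilstm(seq, params_fw, params_bw, hidden=4):
    """Run forward and backward LSTM passes and concatenate the states."""
    h_f = np.zeros(hidden); c_f = np.zeros(hidden)
    h_b = np.zeros(hidden); c_b = np.zeros(hidden)
    fwd, bwd = [], []
    for x in seq:                                  # left-to-right pass
        h_f, c_f = lstm_step(x, h_f, c_f, params_fw)
        fwd.append(h_f)
    for x in reversed(seq):                        # right-to-left pass
        h_b, c_b = lstm_step(x, h_b, c_b, params_bw)
        bwd.append(h_b)
    bwd.reverse()
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(2)
def make_params(d_in, d_h):
    # Wi, Ui, bi, Wf, Uf, bf, Wo, Uo, bo, Wc, Uc, bc
    return [rng.normal(scale=0.1, size=s)
            for s in [(d_h, d_in), (d_h, d_h), d_h] * 4]

seq = [rng.normal(size=6) for _ in range(5)]       # 5 tokens, 6-d features
H = bilstm(seq, make_params(6, 4), make_params(6, 4))
assert len(H) == 5 and H[0].shape == (8,)          # one 2d-dim state per token
```

Each element of `H` corresponds to the concatenated state for one SDP token.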
3.4.2 Max-pooling Layer
The max-pooling layer takes into account the hidden states of all the words in the SDP, instead of only the last word’s hidden state. Max pooling takes the maximum over each position across the entire SDP sentence, where each hidden state is obtained by concatenating the forward hidden state $\overrightarrow{h_t}$ and the backward hidden state $\overleftarrow{h_t}$. Let an SDP sentence of length $n$ have a sequence of hidden states $h_1, h_2, \ldots, h_n$; then the max-pooling layer computes the pooled hidden state at position $j$ as follows:

$$h^{pool}_j = \max_{1 \le t \le n} h_{t,j}$$

Finally, the pooled hidden state $h^{pool}$ for an SDP sentence is obtained by concatenating the pooled values of every position. Its dimension will be $2d$, where $d$ denotes the dimension of the unidirectional hidden state.
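Max pooling over the hidden-state sequence is a single element-wise maximum across token positions:

```python
import numpy as np

# Hidden states of an SDP sentence: n tokens, each a 2d-dimensional
# concatenation of forward and backward LSTM states (here n=5, 2d=8).
H = np.random.default_rng(3).normal(size=(5, 8))
h_pool = H.max(axis=0)          # element-wise max over token positions
assert h_pool.shape == (8,)     # one pooled value per hidden dimension
assert np.all(h_pool >= H[0])   # each pooled value dominates every row
```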
3.4.3 Multilayer Perceptron (MLP) Model
The output of the sequence layer is fed into a fully connected layer with $m$ hidden layers. More formally, given the pooled sequence-layer output $h^{pool}$, the network calculates the hidden output $z$ as follows:

$$z = f(W_h \, h^{pool} + b_h)$$

where $W_h$ is the weight matrix between the output of the sequence layer and the hidden layer, and $b_h$ is a bias vector. Thereafter, the output $z$ is transformed into $o$ by a weight matrix $W_o$, i.e. $o = W_o z$, where the number of rows of $W_o$ equals the number of required labels $L$. In our case, $L = 2$.
Finally, the transformed output $o$ is fed into the softmax layer, which provides the output probability of each label. Mathematically, it can be written as follows:

$$p(y = j \mid s) = \frac{\exp(o_j)}{\sum_{k=1}^{L} \exp(o_k)}$$
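The softmax transformation over the two label scores can be sketched as:

```python
import numpy as np

def softmax(o):
    """Numerically stable softmax over the label scores o."""
    e = np.exp(o - o.max())
    return e / e.sum()

# Two labels: interacting vs non-interacting protein pair
p = softmax(np.array([2.0, 0.5]))
assert abs(p.sum() - 1.0) < 1e-9 and p[0] > p[1]
```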
4 Datasets and Experimental Setup
The proposed model is evaluated on two popular benchmark corpora for PPI, namely AiMed and BioInfer (http://corpora.informatik.hu-berlin.de/). The AiMed dataset is generated from abstracts extracted from the Database of Interacting Proteins (DIP). It contains sentences with protein entities manually tagged with PPI relations, and is recognized as a standard dataset for the PPI extraction task.
The BioInfer corpus, created by the Turku BioNLP group (http://bionlp.utu.fi/), consists of sentences. In our work, we treat an interacting protein pair as a positive instance and a non-interacting pair as a negative instance. Since negative instances are not directly given in the dataset, we consider all possible pairs of proteins in a given sentence and treat those protein pairs whose relations are not annotated as negative instances. Thereby, we obtain negative and positive instances for the AiMed corpus; similarly, for the BioInfer corpus, we obtain negative instances against positive interactions. It can be observed that both datasets are imbalanced, as they are strongly biased towards the negative examples. Statistics of these datasets are shown in Table-4.
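The negative-instance construction described above can be sketched as follows; `make_instances` is a hypothetical helper name, and the example reuses the protein set from Table-1.

```python
from itertools import combinations

def make_instances(proteins, interacting_pairs):
    """Enumerate all unordered protein pairs in a sentence; a pair is
    positive iff it is annotated as interacting, otherwise negative."""
    gold = {frozenset(p) for p in interacting_pairs}
    positives, negatives = [], []
    for pair in combinations(proteins, 2):
        (positives if frozenset(pair) in gold else negatives).append(pair)
    return positives, negatives

pos, neg = make_instances(["Bnrlp", "Rho4p", "Rho1p"], [("Bnrlp", "Rho4p")])
print(pos)  # [('Bnrlp', 'Rho4p')]
print(neg)  # [('Bnrlp', 'Rho1p'), ('Rho4p', 'Rho1p')]
```

Applied over a whole corpus, this enumeration is what produces the strong bias towards negative examples noted above.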
The protein entities are generalized with protein IDs to make the model insensitive to biases associated with the names of the proteins. This makes every protein unique and prevents the model from simply memorizing highly interacting protein pairs. We perform tokenization with the Genia Tagger (http://www.nactem.ac.uk/GENIA/tagger/), and the tokenized sentence is parsed with the Enju parser to obtain the dependency relations.
|Dataset||Interacted Pair||Non-interacted Pair||Ratio|
4.3 Network Training and Hyper-parameters
The objective of training the LSTM model is to minimize the binary cross-entropy cost function:

$$J(\theta) = -\frac{1}{|X|} \sum_{i=1}^{|X|} \big[ y_i \log \hat{y}_i + (1 - y_i) \log (1 - \hat{y}_i) \big]$$

Here, $X$ is the set of input SDP sentences in the training dataset and $Y$ is the corresponding set of labels; $\hat{y}_i$ denotes the output of the MLP layer. A gradient-based optimizer is used to minimize the cost function described in Eq-9. We use Adam adam , an adaptive learning-rate optimizer, to update the parameters throughout training. To avoid over-fitting, dropout srivastava2014dropout is used with a dropout rate of .
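The binary cross-entropy objective can be sketched numerically; this is a plain NumPy version of the per-instance loss, without the Adam parameter updates.

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy; predictions are clipped away from 0/1
    so the logarithms stay finite."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Three instances with confident, mostly-correct predictions -> small loss
loss = binary_cross_entropy(np.array([1, 0, 1]), np.array([0.9, 0.1, 0.8]))
assert loss < 0.2
```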
The hyper-parameter values were determined through preliminary experiments by evaluating the model performance with 10-fold cross-validation. The proposed model described in Section-3 is implemented using Keras (https://keras.io/) with Tensorflow as the backend machine learning library. We tune the various hyper-parameters of the LSTM architecture, including the number of LSTM units, dropout ratio, number of epochs and different optimization algorithms, for both datasets. We obtain the best results for both the AiMed and BioInfer datasets on a single set of optimized network hyper-parameters, which reflects the generality of our optimal hyper-parameter selection over two completely different datasets. Table-4.3 provides the details of the optimal hyper-parameter settings using 10-fold cross-validation experiments.
|Activation Function||F-Score (AiMed)||F-Score (BioInfer)|
4.4 Analysis of Hyper-parameter Settings
We set up all the experiments by varying the hyper-parameter values and analyzing the behavior of our model. For the AiMed dataset, we observed that the addition of LSTM units improves the model performance up to a certain point, after which performance decreases gradually; we determine the optimal value via cross-validation experiments. It was observed that
a deep MLP layer helps to improve the overall performance of the model compared to a shallow MLP layer, as shown in Figure 3. However, this improvement depends on the size of the output layer: performance increases initially from to and then starts decreasing when the size of the output layer is increased to . We also notice that stacking another MLP layer makes our model more complex, thus reducing the performance in a cross-validation setting.
For the BioInfer dataset, we observe quite a similar trend in performance with the addition of LSTM units, the size of the MLP output layer and the stacking of another MLP layer. The optimal values are reported in Table-4.3. We also analyze the performance of our model with respect to the number of training epochs on both datasets. On the AiMed dataset, the F1-score initially shows a shortfall as the number of epochs increases from to , then shows regular growth as it increases from to , and finally dips on further increasing the number of epochs to and . This can be attributed to the fact that training for a very large number of epochs over-fits the model, so the cross-validation accuracy decreases. There is also an initial decline in the F1-score on the BioInfer dataset, followed by a steady increase with the number of epochs. We achieve the optimum performance with the same number of epochs ( ) for both datasets. To compare the performance of ReLU with Sigmoid, we conducted experiments with both activation functions on both datasets; the sigmoid function was found to be superior to ReLU. The results are reported in Table-5.
4.5 Evaluation on Benchmark Datasets
In recent years, different kernel-based techniques and SVM-based models have been adopted as baselines against deep learning CNN-based models for the PPI task, and deep learning based models have been shown to outperform feature-based models kim:2014:EMNLP2014 ; choi2016extraction . As such, in order to make an effective comparison with our proposed approach, we design two strong baselines based on neural network architectures as follows:
Baseline 1: The first baseline model is constructed by training a multi-layer perceptron on the features obtained from the embedding layer defined in Subsection-3.3. The sentence embedding is generated by concatenating every PoS- and position-augmented word embedding of the SDP. Thereafter, this representation is fed into the MLP layer described in Subsection-3.4.3.
Baseline 2: Our second baseline is based on a more advanced sentence encoding technique, the RNN. The SDP sentence encoding is generated as follows:

$$h_t = \sigma(W x_t + U h_{t-1} + b)$$

where $\sigma$ is the sigmoid function, $h_t$ denotes the hidden representation of the $t$-th word in the SDP sentence, and $W$, $U$ and $b$ are the network parameters. Similar to Baseline 1, an MLP layer is used to classify an SDP sentence into one of the two classes, viz. ‘interacting pair’ and ‘non-interacting pair’.
We perform 10-fold cross-validation on both datasets. With no official development set available, cross-validation seems to be the most reliable method of evaluating our proposed model. To evaluate the performance, we use the standard recall, precision and F1-score. A detailed comparative analysis of our proposed model (sdpLSTM) against these baselines and state-of-the-art systems is reported in Table-6. The obtained results clearly show the effectiveness of our proposed sdpLSTM model over the other models exploring neural network architectures as well as conventional kernel or SDP based machine learning models. The optimum performance is achieved with epochs for both datasets, as depicted in Figure-4.4. Statistical significance tests verify that the improvements over both baselines are statistically significant (p-value ). Our proposed model obtains significant F1-score improvements of and points over the two baselines for the AiMed dataset, respectively; on the BioInfer dataset, it shows F1-score improvements of and points over these two baselines, respectively.
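The precision, recall and F1 computation used in the evaluation can be sketched as follows; the gold/prediction arrays are toy values, not actual cross-validation output.

```python
def prf(gold, pred):
    """Precision, recall and F1 for the positive (interacting) class."""
    tp = sum(g == p == 1 for g, p in zip(gold, pred))
    fp = sum(g == 0 and p == 1 for g, p in zip(gold, pred))
    fn = sum(g == 1 and p == 0 for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy fold: 3 gold-interacting pairs, one missed, one false alarm
p, r, f = prf([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

In a 10-fold setting these counts would be accumulated (or the scores averaged) over the folds.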
5.1 Comparative Analysis with Existing Methods
In order to perform a comparative analysis with existing approaches, we choose recent approaches exploiting neural network models, as well as approaches utilizing SVM-based kernel methods and word embedding features, as shown in Table-6. We observe that sdpLSTM performs significantly better than all the state-of-the-art techniques on both datasets. From this, we conclude that sdpLSTM is more powerful in extracting interacting protein pairs than the CNN based architectures developed in hua2016shortest and choi2016extraction . We further make the interesting observation that incorporating the latent features into the neural network based architecture improves the performance of the system.
Our proposed model attains an improvement of F-score points (c.f. Table 8) over the model proposed in choi2016extraction for the AiMed dataset. However, it should be noted that the DCNN model made use of a significant number (in total ) of domain-dependent lexical, syntactic and semantic level features. In contrast, our model is more generic in the sense that we use only the PoS and position features. We further re-implemented the DCNN system and evaluated it on both datasets; the evaluation (c.f. Table 8) shows that our proposed model attains better performance on both. We also re-implemented the system reported in li2015approach to obtain its precision and recall values. The observed improvements over the existing systems are statistically significant (p-value ).
|Model||Method||P (AiMed)||R (AiMed)||F (AiMed)||P (BioInfer)||R (BioInfer)||F (BioInfer)|
|Baseline 1||MLP (SDP+Feature Embedding)||59.73||75.93||66.46||68.56||72.05||70.22|
|Baseline 2||RNN (SDP+Feature Embedding)||66.23||74.72||70.22||71.89||74.59||73.21|
|Proposed Model||sdpLSTM (SDP+Feature Embedding)||91.10||82.2||86.45||72.40||83.10||77.35|
|qian2012tree||Single kernel+ Multiple Parser+SVM||59.10||57.60||58.10||63.61||61.24||62.40|
|li2015approach||Multiple kernel+ Word Embedding+ SVM||-||-||69.70||-||-||74.0|
|li2015approach||Multiple kernel+ Word Embedding+ SVM||67.18||69.35||68.25||72.33||74.94||73.61|
|choi2010simplicity||Tuned tree kernels +SVM||72.80||62.10||67.00||74.5||70.9||72.60|
5.2 Effects of Feature Combination
In this section, we analyze the significance of each feature by performing feature augmentation (including feature one by one) as shown in Table-7. We begin by examining only SDP embedding. It can be observed that SDP based embedding alone shows a remarkable performance of and F1-Score on AiMed and BioInfer dataset, respectively.
This clearly shows the significance of SDP-based embedding in identifying interacting protein pairs. However, we observe that the inclusion of PoS and position embeddings does not have any positive impact on the AiMed dataset.
In fact, the F1-score drops by points when the PoS feature is added and by points when the position embedding feature is included. This might be due to data sparseness arising from the lack of training data. In the case of the BioInfer dataset, PoS tagging is comparatively less informative, but still boosts the F1-score by points. The inclusion of position embedding, however, shows only a very minor improvement of F-score points.
The reason is that adding the position feature on top of the PoS feature helps because we then also have the PoS tag information (which is NNP) of the closest potential entity.
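To make the feature combination concrete, the per-token input to the LSTM can be formed by concatenating the SDP word embedding with the PoS and position embeddings. The following is a minimal sketch; the table sizes, dimensions and randomly initialized lookup tables are illustrative, not the settings used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding tables (sizes are illustrative only)
WORD_DIM, POS_DIM, DIST_DIM = 8, 3, 2
word_emb = rng.normal(size=(100, WORD_DIM))  # word vocabulary
pos_emb = rng.normal(size=(20, POS_DIM))     # PoS tag set
dist_emb = rng.normal(size=(50, DIST_DIM))   # bucketed distances to entities


def build_input(word_ids, pos_ids, dist_ids):
    """Concatenate word, PoS and position embeddings per SDP token,
    yielding the feature sequence fed to the LSTM."""
    return np.concatenate(
        [word_emb[word_ids], pos_emb[pos_ids], dist_emb[dist_ids]], axis=1)


x = build_input([4, 17, 9], [1, 5, 2], [3, 0, 7])
print(x.shape)  # (3, 13): 3 SDP tokens, 8 + 3 + 2 features per token
```

Dropping a feature from the concatenation corresponds to one row of the ablation in Table-7.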
We find that the improvements are not simply due to concatenating the features with the SDP embedding; rather, these information sources are complementary to each other in some linguistic aspects. We closely investigate the outputs produced by our system on the AiMed dataset and summarize the following observations:
PoS distribution: Protein names are mainly noun phrases. For the AiMed dataset, we observed that multi-word proteins were often not properly tagged as noun phrases. This introduced errors that propagated when we used PoS alone as a feature for the LSTM model.
Presence of interaction-bearing words: The presence of interaction-bearing words (inhibit, regulated, interaction, etc.) provides an important clue for identifying protein interactions. When the system takes the SDP as input, we observe that in some cases the PoS tagger is unable to tag the interaction words as verbs. A quantitative analysis showed that the PoS tagger could not tag verb phrases correctly in SDP sentences out of a total of sentences. This could be one of the reasons that the system performance is only comparable when we use the PoS information alone as a feature.
Position feature: The position feature helps capture the most important words occurring in the vicinity of the protein words. We observe that the input SDP sentences of incorrectly classified instances were longer than those of correctly classified ones. Another observation was that the misclassified instances often contained multiple protein entities.
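A minimal sketch of how such a position feature can be computed, i.e. the relative distance of each token to the two candidate protein mentions (the exact clipping and bucketing scheme used before the embedding lookup is omitted; the names are illustrative):

```python
def position_features(tokens, prot1_idx, prot2_idx):
    """Relative distance of every token to the two candidate proteins.
    Distances far outside a fixed window could be clipped before the
    position-embedding lookup."""
    return [(i - prot1_idx, i - prot2_idx) for i in range(len(tokens))]


sent = ["ProtId1", "binds", "to", "ProtId2"]
print(position_features(sent, 0, 3))
# [(0, -3), (1, -2), (2, -1), (3, 0)]
```

Tokens sitting between the two entities get small distances of opposite sign, which is what lets the model focus on words in the entities' vicinity.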
We further investigate the reason for the decrease in recall on both datasets caused by adding position embedding to the SDP, as shown in Table-7. Our analysis revealed that the position embedding feature was unable to capture implicit PPI information occurring beyond the window size. For example: “In cells lacking the myosin-II heavy chain, the bundles, which were induced by an over-expression of cofilin, shortened and became straight following hyperosmotic stress, forming a polygonal structure.” In this example, there is no explicit mention of an interaction verb such as ‘bind’ or ‘interact’, which makes it difficult to capture the relevant words via position embeddings. We identified a total of and cases where our system fails to capture the implicit form of protein interaction and incorrectly predicts a non-interacting pair on the BioInfer and AiMed datasets, respectively. Interestingly, the combination of all the features modestly improves the performance of the system by and F-score points on the BioInfer and AiMed datasets, respectively. We also observe that when the model is evaluated at fewer epochs, the performance improvement from the additional features is -, while increasing the number of epochs diminishes the impact of the additional features.
5.3 Error Analysis
In this subsection, we analyze the different sources of errors that lead to misclassification. We closely study the false positive and false negative instances and arrive at the following observations:
(1) When the Enju dependency parser fails to capture the dependencies, the error propagates to the BFS algorithm, which then does not return a valid SDP. For example, for the sentence “The ProtId1 or ProtId2 family is targets of cytokines and other agents that induce HIV-1 gene expression”, the SDP outputs are “ProtId1 and ProtId2” and “ProtId1 family ProtId2”. It should be noted that this is a negative example and our SDP fails to capture the context. This hampers our accuracy significantly.
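For reference, the SDP is obtained by running breadth-first search between the two protein nodes over the undirected dependency graph, so a fragmented parse yields no connected path at all. A minimal sketch, with an illustrative edge-list graph representation rather than the Enju parser's actual output format:

```python
from collections import deque


def shortest_dependency_path(edges, src, dst):
    """BFS over an undirected dependency graph; returns the node
    sequence from src to dst, or None when the parse leaves the two
    entities disconnected."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    parent, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:  # walk parents back to src
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None  # parser failed to connect the two entities


edges = [("ProtId1", "binds"), ("binds", "ProtId2"), ("binds", "strongly")]
print(shortest_dependency_path(edges, "ProtId1", "ProtId2"))
# ['ProtId1', 'binds', 'ProtId2']
```

When `None` is returned, the instance produces no SDP input and the error described above propagates to the classifier.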
(2) Interaction-bearing words such as bind, interact and inhibit carry important information for identifying interacting protein pairs. However, when such words appear in a negative context, the system fails to classify the pair as non-interacting. For example, in “GSK-3 inhibitors suppressed Sema4D-induced growth”, inhibit does not occur in the context of a PPI.
6 Conclusion and Future Works
In this article, we have proposed an efficient deep learning based model for PPI extraction. The model makes use of SDP embeddings as features. In addition, it exploits latent PoS and position embedding features to complement the SDP embedding.
The main contribution of the proposed methodology is the systematic integration of word embeddings learned from the biomedical literature and the SDP between protein pairs into the deep sdpLSTM architecture. The biomedical word embedding was observed to capture semantic information more effectively than internal embedding. By employing the SDP and LSTM, the proposed approach makes full use of structural information.
Our comprehensive experimental results on two benchmark biomedical corpora, AiMed and BioInfer, demonstrate that (i) the SDP-based word embedding input effectively describes protein-protein relationships in the PPI task; (ii) the LSTM architecture is useful for capturing long-range contextual and structural information; and (iii) high-quality pretrained word embeddings are important for the PPI task. The obtained results show the superiority of sdpLSTM over complex state-of-the-art approaches leveraging CNNs and several higher-level features, with significant F1-score improvements of and points on the AiMed and BioInfer datasets, respectively.
In the future, we would like to validate our approach on other relation extraction tasks, such as drug-drug interaction and chemical-protein interaction, by overcoming the possible errors. The multi-layer Bi-LSTM model has shown tremendous success in machine translation; one interesting direction of future work is to develop a multi-layer Bi-LSTM model for relation extraction. Further, owing to the capability of the attention mechanism, we plan to experiment with attentive pooling.
- (1) A. Zanzoni, L. Montecchi-Palazzi, M. Quondam, G. Ausiello, M. Helmer-Citterich, G. Cesareni, Mint: a molecular interaction database, FEBS letters 513 (1) (2002) 135–140.
- (2) G. D. Bader, D. Betel, C. W. Hogue, Bind: the biomolecular interaction network database, Nucleic acids research 31 (1) (2003) 248–250.
- (3) A. Bairoch, R. Apweiler, The swiss-prot protein sequence database and its supplement trembl in 2000, Nucleic acids research 28 (1) (2000) 45–48.
- (4) L. Hunter, K. B. Cohen, Biomedical language processing: what’s beyond pubmed?, Molecular cell 21 (5) (2006) 589–594.
- (5) Z. Lu, Pubmed and beyond: a survey of web tools for searching biomedical literature, Database 2011.
- (6) R. Khare, R. Leaman, Z. Lu, Accessing biomedical literature in the current information landscape, Biomedical Literature Mining (2014) 11–31.
- (7) R. C. Bunescu, R. J. Mooney, A shortest path dependency kernel for relation extraction, in: Proceedings of the conference on human language technology and empirical methods in natural language processing, Association for Computational Linguistics, 2005, pp. 724–731.
- (8) A. Airola, S. Pyysalo, J. Björne, T. Pahikkala, F. Ginter, T. Salakoski, A graph kernel for protein-protein interaction extraction, in: Proceedings of the workshop on current trends in biomedical natural language processing, Association for Computational Linguistics, 2008, pp. 1–9.
- (9) B. Settles, Abner: an open source tool for automatically tagging genes, proteins and other entity names in text, Bioinformatics 21 (14) (2005) 3191–3192.
- (10) R. Sætre, K. Yoshida, A. Yakushiji, Y. Miyao, Y. Matsubayashi, T. Ohta, Akane system: protein-protein interaction pairs in biocreative2 challenge, ppi-ips subtask, in: Proceedings of the second biocreative challenge workshop, Vol. 209, Madrid, 2007, p. 212.
- (11) G. Zhou, J. Zhang, J. Su, D. Shen, C. Tan, Recognizing names in biomedical texts: a machine learning approach, Bioinformatics 20 (7) (2004) 1178–1190.
- (12) S. Yadav, A. Ekbal, S. Saha, P. Bhattacharyya, Entity extraction in biomedical corpora: An approach to evaluate word embedding features with pso based feature selection, in: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, Vol. 1, 2017, pp. 1159–1170.
- (13) S. Yadav, A. Ekbal, S. Saha, P. Bhattacharyya, A. Sheth, Multi-task learning framework for mining crowd intelligence towards clinical treatment, in: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), Vol. 2, 2018, pp. 271–277.
- (14) S. Yadav, A. Ekbal, S. Saha, P. Bhattacharyya, Deep learning architecture for patient data de-identification in clinical records, in: Proceedings of the Clinical Natural Language Processing Workshop (ClinicalNLP), 2016, pp. 32–41.
- (15) A. Ekbal, S. Saha, P. Bhattacharyya, et al., A deep learning architecture for protein-protein interaction article identification, in: Pattern Recognition (ICPR), 2016 23rd International Conference on, IEEE, 2016, pp. 3128–3133.
- (16) A. Kumar, A. Ekbal, S. Saha, P. Bhattacharyya, et al., A recurrent neural network architecture for de-identifying clinical records, in: Proceedings of the 13th International Conference on Natural Language Processing, 2016, pp. 188–197.
- (17) L. Hua, C. Quan, A shortest dependency path based convolutional neural network for protein-protein relation extraction, BioMed Research International 2016.
- (18) S.-P. Choi, Extraction of protein–protein interactions (ppis) from the literature by deep convolutional neural networks with various feature embeddings, Journal of Information Science (2016) 0165551516673485.
- (19) M. Miwa, M. Bansal, End-to-end relation extraction using lstms on sequences and tree structures, arXiv preprint arXiv:1601.00770.
- (20) Y. Liu, F. Wei, S. Li, H. Ji, M. Zhou, H. Wang, A dependency-based neural network for relation classification, arXiv preprint arXiv:1507.04646.
- (21) A. Graves, Generating sequences with recurrent neural networks, CoRR abs/1308.0850.
- (22) Y. LeCun, Y. Bengio, et al., Convolutional networks for images, speech, and time series, The handbook of brain theory and neural networks 3361 (10) (1995) 1995.
- (23) Y. Peng, Z. Lu, Deep learning for extracting protein-protein interactions from biomedical literature, arXiv preprint arXiv:1706.01556.
- (24) C. Blaschke, M. A. Andrade, C. A. Ouzounis, A. Valencia, Automatic extraction of biological information from scientific text: protein-protein interactions., in: Ismb, Vol. 7, 1999, pp. 60–67.
- (25) T. Ono, H. Hishigaki, A. Tanigami, T. Takagi, Automated extraction of information on protein–protein interactions from the biological literature, Bioinformatics 17 (2) (2001) 155–161.
- (26) M. Huang, X. Zhu, Y. Hao, D. G. Payan, K. Qu, M. Li, Discovering patterns to extract protein–protein interactions from full texts, Bioinformatics 20 (18) (2004) 3604–3612.
- (27) R. Bunescu, R. Ge, R. J. Kate, E. M. Marcotte, R. J. Mooney, A. K. Ramani, Y. W. Wong, Comparative experiments on learning information extractors for proteins and their interactions, Artificial intelligence in medicine 33 (2) (2005) 139–155.
- (28) C. Giuliano, A. Lavelli, L. Romano, Exploiting shallow linguistic information for relation extraction from biomedical literature., in: EACL, Vol. 18, Citeseer, 2006, pp. 401–408.
- (29) G. Erkan, A. Özgür, D. R. Radev, Semi-supervised classification for extracting protein interaction sentences using dependency parsing., in: EMNLP-CoNLL, Vol. 7, 2007, pp. 228–237.
- (30) Y. Miyao, K. Sagae, R. Sætre, T. Matsuzaki, J. Tsujii, Evaluating contributions of natural language parsers to protein–protein interaction extraction, Bioinformatics 25 (3) (2009) 394–400.
- (31) S. Garg, A. Galstyan, U. Hermjakob, D. Marcu, Extracting biomolecular interactions using semantic parsing of biomedical text, in: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, AAAI Press, 2016, pp. 2718–2726.
- (32) J. M. Temkin, M. R. Gilder, Extraction of protein interaction information from unstructured text using a context-free grammar, Bioinformatics 19 (16) (2003) 2046–2053.
- (33) N. Daraselia, A. Yuryev, S. Egorov, S. Novichkova, A. Nikitin, I. Mazo, Extracting human protein interactions from medline using a full-sentence parser, Bioinformatics 20 (5) (2004) 604–611.
- (34) A. Yakushiji, Y. Miyao, Y. Tateisi, J. Tsujii, Biomedical information extraction with predicate-argument structure patterns, in: Proceedings of the first International Symposium on Semantic Mining in Biomedicine (SMBM), Hinxton, Cambridgeshire, UK, April, 2005.
- (35) A. Airola, S. Pyysalo, J. Björne, T. Pahikkala, F. Ginter, T. Salakoski, All-paths graph kernel for protein-protein interaction extraction with evaluation of cross-corpus learning, BMC bioinformatics 9 (11) (2008) S2.
- (36) R. Sætre, K. Sagae, J. Tsujii, Syntactic features for protein-protein interaction extraction., LBM (Short Papers) 319.
- (37) C. Ma, Y. Zhang, M. Zhang, Protein-protein interaction extraction considering both modal verb phrases and appositive dependency features, in: Proceedings of the 24th International Conference on World Wide Web, WWW ’15 Companion, ACM, New York, NY, USA, 2015, pp. 655–660.
- (38) M. Zhang, J. Zhang, J. Su, G. Zhou, A composite kernel to extract relations between entities with both flat and structured features, in: Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, ACL-44, Association for Computational Linguistics, Stroudsburg, PA, USA, 2006, pp. 825–832.
- (39) A. Culotta, J. Sorensen, Dependency tree kernels for relation extraction, in: Proceedings of the 42nd annual meeting on association for computational linguistics, Association for Computational Linguistics, 2004, p. 423.
- (40) C. Y. Lee, An algorithm for path connections and its applications, IRE transactions on electronic computers (3) (1961) 346–365.
- (41) P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, P.-A. Manzagol, Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion, Journal of Machine Learning Research 11 (Dec) (2010) 3371–3408.
- (42) M. D. Zeiler, Adadelta: An adaptive learning rate method, CoRR abs/1212.5701.
- (43) B. Tang, H. Cao, X. Wang, Q. Chen, H. Xu, Evaluating word representation features in biomedical named entity recognition tasks, BioMed research international 2014.
- (44) S. Moen, T. S. S. Ananiadou, Distributional semantics resources for biomedical text processing (2013).
- (45) R. Pascanu, T. Mikolov, Y. Bengio, Understanding the exploding gradient problem, CoRR abs/1211.5063.
- (46) S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural computation 9 (8) (1997) 1735–1780.
- (47) D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, CoRR abs/1412.6980.
- (48) N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting., Journal of Machine Learning Research 15 (1) (2014) 1929–1958.
- (49) Y. Kim, Convolutional neural networks for sentence classification, in: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Doha, Qatar, 2014, pp. 1746–1751.
- (50) L. Li, R. Guo, Z. Jiang, D. Huang, An approach to improve kernel-based protein–protein interaction extraction by learning from large-scale network data, Methods 83 (2015) 44–50.
- (51) L. Qian, G. Zhou, Tree kernel-based protein–protein interaction extraction from biomedical literature, Journal of biomedical informatics 45 (3) (2012) 535–543.
- (52) Z. Zhao, Z. Yang, H. Lin, J. Wang, S. Gao, A protein-protein interaction extraction approach based on deep neural network, International Journal of Data Mining and Bioinformatics 15 (2) (2016) 145–164.
- (53) D. Tikk, P. Thomas, P. Palaga, J. Hakenberg, U. Leser, A comprehensive benchmark of kernel methods to extract protein–protein interactions from literature, PLoS computational biology 6 (7) (2010) e1000837.
- (54) S.-P. Choi, S.-H. Myaeng, Simplicity is better: revisiting single kernel ppi extraction, in: Proceedings of the 23rd International Conference on Computational Linguistics, Association for Computational Linguistics, 2010, pp. 206–214.