With the explosion of text in the biomedical literature, a wealth of valuable knowledge hides in the text. Applying natural language processing (NLP) techniques such as relation extraction can help knowledge base curation, an urgent problem given that manual curation will always lag behind the fast growth of the literature. Deep learning models have great power to learn representations of the training data and have achieved impressive success in many domains, including natural language processing. Even though neural network models have achieved state-of-the-art performance on many problems, in some cases the performance is limited by the size of the training set. The large number of parameters in deep neural networks typically requires a large labeled dataset for training, but most new problems only have a small labeled dataset available. The most direct way of acquiring a large labeled dataset is to have humans label the training data, but this is often not feasible since the labeling process can only be done by domain experts. In this work, we investigate the use of adversarial learning to alleviate the problem of insufficient training data, specifically for relation extraction.
The adversarial method is a technique of designing malicious inputs to fool machine learning models. These malicious inputs (often called adversarial examples) are usually obtained by adding a small perturbation to the original inputs, and model training that uses adversarial examples falls into the category of adversarial training (AT). Adversarial training in the supervised learning scenario can strengthen the robustness and generality of the model since it involves both benign and malicious data in the training process. During training, adversarial learning adds an extra loss on the adversarial examples, using the same label as the corresponding example from the training data.
A variation of adversarial training called virtual adversarial training (VAT) was introduced to involve unlabeled data in the model training process. Rather than generating adversarial examples only for instances of the labeled dataset, adversarial examples are also generated for instances from a potentially larger unlabeled dataset. Because these instances are unlabeled, an alternative loss function is defined. Both adversarial and virtual adversarial training can be seen as regularization methods, as an extra loss is added to the original loss of the model as a regularization term, which will be discussed in detail later. Virtual adversarial training can also be seen as a semi-supervised learning method since it involves unlabeled data in model training.
The adversarial training technique was first introduced in the computer vision field. In recent years, adversarial training has also been applied to NLP tasks such as text classification and relation extraction, but very limited work has been done. For example, one prior work applied adversarial training only on noisy data from the MIML framework, and explored perturbation only on a specific part of the input features. In addition, virtual adversarial training has so far been applied only to the text classification task in NLP, not to relation extraction.
In this paper, we apply adversarial training to relation extraction in the standard setting. To the best of our knowledge, this is the first work to introduce virtual adversarial training to relation extraction tasks. To verify the effectiveness of our method, we test it on two widely studied relation extraction tasks in the BioNLP domain, the protein-protein interaction (PPI) task and the protein subcellular localization (PLOC) task, using three well-known benchmark datasets.
In summary, we investigate adversarial training in the standard setting of relation extraction as well as apply virtual adversarial training to relation extraction. We also conduct additional experiments that might shed light on adversarial and virtual adversarial training: a) involving multiple adversarial examples during training; b) adding perturbation on all input features of the model; c) exploring the impact of unlabeled data size. In addition, we note that we obtain leading results on two benchmark datasets after applying adversarial training.
II Related Work
Relation extraction is typically seen as a classification task, and current state-of-the-art systems for relation extraction are usually based on deep neural networks. Among deep learning models, two highly relevant network architectures are convolutional neural networks (CNN) and recurrent neural networks (RNN). Both CNN and RNN models have achieved notable results on relation extraction tasks [10, 11, 12, 13, 14]. Recently, many variants have also been proposed to improve performance by capturing more relation expression information. The piecewise CNN (PCNN) applies a piecewise max pooling step after the convolution operation to extract more structural features between the entities. One multi-channel CNN model adds an extra channel to capture dependency information from the sentence's syntactic structure, while another multi-channel CNN model integrates different versions of word embeddings to better represent the input words. Hua et al. build a deep learning model based on the shortest dependency path (SDP), which is considered to contain the most important information of the relation expression. A residual CNN model has also been proposed and achieves performance comparable to other deep learning models on the protein-protein interaction task. In this work, we experiment with the PCNN model to illustrate the effectiveness of our method.
Adversarial training was proposed by Goodfellow et al. to enable a model to correctly classify both original and adversarial examples on the image classification task. They utilize adversarial examples to calculate an extra loss, which is added to the original loss function to regularize the model. Before that, several machine learning methods, including deep neural networks, were found to be vulnerable to adversarial examples, which are generated by adding a small adversarial perturbation to the input. This vulnerability indicates that the input-output mappings learned by deep neural networks are fairly discontinuous: the model will misclassify an example after a small perturbation is added. In the NLP domain, Xie et al. derive a connection between input noising (random perturbation) in neural network language models and smoothing in n-gram models. Miyato et al. [4, 3] introduce adversarial training into text classification by applying the perturbation to the word embeddings. Wu et al. apply adversarial training to relation extraction using distantly supervised data from the multi-instance multi-label framework, perturbing only the word embedding part of the input to improve the robustness of the model. While that work creates adversarial examples by perturbing word embeddings, we extend it by investigating perturbation of other input features as well. Additionally, we consider the impact of adding multiple adversarial examples at a time. Adversarial training is sometimes confused with generative adversarial networks (GAN), a mechanism for training a generative model to simulate the distribution of the original data.
In order to involve unlabeled data, Miyato et al. [4, 3] extend adversarial training to virtual adversarial training by adding a local distributional smoothness regularization term to the model loss function. This method utilizes both labeled and unlabeled data during training, so it can also be seen as a semi-supervised method. Semi-supervised learning has recently become more popular since vast quantities of unlabeled data can be collected at low cost. The most common way to exploit semi-supervised learning is to acquire labels for unlabeled data and involve them in the model training process; the self-training scheme and graph-based methods [26, 27] belong to this kind of approach. Recently, several novel methods have been proposed to involve unlabeled data for different tasks. Kingma et al. demonstrate that deep generative models and approximate Bayesian inference can improve image classification. Graph neural networks have also proven to be an effective semi-supervised method for classification problems. The virtual adversarial training method differs from other semi-supervised methods by employing the adversarial training technique to smooth the model and hence make it perform better.
In this section, we start with the definition of relation extraction and a set of notations. Then we describe the architecture of the deep neural network model and the input representation of the model. Next, we discuss our proposal to use multiple adversarial examples during training. The application of virtual adversarial training is introduced at the end of this section.
In this work, we detect whether a relation is expressed in a sentence. As is common, relation extraction is reduced to a binary classification problem given a sentence and the entity mentions. Hence, for a relation $r$, our model predicts the probability $p(r \mid s, e_1, e_2)$ based on two entities $e_1$ and $e_2$ within a sentence $s = w_1 w_2 \cdots w_n$, where the $w_i$ represent the words of the sentence.
III-A PCNN Model Architecture
As mentioned before, our investigations involve the use of the piecewise CNN (PCNN). Like regular CNN models for classification problems, it contains four different kinds of layers: a) convolution layer(s); b) pooling layer(s); c) fully connected layer(s); and d) a softmax layer. The pooling layer(s) summarize the local features detected by the preceding convolution layer(s), and the summarized information is then used by the fully connected layer(s) and the softmax layer to score each category.
The PCNN model differs from the standard CNN model in its pooling operation. Pooling in the PCNN model is applied piecewise based on the positions of the entities in the sentence, thus capturing more structural information. Specifically, a sentence is divided into three parts using the two entities as segment points, and pooling is applied to these three parts separately. Take the sentence "We demonstrate that RBPROTEIN binds directly to hTAFII250PROTEIN in vitro and in vivo" as an example: we pool over three parts, "We demonstrate that RBPROTEIN", "binds directly to hTAFII250PROTEIN", and "in vitro and in vivo". Finally, we concatenate the three outputs obtained from the three separate pooling operations as the final pooling output.
Fig. 1 shows the structure of the piecewise CNN model; different colors illustrate the three parts of the pooling operation.
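To make the piecewise pooling concrete, the following sketch (not the authors' code; the function and variable names are ours) applies segment-wise max pooling to a convolution feature map, assuming both entities appear before the last token:

```python
import numpy as np

def piecewise_max_pool(conv_out, e1_pos, e2_pos):
    """Piecewise max pooling in the PCNN style (illustrative sketch).

    conv_out: (seq_len, num_filters) feature map from the convolution layer.
    e1_pos, e2_pos: token indices of the two entities (e1_pos < e2_pos,
    and e2_pos is assumed not to be the last token).
    Returns a (3 * num_filters,) vector: the max over each of the three
    segments, concatenated.
    """
    pieces = [
        conv_out[: e1_pos + 1],             # segment up to entity 1
        conv_out[e1_pos + 1 : e2_pos + 1],  # segment between the entities
        conv_out[e2_pos + 1 :],             # segment after entity 2
    ]
    return np.concatenate([p.max(axis=0) for p in pieces])
```

A standard CNN would instead take a single max over the whole sequence, yielding one vector of size num_filters; the piecewise version preserves which side of each entity a strong feature occurred on.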
III-B Word Representation
As discussed below, for each word in a sentence, in addition to the word embedding vector, we concatenate the POS tag, entity type, entity positional information, and incoming dependency relation information to form its vector representation.
Word embeddings are usually learned on a large corpus to represent words better. Hence, our choice in this paper is word embeddings pre-trained on PubMed using the skip-gram model; the dimension of the word embedding vector is 200. We extract the POS tag and incoming dependency information from the parse results of the Bllip parser and convert them to unique 10-dimensional vectors. For entity positional information, we calculate the relative distance to the two entities (entity 1 and entity 2). Specifically, we count the words between the target word and each entity, and the distance is marked as negative if the word appears to the left of the entity. Finally, we map each distance to a unique 5-dimensional vector. As for entity type, each word in the sentence falls into one of four categories: Entity1, Entity2, Entity, or O, where Entity1 and Entity2 are the two interacting entities, other entities in the sentence are marked as Entity, and O stands for all other words. We use a one-hot vector to represent this feature.
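As a sketch of how these features combine into one per-word vector (the dimensions follow the text: 200 for the word embedding, 10 each for the POS and dependency vectors, 5 for each distance vector, and a 4-way one-hot entity type; the function name and layout are our own illustration):

```python
import numpy as np

# Feature dimensions as stated in the text.
WORD_DIM, TAG_DIM, DEP_DIM, DIST_DIM, TYPE_DIM = 200, 10, 10, 5, 4

def word_vector(word_emb, pos_vec, dep_vec, d1_vec, d2_vec, ent_type):
    """Concatenate the per-word features into one 234-dimensional vector.

    ent_type is one of 'Entity1', 'Entity2', 'Entity', 'O' and is encoded
    as a one-hot vector, as described in the text.
    """
    types = ['Entity1', 'Entity2', 'Entity', 'O']
    one_hot = np.eye(TYPE_DIM)[types.index(ent_type)]
    return np.concatenate([word_emb, pos_vec, dep_vec, d1_vec, d2_vec, one_hot])
```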
III-C Adversarial Training
The idea behind adversarial training is that similar data instances should have the same label, and deep neural network models should classify them in the same category. For each training instance, an adversarial example is generated by adding a small perturbation to the original instance. Because the perturbation is small, the new instance is considered similar and hence shares the label of the original instance.
Formally, let $D$ be the manually labeled dataset used in training and $\theta$ be the parameters of the current model. If an instance $x$ in $D$ has a label $y$, then we denote the generated adversarial example as $x + r_{adv}$, where

$$r_{adv} = \arg\max_{\|r\| \le \epsilon} L(x + r, y; \theta)$$

and the hyperparameter $\epsilon$ bounds the magnitude of the perturbation. In regular training, the loss on the entire labeled dataset is computed:

$$L(X, Y; \theta) = -\frac{1}{|D|} \sum_{(x, y) \in D} \log p(y \mid x; \theta)$$

In adversarial training, the loss is computed as follows:

$$L_{AT}(X, Y; \theta) = L(X, Y; \theta) + \lambda \, L(X + R_{adv}, Y; \theta)$$

where $X$ and $Y$ denote the sets of inputs and labels respectively and $\lambda$ is usually set to 1.
At each training step, the adversarial perturbation is calculated first based on the current model, and then the perturbed example (adversarial example) is fed into training, as shown in Fig. 2. An extra term is added to the original loss, as shown above, so adversarial training can also be seen as a regularization method from this perspective.
The optimization problem to calculate $r_{adv}$ (as defined above) is intractable for deep neural networks. Goodfellow et al. propose a linear (first-order Taylor) approximation to calculate $r_{adv}$:

$$r_{adv} = \epsilon \frac{g}{\|g\|}, \quad g = \nabla_x L(x, y; \hat{\theta}),$$

which is easy to compute through the backpropagation algorithm. We use this approximation method in this paper.
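This approximation amounts to scaling the input gradient to length $\epsilon$; a minimal sketch (function name ours, with the gradient assumed to come from one backpropagation pass):

```python
import numpy as np

def adversarial_perturbation(grad, epsilon):
    """Fast-gradient style perturbation r_adv = epsilon * g / ||g||.

    grad: gradient of the loss w.r.t. the input (any shape).
    """
    norm = np.linalg.norm(grad)
    if norm == 0.0:  # degenerate case: no ascent direction to follow
        return np.zeros_like(grad)
    return epsilon * grad / norm
```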
In adversarial training, the training data are augmented by the inclusion of adversarial examples. Since the human-labeled dataset itself cannot be enlarged, we can only increase the training data by including more adversarial examples. Thus, we propose to add multiple adversarial examples for each instance in the labeled dataset. In this work, we examine whether the addition of multiple adversarial examples can improve the generalization of the model.
To generate extra adversarial examples, we add even smaller random perturbations to the current adversarial example. Specifically, we generate a set of adversarial examples by randomly generating perturbations $r_k$ and adding them to the current adversarial example:

$$x_{adv}^{(k)} = x + r_{adv} + r_k, \quad k = 1, \dots, M - 1,$$

where the magnitude of $r_k$ is much smaller than that of $r_{adv}$, i.e., $\|r_k\| \le \epsilon'$ and $\epsilon' \ll \epsilon$.

In the case of multiple adversarial examples, the loss function of adversarial training is:

$$L_{AT}(X, Y; \theta) = L(X, Y; \theta) + \lambda \sum_{k=0}^{M-1} L(X + R_{adv}^{(k)}, Y; \theta)$$

where $R_{adv}^{(0)} = R_{adv}$ and $\lambda$ is set to 1 as before.
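The generation step above can be sketched as follows (an illustration, not the authors' code; here each random perturbation is drawn from a Gaussian and rescaled to a fixed small norm):

```python
import numpy as np

def extra_adversarial_examples(x, r_adv, m, eps_small, rng=None):
    """Generate m extra adversarial examples around x + r_adv by adding
    small random perturbations of norm eps_small (eps_small << epsilon)."""
    rng = np.random.default_rng(rng)
    examples = []
    for _ in range(m):
        r = rng.normal(size=x.shape)
        r = eps_small * r / np.linalg.norm(r)  # rescale to norm eps_small
        examples.append(x + r_adv + r)
    return examples
```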
III-D Virtual Adversarial Training
Semi-supervised learning is often applied when there is a small amount of labeled data but a large amount of raw (unlabeled) data is available. Note that in adversarial training, adversarial examples are generated only for instances in the labeled dataset. Miyato et al. propose a semi-supervised version of adversarial training called virtual adversarial training. We now introduce this method in the relation extraction scenario.
As before, let $x$ and $\theta$ represent the input and the parameters of the model, respectively. We do not assume knowledge of the label of $x$ ($x$ may be from the unlabeled set). As before, we generate a perturbation for $x$. However, since the label of $x$ is not known, we use $p(\cdot \mid x; \hat{\theta})$, the output distribution over predicted labels of the current model, to compute the model loss.
As both labeled and unlabeled data are included in this method, let $D_l$ denote the labeled dataset and $D_{ul}$ the set of unlabeled data. The loss on a single instance is now defined as:

$$L_{vadv}(x; \theta) = \mathrm{KL}\!\left(p(\cdot \mid x; \hat{\theta}) \,\big\|\, p(\cdot \mid x + r_{vadv}; \theta)\right)$$

where

$$r_{vadv} = \arg\max_{\|r\| \le \epsilon} \mathrm{KL}\!\left(p(\cdot \mid x; \hat{\theta}) \,\big\|\, p(\cdot \mid x + r; \hat{\theta})\right),$$

$\mathrm{KL}(p \| q)$ is the KL divergence of two distributions $p$ and $q$, $\hat{\theta}$ is the current parameter setting of the model, and $\epsilon$ constrains the perturbation magnitude. Hence, the loss function of virtual adversarial training is

$$L_{VAT}(\theta) = L(X, Y; \theta) + \lambda \sum_{x \in D_l \cup D_{ul}} L_{vadv}(x; \theta)$$

where $\lambda$ is also usually set to 1. Virtual adversarial training examples can of course also be generated for instances from the labeled set (by ignoring the label provided in the original dataset).
Next we discuss an approximation method to compute the virtual adversarial perturbation, overcoming the intractability of the maximization problem for $r_{vadv}$. For simplicity, we denote $\mathrm{KL}\!\left(p(\cdot \mid x; \hat{\theta}) \,\|\, p(\cdot \mid x + r; \hat{\theta})\right)$ by $D(r, x; \hat{\theta})$. We could utilize the same linear approximation as in adversarial training; however, $D(r, x; \hat{\theta})$ reaches its minimum value at $r = 0$, which means $\nabla_r D(r, x; \hat{\theta})|_{r=0} = 0$. Instead of the first-order Taylor approximation, Miyato et al. suggest the use of a second-order Taylor approximation:

$$D(r, x; \hat{\theta}) \approx \frac{1}{2} r^{\top} H r,$$

where $H = \nabla \nabla_r D(r, x; \hat{\theta})|_{r=0}$ is the Hessian matrix of $D$. Under this approximation, $r_{vadv}$ will be the dominant eigenvector $u$ of $H$ scaled to magnitude $\epsilon$, since the corresponding eigenvalue of $u$ is the dominant eigenvalue. However, it is computationally expensive to calculate the eigenvectors and eigenvalues of a matrix. Miyato et al. utilize the power iteration and finite difference methods to approximate $r_{vadv}$. Let $d$ be a random unit vector that is not perpendicular to the dominant eigenvector, and repeat the power method $K$ times as follows:

$$d \leftarrow \overline{H d} \approx \overline{\nabla_r D(r, x; \hat{\theta})\big|_{r = \xi d}},$$

where $\xi$ is a very small positive number and $\overline{\,\cdot\,}$ is the normalization operation (the $1/\xi$ factor cancels under normalization). Then $d$ will be a good approximation of the dominant eigenvector of $H$, i.e., $r_{vadv} \approx \epsilon d$.
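The power-iteration approximation can be sketched as follows (illustrative only; `grad_kl` stands in for the backpropagation pass that returns the gradient of the KL term with respect to $r$ at a given point):

```python
import numpy as np

def vat_perturbation(grad_kl, dim, epsilon, xi=1e-6, k=1, rng=None):
    """Approximate r_vadv by power iteration with finite differences.

    grad_kl(r): gradient of KL(p(.|x) || p(.|x+r)) w.r.t. r at the point r.
    One iteration (k=1) is often used in practice.
    """
    rng = np.random.default_rng(rng)
    d = rng.normal(size=dim)
    d /= np.linalg.norm(d)            # random unit start vector
    for _ in range(k):
        g = grad_kl(xi * d)           # finite-difference Hessian-vector product
        d = g / np.linalg.norm(g)     # normalize; the 1/xi factor cancels
    return epsilon * d
```

Because the gradient of the KL term vanishes at $r = 0$, each step effectively multiplies $d$ by the Hessian and renormalizes, converging to the dominant eigenvector.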
In this section, we design experiments to evaluate the adversarial and virtual adversarial training methods using human-labeled datasets from two tasks.
As discussed previously, we introduce multiple adversarial examples in adversarial training. Thus, the first set of experiments focuses on adversarial training and the effect of the number of adversarial examples. Specifically, we observe how the models perform when different numbers of adversarial examples are added during training. Since more features are added to the input of the deep neural network, we also explore the effect of adversarial perturbation not only on the word embedding but also on the extra input features.
Our next set of experiments applies virtual adversarial training to the relation extraction task. Since there is no theory to guide how much unlabeled data should be used in virtual adversarial training, we also consider the effect of the unlabeled dataset size. Specifically, we evaluate models built with different amounts of unlabeled data and look for an appropriate setting to guide the generation of unlabeled data.
Although we propose to use virtual adversarial training for relation extraction, we have not yet discussed the unlabeled data. Specifically, we are interested in whether the similarity of the unlabeled dataset to the labeled set impacts the results. While it is hard to quantify how similar the two datasets are, we investigate a simple situation in which the two sets are either identical or share no examples. In the former case, the unlabeled dataset contains the same elements as the labeled dataset, except that its instances are treated as unlabeled (by ignoring the labels). In discussing the results, we use VAT* for this case and VAT when the unlabeled dataset differs from the labeled one.
IV-A Labeled Datasets
AIMed and BioInfer are two widely used benchmark datasets for the PPI task, and LocText is a recently released human-labeled corpus for the PLOC task. We use them as our labeled datasets; Table I shows the statistics of these three datasets.
IV-B Generation of Unlabeled Data
In order to generate unlabeled examples for our relation extraction tasks, we need a text source and a method to recognize all entity names in the text. The literature referenced in the IntAct database is large enough to serve as our text source for PPI. For entity names, we utilize the end-to-end system GNormPlus to detect gene/protein names. For subcellular location names, we use location names from UniProt as a dictionary to match mentions in Medline text, which is the text source for the PLOC task. Once all entities in the text are recognized, we generate the unlabeled data by considering every possible combination of entity names within one sentence. In this way, we generate a large number of unlabeled examples for PPI and PLOC (see Table I), but we only use part of those examples in our experiments.
IV-C Experimental Setup
All our experiments are implemented in TensorFlow, and the input sentence length is set to 100 (longer sentences are pruned and shorter sentences are padded with zeros). For the PCNN model, we use 400 filters and a convolution window size of 3. The number of training epochs is set to 200 in all experiments. In the adversarial training experiments, we use a learning rate of 0.001 with a 0.95 decay rate and 1000 decay steps, a batch size of 128, and a bounded adversarial perturbation. For the semi-supervised learning experiments, a learning rate of 0.001, a batch size of 128 for labeled data, and batch sizes of 128, 256, and 384 for unlabeled data are used in training. For the adversarial perturbation in semi-supervised learning, we use a value of 2 for the word embedding and 0.01 for the remaining features. Since the purpose of this perturbation is to promote the smoothness of the model, a bigger value is used for the word embedding than in adversarial training.
The magnitude (or norm) of the word embeddings is another factor we have to consider, because it affects the choice of perturbation magnitude. In order to reduce the influence of the embedding norm, we normalize the pre-trained word embedding vectors in adversarial training using a simple trick:

$$w' = \frac{w}{v_{max}},$$

where $v_{max}$ is the maximum value over all pre-trained embedding vectors. In this way, we restrict the embedding norm to a relatively small range.
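One plausible reading of this trick, as a sketch (our interpretation; the exact normalization used by the authors may differ), divides every embedding entry by the largest absolute value found in the embedding matrix:

```python
import numpy as np

def normalize_embeddings(emb):
    """Scale all pre-trained embeddings by the maximum absolute entry,
    restricting every coordinate to [-1, 1]."""
    v_max = np.abs(emb).max()
    return emb / v_max
```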
V Results and Discussion
Table II reports Precision, Recall, and F score for each model on AIMed, BioInfer, and LocText. PCNN is the model in which we add POS tag, entity type, and dependency information to the word embedding and positional information in the input; PCNN+ADV_we is the model in which adversarial perturbation is added only on the word embedding; PCNN+ADV_multi is the model in which we use two adversarial examples in training; PCNN+VAT* is the model in which the unlabeled data are taken from the labeled data (ignoring the labels); PCNN+VAT is the model in which we involve unlabeled data of the same size as the labeled data.
We use precision, recall, and F score to measure model performance, and 10-fold cross-validation is performed in these experiments. To reduce the effect of random weight initialization in training deep neural networks, we run each experiment 3 times and take the average of the 3 runs as the final result.
V-A Adversarial Training
Our first experiment explores the effect of adversarial training when the input is perturbed. The first row of Table II provides the results for our baseline model, i.e., the basic PCNN with no perturbation of the input. Since the text input is determined by the words and their order, we first perturb only the word embedding part of the input. Row 'PCNN+ADV_we' of Table II gives the results, showing improvement over the baseline for all three datasets.
V-B One vs Multiple Adversarial Examples
As we discussed before, adding multiple adversarial examples might help the model generalize better, since this training technique covers more nearby points of the original training example. Also, because the current adversarial example is calculated by an approximation method, involving more examples might compensate for the approximation loss.
In our experiments, we add two or three (i.e., $M = 2$ or $M = 3$, where $M$ is the number of adversarial examples per instance from the previous section) adversarial examples for each instance of the labeled dataset. Note that rows 'PCNN' and 'PCNN+ADV_we' in Table II cover the cases $M = 0$ and $M = 1$, respectively. Row 'PCNN+ADV_multi' of Table II shows the results for $M = 2$, for which we obtain the best results; this finding holds for all three datasets. The results for $M = 3$ are slightly worse than those for $M = 2$ and about the same as for $M = 1$ (see Fig. 3a). We believe this indicates that an appropriate number of adversarial examples can help improve performance, but that increasing the number further introduces extra noise and negatively affects performance. More investigation is needed to shed light on how far the number of adversarial examples can usefully be increased.
V-C Perturbation of Input
As discussed previously, we first perturbed only the word embedding to modify the text input. Notice that the remaining features are all inferred from the words by taggers, named-entity recognition (NER) tools, and parsers. However, it is natural to consider perturbing not only the word embedding but also the other input features. Compared with perturbing only the word embedding (row 2 of Table II), the results drop when all features are perturbed: F scores of 76.7, 87.0, and 79.1 for AIMed, BioInfer, and LocText, respectively, slightly lower than the corresponding 76.9, 87.1, and 79.5 (row 2 of Table II). We speculate that since the vectors representing the other features are short, perturbing them may significantly distort the relation representation in the input vector. This finding suggests that we need to be careful in deciding how the input should be perturbed when dealing with text.
Table III reports Precision, Recall, and F score on AIMed and BioInfer. PCNN_raw is the original model whose input contains only the word embedding and entity positional information. BiLSTM_our stands for our evaluation results of the BiLSTM model using our parsed data on the standard dataset; the originally reported results are based on a dataset in which the authors did not use the standard number of instances.
V-D Virtual Adversarial Training
In the virtual adversarial training setting, we utilize both the labeled and unlabeled data to train the model. The addition of unlabeled data provides extra inputs with which to better smooth the model. The results are shown in the penultimate row ('PCNN+VAT') of Table II.
Although we improve on the baseline model for AIMed and LocText, it is surprising that there is a significant drop on BioInfer. We wonder whether this is due to a significant difference in the textual and distributional characteristics of the labeled and unlabeled data. For this reason, we conducted another experiment designed to avoid this issue. The simplest way is to use the labeled data as the text for the unlabeled dataset, i.e., use the same text but drop the labels. The results are shown in the last row ('PCNN+VAT*') of Table II and are slightly better than before. In particular, there is a significant improvement for BioInfer, where once again we improve on the baseline.
In the virtual adversarial training method, there is no prescribed choice for the size of the unlabeled dataset relative to the labeled one. In our experiments (the previously described results), we used the same size for the two datasets, i.e., an unlabeled set as large as the labeled set. To investigate the possible impact of changing the unlabeled data size, we conduct two more experiments with unlabeled sets two and three times the size of the labeled set. As shown in Fig. 3b, the equal-size choice appears to be the best; a bigger unlabeled dataset takes away from the training on labels, negatively affecting model performance.
V-E Comparison with Other Systems
We have shown that the adversarial and virtual adversarial training methods improve the performance of the PCNN model. We wish to place these results in the broader context of results reported in the literature on these benchmark sets. To the best of our knowledge, no machine learning model has been trained on the LocText dataset for the PLOC task except one prior work, which utilizes an external knowledge base through transfer learning. Thus, Table III compares our method only with previous machine learning models on the PPI task.
As shown in Table III, our system, which combines the PCNN model with the adversarial training technique, achieves state-of-the-art performance on the standard AIMed and BioInfer datasets of the PPI task.
In addition, we are aware of other deep learning models built for the PPI task, but our methods are not comparable to them for three reasons: a) different evaluation metrics: the DCNN model, the BiLSTM model, and the treeLSTM model employ the macro F score, which is usually used for multi-class classification problems; furthermore, the unbalanced evaluation corpora (positive:negative = 1:4.8 in AIMed and 1:2.8 in BioInfer) make the macro F score much higher than the standard F score; b) non-standard evaluation sets: for example, some authors delete nested entity interactions from the original corpora; c) different cross-validation methods: the McDepCNN model uses document-level evaluation, while the models in Table III use instance-level evaluation. (Document-level evaluation means that instances from the same document may appear only in the training set or only in the test set, never in both; instance-level evaluation does not impose this restriction during cross-validation.)
V-F Applying AT/VAT with RNN
As shown in Table II, adversarial training is an effective way to boost PCNN model performance. To verify that our method generalizes beyond CNN-based models, we also test it on an RNN-based model. In particular, we repeat all our investigations on a BiLSTM model from the literature, using the same input representation. We observe that adversarial and virtual adversarial training also improve the BiLSTM model's performance. However, although the BiLSTM model provides a good baseline, it does not achieve state-of-the-art performance. Due to space constraints, we do not provide the details for the BiLSTM model.
In this paper, we utilized adversarial training to alleviate the problem of insufficient training data for deep learning models and to promote model generalization. We applied this technique to relation extraction tasks and extended it with multiple adversarial examples in the training process. The experimental results show that adversarial training can improve model performance and that one extra adversarial example can further boost it.
We also applied the adversarial training technique in a semi-supervised fashion (virtual adversarial training) to utilize unlabeled data acquired at low cost. Performance improves when only a small amount of unlabeled data is used in semi-supervised training, but drops when a large volume of unlabeled data is involved. In addition, the performance of virtual adversarial training does not match that of adversarial training in the supervised learning scenario.
In the future, we plan to explore the effect of unlabeled data size on a larger scale to better guide its use. We also intend to pursue better adversarial example generation techniques, acquiring more useful examples during training to help model generalization.
-  T. Young, D. Hazarika, S. Poria, and E. Cambria, “Recent trends in deep learning based natural language processing,” IEEE Computational Intelligence Magazine, vol. 13, no. 3, pp. 55–75, 2018.
-  A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial machine learning at scale,” arXiv preprint arXiv:1611.01236, 2016.
-  T. Miyato, A. M. Dai, and I. Goodfellow, “Adversarial training methods for semi-supervised text classification,” arXiv preprint arXiv:1605.07725, 2016.
-  T. Miyato, S.-i. Maeda, M. Koyama, K. Nakae, and S. Ishii, “Distributional smoothing with virtual adversarial training,” arXiv preprint arXiv:1507.00677, 2015.
-  I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
-  Y. Wu, D. Bamman, and S. Russell, “Adversarial training for relation extraction,” in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017, pp. 1778–1783.
-  M. Surdeanu, J. Tibshirani, R. Nallapati, and C. D. Manning, “Multi-instance multi-label learning for relation extraction,” in Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning. Association for Computational Linguistics, 2012, pp. 455–465.
-  M. Krallinger, F. Leitner, C. Rodriguez-Penagos, and A. Valencia, “Overview of the protein-protein interaction annotation extraction task of BioCreative II,” Genome Biology, vol. 9, no. 2, p. S4, 2008.
-  J.-D. Kim, Y. Wang, T. Takagi, and A. Yonezawa, “Overview of Genia event task in BioNLP shared task 2011,” in Proceedings of the BioNLP Shared Task 2011 Workshop. Association for Computational Linguistics, 2011, pp. 7–15.
-  T. H. Nguyen and R. Grishman, “Relation extraction: Perspective from convolutional neural networks,” in Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, 2015, pp. 39–48.
-  D. Zeng, K. Liu, S. Lai, G. Zhou, J. Zhao et al., “Relation classification via convolutional deep neural network,” 2014.
-  Y.-L. Hsieh, Y.-C. Chang, N.-W. Chang, and W.-L. Hsu, “Identifying protein-protein interactions in biomedical literature using recurrent neural networks with long short-term memory,” in Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), vol. 2, 2017, pp. 240–245.
-  M. Miwa and M. Bansal, “End-to-end relation extraction using LSTMs on sequences and tree structures,” arXiv preprint arXiv:1601.00770, 2016.
-  P. Su, G. Li, C. Wu, and K. Vijay-Shanker, “Using distant supervision to augment manually annotated data for relation extraction,” BioRxiv, p. 626226, 2019.
-  D. Zeng, K. Liu, Y. Chen, and J. Zhao, “Distant supervision for relation extraction via piecewise convolutional neural networks,” in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, pp. 1753–1762.
-  Y. Peng and Z. Lu, “Deep learning for extracting protein-protein interactions from biomedical literature,” arXiv preprint arXiv:1706.01556, 2017.
-  C. Quan, L. Hua, X. Sun, and W. Bai, “Multichannel convolutional neural network for biological relation extraction,” BioMed Research International, vol. 2016, 2016.
-  L. Hua and C. Quan, “A shortest dependency path based convolutional neural network for protein-protein relation extraction,” BioMed Research International, vol. 2016, 2016.
-  Y. Zhang and Z. Lu, “Exploring semi-supervised variational autoencoders for biomedical relation extraction,” arXiv preprint arXiv:1901.06103, 2019.
-  C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013.
-  Z. Xie, S. I. Wang, J. Li, D. Lévy, A. Nie, D. Jurafsky, and A. Y. Ng, “Data noising as smoothing in neural network language models,” arXiv preprint arXiv:1703.02573, 2017.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
-  X. Zhu and A. B. Goldberg, “Introduction to semi-supervised learning,” Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 3, no. 1, pp. 1–130, 2009.
-  X. J. Zhu, “Semi-supervised learning literature survey,” University of Wisconsin-Madison Department of Computer Sciences, Tech. Rep., 2005.
-  C. Rosenberg, M. Hebert, and H. Schneiderman, “Semi-supervised self-training of object detection models,” 2005.
-  J. Chen, D. Ji, C. L. Tan, and Z. Niu, “Relation extraction using label propagation based semi-supervised learning,” in Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2006, pp. 129–136.
-  L. Sterckx, T. Demeester, J. Deleu, and C. Develder, “Knowledge base population using semantic label propagation,” Knowledge-Based Systems, vol. 108, pp. 79–91, 2016.
-  D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling, “Semi-supervised learning with deep generative models,” in Advances in Neural Information Processing Systems, 2014, pp. 3581–3589.
-  T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” arXiv preprint arXiv:1609.02907, 2016.
-  B. Chiu, G. Crichton, A. Korhonen, and S. Pyysalo, “How to train good word embeddings for biomedical NLP,” in Proceedings of the 15th Workshop on Biomedical Natural Language Processing, 2016, pp. 166–174.
-  D. McClosky, “Any domain parsing: automatic domain adaptation for natural language parsing,” 2010.
-  R. Bunescu, R. Ge, R. J. Kate, E. M. Marcotte, R. J. Mooney, A. K. Ramani, and Y. W. Wong, “Comparative experiments on learning information extractors for proteins and their interactions,” Artificial Intelligence in Medicine, vol. 33, no. 2, pp. 139–155, 2005.
-  S. Pyysalo, F. Ginter, J. Heimonen, J. Björne, J. Boberg, J. Järvinen, and T. Salakoski, “BioInfer: a corpus for information extraction in the biomedical domain,” BMC Bioinformatics, vol. 8, no. 1, p. 50, 2007.
-  J. M. Cejuela et al., “LocText: relation extraction of protein localizations to assist database curation,” BMC Bioinformatics, vol. 19, no. 1, p. 15, 2018.
-  S. Orchard et al., “The MIntAct project—IntAct as a common curation platform for 11 molecular interaction databases,” Nucleic Acids Research, vol. 42, no. D1, pp. D358–D363, 2013.
-  C.-H. Wei, H.-Y. Kao, and Z. Lu, “GNormPlus: an integrative approach for tagging genes, gene families, and protein domains,” BioMed Research International, vol. 2015, 2015.
-  U. Consortium, “UniProt: a worldwide hub of protein knowledge,” Nucleic Acids Research, vol. 47, no. D1, pp. D506–D515, 2018.
-  M. Abadi et al., “TensorFlow: Large-scale machine learning on heterogeneous distributed systems,” arXiv preprint arXiv:1603.04467, 2016.
-  L. Li, R. Guo, Z. Jiang, and D. Huang, “An approach to improve kernel-based protein–protein interaction extraction by learning from large-scale network data,” Methods, vol. 83, pp. 44–50, 2015.
-  G. Murugesan, S. Abdulkadhar, and J. Natarajan, “Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature,” PLoS ONE, vol. 12, no. 11, p. e0187379, 2017.
-  S.-P. Choi, “Extraction of protein–protein interactions (PPIs) from the literature by deep convolutional neural networks with various feature embeddings,” Journal of Information Science, vol. 44, no. 1, pp. 60–73, 2018.
-  S. Yadav, A. Ekbal, S. Saha, A. Kumar, and P. Bhattacharyya, “Feature assisted stacked attentive shortest dependency path based Bi-LSTM model for protein–protein interaction,” Knowledge-Based Systems, vol. 166, pp. 18–29, 2019.
-  M. Ahmed, J. Islam, M. R. Samee, and R. E. Mercer, “Identifying protein-protein interaction using Tree LSTM and structured attention,” in 2019 IEEE 13th International Conference on Semantic Computing (ICSC). IEEE, 2019, pp. 224–231.
-  H. Zhang, R. Guan, F. Zhou, Y. Liang, Z.-H. Zhan, L. Huang, and X. Feng, “Deep residual convolutional neural network for protein-protein interaction extraction,” IEEE Access, vol. 7, pp. 89354–89365, 2019.