1 Introduction
Relation Extraction (RE) is the process of generating structured relation knowledge from unstructured natural language texts. Traditional supervised methods [Zhou et al.2005, Bach and Badaskar2007] trained on small hand-labeled corpora, such as MUC (http://www.itl.nist.gov/iaui/894.02/related_projects/muc/) and ACE (http://www.itl.nist.gov/iad/mig/tests/ace/), can achieve high precision and recall. However, since producing hand-labeled corpora is laborious and expensive, the supervised approach cannot satisfy the increasing demand for building large-scale knowledge repositories as Web texts explode in volume. To address the lack of training data, we consider the distant [Mintz et al.2009] or weak [Hoffmann et al.2011] supervision paradigm attractive, and we improve the effectiveness of the paradigm in this paper. The intuition of the paradigm is that one can take advantage of several knowledge bases, such as WordNet (http://wordnet.princeton.edu), Freebase (http://www.freebase.com) and YAGO (http://www.mpi-inf.mpg.de/yago-naga/yago), to automatically label free texts, such as Wikipedia (http://www.wikipedia.org) and the New York Times corpora (http://catalog.ldc.upenn.edu/LDC2008T19)
, based on some heuristic alignment assumptions. An example of the basic but practical assumption is illustrated in Figure 1, in which we know that the two entities (Barack Obama, U.S.) are not only involved in relation instances (by convention, we regard a structured triple, composed of a pair of entities and a relation name with respect to them, as a relation instance) coming from knowledge bases (President-of(Barack Obama, U.S.) and Born-in(Barack Obama, U.S.)), but also co-occur in several relation mentions (the sentences that contain the given entity pair) appearing in free texts ("Barack Obama is the 44th and current President of the U.S.", "Barack Obama was born in Honolulu, Hawaii, U.S.", etc.). We extract diverse textual features from all those relation mentions and combine them into a rich feature vector labeled by the relation names (President-of and Born-in) to produce a weak training corpus for relation classification. This paradigm is promising for generating large-scale training corpora automatically. However, it comes up against three technical challenges:

Sparse features. As we cannot tell in advance what kinds of features are effective, we have to use NLP toolkits, such as Stanford CoreNLP (http://nlp.stanford.edu/downloads/corenlp.shtml), to extract a variety of textual features, e.g., named entity tags, part-of-speech tags and lexicalized dependency paths. Unfortunately, most of them appear only once in the training corpus, leading to very sparse features.

Noisy features. Not all relation mentions express the corresponding relation instances. For example, the second relation mention in Figure 1 does not explicitly describe any relation instance, so features extracted from this sentence can be noisy. Such cases commonly arise in feature extraction.

Incomplete labels. Similar to noisy features, the generated labels can be incomplete. For example, the fourth relation mention in Figure 1 should have been labeled with the relation Senate-of. However, the incomplete knowledge base does not contain the corresponding relation instance (Senate-of(Barack Obama, U.S.)). Therefore, the distant supervision paradigm may generate incompletely labeled corpora.
In essence, distantly supervised relation extraction is an incomplete multi-label classification task with sparse and noisy features.
In this paper, we formulate the relation-extraction task from a novel perspective of using matrix completion with a low-rank criterion. To the best of our knowledge, we are the first to apply this technique to relation extraction with distant supervision. More specifically, as shown in Figure 2, we model the task with a sparse matrix whose rows represent items (entity pairs) and whose columns contain noisy textual features and incomplete relation labels. In this way, relation classification is transformed into a problem of completing the unknown labels of the testing items in the sparse matrix that concatenates training and testing textual features with training labels, based on the assumption that the item-by-feature and item-by-label joint matrix is of low rank. The rationale of this assumption is that noisy features and incomplete labels are semantically correlated. The low-rank factorization of the sparse feature-label matrix delivers a low-dimensional, decorrelated representation of features and labels.
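To make the joint matrix concrete, the following toy sketch (our own illustration, not the released implementation; all names are hypothetical) assembles the item-by-(feature+label) matrix, with NaN marking the unknown testing labels to be completed:

```python
import numpy as np

def build_joint_matrix(X_train, Y_train, X_test):
    """Concatenate features and labels into the item-by-(feature+label) matrix.

    Rows: n training items followed by m testing items.
    Columns: d textual features followed by t relation labels.
    Unknown testing labels are marked with NaN; matrix completion
    is then asked to recover exactly those entries.
    """
    n, d = X_train.shape
    m = X_test.shape[0]
    t = Y_train.shape[1]
    Z = np.full((n + m, d + t), np.nan)
    Z[:n, :d] = X_train   # observed training features
    Z[n:, :d] = X_test    # observed testing features
    Z[:n, d:] = Y_train   # observed (possibly incomplete) training labels
    return Z              # Z[n:, d:] stays NaN: the labels to recover

# toy example: 3 training items, 2 testing items, 4 features, 2 labels
X_tr = np.random.randint(0, 2, (3, 4)).astype(float)
Y_tr = np.random.randint(0, 2, (3, 2)).astype(float)
X_te = np.random.randint(0, 2, (2, 4)).astype(float)
Z = build_joint_matrix(X_tr, Y_tr, X_te)
```

The NaN block in the lower-right corner corresponds exactly to the unknown testing labels described above.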
We contribute two optimization models, DRMC-b and DRMC-1 (DRMC abbreviates Distant supervision for Relation extraction with Matrix Completion), aiming at exploiting the sparsity to recover the underlying low-rank matrix and complete the unknown testing labels simultaneously. Moreover, a logistic cost function is integrated into our models to reduce the influence of noisy features and incomplete labels, as it is suitable for binary variables. We also modify the fixed point continuation (FPC) algorithm [Ma et al.2011] to find the global optimum. Experiments on two widely used datasets demonstrate that our noise-tolerant approaches outperform the baseline and the state-of-the-art methods. Furthermore, we discuss the influence of feature sparsity, and our approaches consistently achieve better performance than compared methods under different sparsity degrees.
2 Related Work
The idea of distant supervision was first proposed in the field of bioinformatics [Craven and Kumlien1999]. Snow et al. Snow2004 used WordNet as the knowledge base to discover more hypernym/hyponym relations between entities from news articles. However, both the bioinformatics database and WordNet are maintained by a few experts and thus are hardly kept up-to-date.
As we step into the big data era, the explosion of unstructured Web texts stimulates us to build more powerful models that can automatically extract relation instances from large-scale online natural language corpora without hand-labeled annotation. Mintz et al. mintz2009distant adopted Freebase [Bollacker et al.2008, Bollacker et al.2007], a large-scale online crowdsourced knowledge base which contains billions of relation instances and thousands of relation names, to distantly supervise the Wikipedia corpus. The basic alignment assumption of this work is that if a pair of entities participate in a relation, all sentences that mention these entities are labeled with that relation name. One can then extract a variety of textual features and learn a multi-class logistic regression classifier. Inspired by multi-instance learning [Maron and Lozano-Pérez1998], Riedel et al. riedel2010modeling relaxed this strong assumption and replaced "all sentences" with "at least one sentence". Hoffmann et al. hoffmannEtAl:2011:ACLHLT2011 pointed out that many entity pairs have more than one relation. They extended the multi-instance learning framework [Riedel et al.2010] to the multi-label setting. Surdeanu et al. Surdeanu2012 proposed a novel approach to multi-instance multi-label learning for relation extraction, which jointly models all the sentences in texts and all labels in knowledge bases for a given entity pair. Other studies [Takamatsu et al.2012, Min et al.2013, Zhang et al.2013, Xu et al.2013] addressed more specific issues, such as how to construct the negative class in learning or how to adopt more information, such as named entity tags, to improve the performance. Our work is most relevant to Riedel et al.'s riedelEtAl:2013:NAACLHLT, which treated the task as a matrix factorization problem. Their approach is composed of several models, such as PCA [Collins et al.2001] and collaborative filtering [Koren2008]. However, they did not address the data noise introduced by the basic assumption of distant supervision.
3 Model
We apply a new technique from the field of applied mathematics, i.e., low-rank matrix completion with convex optimization. The breakthrough work on this topic was made by Candès and Recht candes2009exact, who proved that most low-rank matrices can be perfectly recovered from an incomplete set of entries. This promising theory has been successfully applied to many active research areas, such as computer vision [Cabral et al.2011], recommender systems [Rennie and Srebro2005] and system control [Fazel et al.2001]. Our models for relation extraction are based on the theoretical framework proposed by Goldberg et al. Goldberg2010, which formulates multi-label transductive learning as a matrix completion problem. The new framework for classification enhances the robustness to data noise by penalizing different cost functions for features and labels.

3.1 Formulation
Suppose that we have built a training corpus for relation classification with n items (entity pairs), d-dimensional textual features, and t labels (relations), based on the basic alignment assumption proposed by Mintz et al. mintz2009distant. Let X_train ∈ R^{n×d} and Y_train ∈ {0,1}^{n×t} denote the feature matrix and the label matrix for training, respectively. The linear classifier we adopt aims to explicitly learn the weight matrix W ∈ R^{d×t} and the bias column vector b ∈ R^{t} with the constraint of minimizing the loss function

min_{W,b} loss(Y_train, X_train W + 1 b^T),    (1)

where 1 is the all-one column vector. Then we can predict the label matrix Y_test ∈ {0,1}^{m×t} of the m testing items with respect to their feature matrix X_test ∈ R^{m×d}. Let

Z = [ X_train  Y_train ; X_test  Y_test ].
This linear classification problem can be transformed into completing the unobservable entries in Y_test by means of the observable entries in X_train, Y_train and X_test, based on the assumption that the rank of the matrix Z is low. The model can be written as

min_{Z ∈ R^{(n+m)×(d+t)}} rank(Z)
s.t. z^X_ij = x_ij for (i,j) ∈ Ω_X;  z^Y_ij = y_ij for (i,j) ∈ Ω_Y,    (2)

where z^X and z^Y denote the feature block and the label block of Z, Ω_X represents the index set of observable feature entries in X_train and X_test, and Ω_Y denotes the index set of observable label entries in Y_train.
Formula (2) is usually impractical for real problems, as the entries in the matrix Z are corrupted by noise. We thus define

Z = Z* + E,

where Z* is the underlying low-rank matrix and E is the error matrix.
The rank function in Formula (2) is a non-convex function that is difficult to optimize. A convex surrogate of it is the nuclear norm ||Z||_* = Σ_k σ_k(Z) [Candès and Recht2009], where σ_k(Z) is the k-th largest singular value of Z.
To tolerate the noisy entries in the error matrix E, we minimize the cost functions c_x and c_y for features and labels, respectively, rather than using the hard constraints in Formula (2). According to Formula (1), Y_train can be represented as X_train W + 1 b^T instead of X_train W, by explicitly modeling the bias vector b. Therefore, this convex optimization model is called DRMC-b:

min_{Z,b} μ ||Z||_* + (1/|Ω_X|) Σ_{(i,j)∈Ω_X} c_x(z^X_ij, x_ij) + (λ/|Ω_Y|) Σ_{(i,j)∈Ω_Y} c_y(z^Y_ij + b_j, y_ij),    (3)

where μ and λ are positive trade-off weights. More specifically, we minimize the nuclear norm subject to the regularization terms, i.e., the cost functions c_x and c_y for features and labels.
If we implicitly model the bias vector b, Y_train can instead be denoted by [1, X_train] W', in which the first row of W' takes the role of b in DRMC-b. Then we derive another optimization model, called DRMC-1, over the augmented matrix Z ∈ R^{(n+m)×(1+d+t)} whose first column is all ones:

min_Z μ ||Z||_* + (1/|Ω_X|) Σ_{(i,j)∈Ω_X} c_x(z^X_ij, x_ij) + (λ/|Ω_Y|) Σ_{(i,j)∈Ω_Y} c_y(z^Y_ij, y_ij)
s.t. Z(:,1) = 1,    (4)

where Z(:,1) denotes the first column of Z.
For our relation classification task, both features and labels are binary. We assume that an actual entry u in the underlying matrix Z* generates the observed binary entry v in the observed sparse matrix via a sigmoid function [Jordan1995]:

Pr(v|u) = 1 / (1 + e^{−uv}).

Then we can apply the negative log-likelihood of this conditional probability to derive the logistic cost function for c_x and c_y:

c(u, v) = log(1 + e^{−uv}).

After completing the entries in Z, we adopt the sigmoid function to calculate the conditional probability of relation r_j given entity pair p_i, pertaining to the entry z^Y_ij in Z:

Pr(r_j | p_i) = 1 / (1 + e^{−z^Y_ij}).

Finally, we can obtain the Top-N predicted relation instances by ranking the values of Pr(r_j | p_i).
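The final scoring and ranking step can be sketched as follows (our own toy code with hypothetical names such as topn_predictions, not the released implementation): each completed label entry is squashed through the sigmoid and all candidate (entity pair, relation) instances are ranked by confidence.

```python
import numpy as np

def topn_predictions(Z_completed, d, entity_pairs, relations, n=5):
    """Score each (entity pair, relation) cell with the sigmoid of the
    completed entry, then rank all candidate relation instances."""
    label_block = Z_completed[:, d:]              # completed label columns
    prob = 1.0 / (1.0 + np.exp(-label_block))     # Pr(r_j | p_i)
    triples = [(entity_pairs[i], relations[j], prob[i, j])
               for i in range(prob.shape[0]) for j in range(prob.shape[1])]
    triples.sort(key=lambda x: -x[2])             # highest confidence first
    return triples[:n]
```

For instance, with one feature column (d = 1) and two label columns, the largest completed entry yields the top-ranked relation instance.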
4 Algorithm
The matrix rank minimization problem is NP-hard. Therefore, Candès and Recht candes2009exact suggested using a convex relaxation, nuclear norm minimization, instead. Then, Ma et al. Ma2011 proposed the fixed point continuation (FPC) algorithm, which is fast and robust. Moreover, Goldfarb and Ma goldfarb2011convergence proved the convergence of the FPC algorithm for solving the nuclear norm minimization problem. We thus adopt and modify the algorithm to find the optima of our noise-tolerant models, i.e., Formulae (3) and (4).
4.1 Fixed point continuation for DRMCb
Algorithm 1 describes the modified FPC algorithm for solving DRMC-b, which contains two steps in each iteration.
Gradient step: In this step, we infer the matrix gradient g(Z) and the bias vector gradient g(b) of the cost function terms as follows:

g(z^X_ij) = (1/|Ω_X|) · (−x_ij) / (1 + e^{x_ij z^X_ij}) for (i,j) ∈ Ω_X,
g(z^Y_ij) = (λ/|Ω_Y|) · (−y_ij) / (1 + e^{y_ij (z^Y_ij + b_j)}) for (i,j) ∈ Ω_Y,

and

g(b_j) = (λ/|Ω_Y|) Σ_{i:(i,j)∈Ω_Y} (−y_ij) / (1 + e^{y_ij (z^Y_ij + b_j)}),

with the gradients of unobserved entries set to zero. We take the gradient descent steps Z ← Z − τ_z g(Z) and b ← b − τ_b g(b) to gradually find the minimum of the cost function terms in Formula (3), where τ_z and τ_b are step sizes.
Shrinkage step: The goal of this step is to minimize the nuclear norm ||Z||_* in Formula (3). We first perform the singular value decomposition (SVD) [Golub and Kahan1965] of Z, Z = U Σ V^T, and then cut down each singular value. During the iteration, any value in Σ − τ_z μ that falls below zero is set to zero, so that the rank of the reconstructed matrix Z = U max(Σ − τ_z μ, 0) V^T is reduced. To accelerate the convergence, we use a continuation method. μ is initialized with a large value μ_1, resulting in a fast reduction of the rank at first. Then the convergence slows down as μ decreases while obeying μ_{k+1} = max(μ_k η_μ, μ_F). μ_F is the final value of μ, and η_μ is the decay parameter.
For the stopping criterion of the inner iterations, we define the relative error to measure the residual between the matrices of two successive iterations,

||Z_{k+1} − Z_k||_F / max(1, ||Z_k||_F) ≤ ε,

where ε is the convergence threshold.
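The two steps above can be sketched in a few lines of numpy (our own toy illustration of one gradient-plus-shrinkage iteration for DRMC-b, with hypothetical step sizes and without the continuation bookkeeping of the full algorithm):

```python
import numpy as np

def fpc_iteration(Z, b, X_obs, Y_obs, mask_x, mask_y, mu, lam,
                  tau_z=1.0, tau_b=1.0):
    """One gradient + shrinkage step of an FPC-style update for DRMC-b.

    X_obs/Y_obs hold observed +/-1 entries (0 where unobserved), and
    mask_x/mask_y are boolean index sets (Omega_X / Omega_Y) over Z's cells.
    """
    # gradient of the logistic cost c(u, v) = log(1 + exp(-u v)) w.r.t. u
    U = Z + np.where(mask_y, b, 0.0)        # add bias on label cells only
    V = X_obs * mask_x + Y_obs * mask_y     # observed binary targets
    with np.errstate(over='ignore'):
        grad = -V / (1.0 + np.exp(U * V))   # zero where V == 0 (unobserved)
    G = np.zeros_like(Z)
    G[mask_x] = grad[mask_x] / mask_x.sum()
    G[mask_y] = lam * grad[mask_y] / mask_y.sum()
    g_b = (lam * grad * mask_y).sum(axis=0) / mask_y.sum()

    # gradient step, then singular-value shrinkage toward low rank
    A = Z - tau_z * G
    Uu, s, Vt = np.linalg.svd(A, full_matrices=False)
    s = np.maximum(s - tau_z * mu, 0.0)     # cut down each singular value
    return Uu @ np.diag(s) @ Vt, b - tau_b * g_b
```

Repeating this iteration while decaying mu realizes the continuation scheme; singular values below tau_z * mu vanish, which is what drives the rank down.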
4.2 Fixed point continuation for DRMC1
Algorithm 2 is similar to Algorithm 1 except for two differences. First, there is no bias vector b. Second, a projection step is added to enforce the first column of the matrix Z to be 1. In addition, the matrix gradient of the label term for DRMC-1 is

g(z^Y_ij) = (λ/|Ω_Y|) · (−y_ij) / (1 + e^{y_ij z^Y_ij}) for (i,j) ∈ Ω_Y,

without the bias term.
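For illustration (a hypothetical helper of our own, not from the released code), the extra projection step of Algorithm 2 simply restores the all-one first column after the gradient and shrinkage steps have perturbed it:

```python
import numpy as np

def project_first_column(Z):
    """DRMC-1 projection step: re-impose the constraint Z[:, 0] == 1
    after each gradient/shrinkage update, leaving other columns intact."""
    Z = Z.copy()
    Z[:, 0] = 1.0
    return Z
```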
5 Experiments
Dataset | # of training tuples | # of testing tuples | % with more than one label | # of features | # of relation labels
NYT'10  | 4,700 | 1,950 | 7.5% | 244,903 | 51
NYT'13  | 8,077 | 3,716 | 0%   | 1,957   | 51
Model  | NYT'10 (t=2) | NYT'10 (t=3) | NYT'10 (t=4) | NYT'10 (t=5) | NYT'13
DRMC-b | 51.4 ± 8.7 (51) | 45.6 ± 3.4 (46) | 41.6 ± 2.5 (43) | 36.2 ± 8.8 (37) | 84.6 ± 19.0 (85)
DRMC-1 | 16.0 ± 1.0 (16) | 16.4 ± 1.1 (17) | 16.0 ± 1.4 (17) | 16.8 ± 1.5 (17) | 15.8 ± 1.6 (16)
To conduct reliable experiments, we tune and estimate the parameters of our approaches, DRMC-b and DRMC-1, and compare them with four kinds of landmark methods [Mintz et al.2009, Hoffmann et al.2011, Surdeanu et al.2012, Riedel et al.2013] on two public datasets.

5.1 Dataset

The two widely used datasets that we adopt are both automatically generated by aligning Freebase to the New York Times corpora. The first dataset (http://iesl.cs.umass.edu/riedel/ecml/), NYT'10, was developed by Riedel et al. riedel2010modeling, and also used by Hoffmann et al. hoffmannEtAl:2011:ACLHLT2011 and Surdeanu et al. Surdeanu2012. Three kinds of features, namely lexical, syntactic and named entity tag features, were extracted from relation mentions. The second dataset (http://iesl.cs.umass.edu/riedel/data-univSchema/), NYT'13, was released by Riedel et al. riedelEtAl:2013:NAACLHLT, in which only the lexicalized dependency path between two entities is regarded as a feature. Table 1 shows that the two datasets differ in some main attributes. More specifically, NYT'10 contains much higher-dimensional features than NYT'13, but fewer training and testing items.
5.2 Parameter setting
In this part, we address the issue of setting the parameters: the trade-off weights μ and λ, the step sizes τ_z and τ_b, and the decay parameter η_μ.
We set λ so that the contributions of the cost function terms for the feature and label matrices are equal in Formulae (3) and (4). μ is assigned a series of values obeying μ_{k+1} = max(μ_k η_μ, μ_F). We follow the suggestion in [Goldberg et al.2010] that μ starts at σ_1 η_μ, where σ_1 is the largest singular value of the matrix Z, and we fix the decay parameter η_μ and the final value μ_F accordingly. Ma et al. Ma2011 revealed that as long as the non-negative step sizes τ_z and τ_b satisfy certain upper bounds, the FPC algorithm is guaranteed to converge to a global optimum. Therefore, we set τ_z and τ_b to satisfy those constraints on both datasets.
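Under the (illustrative) assumption of a geometric decay with a floor at the final value, the continuation schedule for μ can be sketched as follows; the concrete eta_mu and mu_final values here are placeholders, not the paper's settings:

```python
import numpy as np

def mu_schedule(Z, eta_mu=0.25, mu_final=1e-4):
    """Yield the continuation sequence mu_1 = sigma_1 * eta_mu,
    mu_{k+1} = max(mu_k * eta_mu, mu_F), ending at mu_F."""
    sigma1 = np.linalg.svd(Z, compute_uv=False)[0]  # largest singular value
    mu = sigma1 * eta_mu
    while mu > mu_final:
        yield mu
        mu = max(mu * eta_mu, mu_final)
    yield mu_final
```

A large initial μ shrinks many singular values at once (fast rank reduction); as μ decays toward μ_F, the updates become conservative and the algorithm converges.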
5.3 Rank estimation
Even though the FPC algorithm converges in an iterative fashion, the optimal rank, which varies with different datasets, is difficult to decide in advance. In practice, we record the rank of the matrix Z at each round of iteration until the algorithm converges at a rather small threshold. The reason is that we suppose the optimal low-rank representation of the matrix Z conveys the truly effective information about the underlying semantic correlation between the features and the corresponding labels.
We use five-fold cross validation on the validation set and evaluate the performance on each fold with different ranks. At each round of iteration, we obtain a recovered matrix and average the F1 scores from the Top-5 to the Top-all predicted relation instances to measure the performance. Figure 3 illustrates the curves of the average F1 scores. After recording the rank associated with the highest F1 score on each fold, we compute the mean and the standard deviation to estimate the range of optimal ranks for testing. Table 2 lists the ranges of optimal ranks for DRMC-b and DRMC-1 on NYT'10 and NYT'13.
On both datasets, we observe an identical phenomenon: the performance gradually increases as the rank of the matrix declines, until it reaches the optimum. However, the performance decreases sharply if we continue reducing the rank beyond the optimum. An intuitive explanation is that a high-rank matrix contains much noise and the model tends to overfit, whereas a matrix of excessively low rank is more likely to lose principal information and the model tends to underfit.
5.4 Method Comparison
Firstly, we conduct experiments to compare our approaches with Mintz-09 [Mintz et al.2009], MultiR-11 [Hoffmann et al.2011], MIML-12 and MIML-at-least-one-12 [Surdeanu et al.2012] on the NYT'10 dataset. Surdeanu et al. Surdeanu2012 released the open source code (http://nlp.stanford.edu/software/mimlre.shtml) to reproduce the experimental results of those previous methods. Moreover, their programs can control the feature sparsity degree through a threshold t, which filters out the features that appear fewer than t times. They set t = 5 in the original code by default. Therefore, we follow their settings and adopt the same way to filter the features. In this way, we guarantee a fair comparison for all methods. Figure 4 (a) shows that our approaches achieve significant improvement in performance.
We also perform experiments to compare our approaches with the state-of-the-art NFE-13 [Riedel et al.2013] and its sub-methods (N-13, F-13 and NF-13) on the NYT'13 dataset (readers may refer to http://www.riedelcastro.org/uschema for the details of those methods; we omit the description due to space limitations). Figure 4 (b) illustrates that our approaches still outperform the state-of-the-art methods.
Top-N   | NFE-13 | DRMC-b | DRMC-1
Top-100 | 62.9%  | 82.0%  | 80.0%
Top-200 | 57.1%  | 77.0%  | 80.0%
Top-500 | 37.2%  | 70.2%  | 77.0%
Average | 52.4%  | 76.4%  | 79.0%
In practical applications, we also care about the precision of the Top-N predicted relation instances. Therefore, we compare the precision of the Top-100, Top-200 and Top-500 instances for DRMC-1, DRMC-b and the state-of-the-art method NFE-13 [Riedel et al.2013]. Table 3 shows that DRMC-b and DRMC-1 achieve 24.0% and 26.6% precision increments on average, respectively.
6 Discussion
We have mentioned that the basic alignment assumption of distant supervision [Mintz et al.2009] tends to generate noisy (noisy features and incomplete labels) and sparse (sparse features) data. In this section, we discuss how our approaches tackle these natural flaws.
Due to the noisy features and incomplete labels, the underlying low-rank data matrix with the truly effective information tends to be corrupted, and the rank of the observed data matrix can be extremely high. Figure 5 demonstrates that the ranks of the data matrices are approximately 2,000 at the start of the optimization of DRMC-b and DRMC-1. However, those high ranks result in poor performance. As the ranks decline toward the optimum, the performance gradually improves, implying that our approaches filter the noise in the data and keep the principal information for classification by recovering the underlying low-rank data matrix.
Furthermore, we discuss the influence of feature sparsity on our approaches and the state-of-the-art methods. We relax the feature filtering threshold t in Surdeanu et al.'s Surdeanu2012 open source program to generate sparser features from the NYT'10 dataset. Figure 6 shows that our approaches consistently outperform the baseline and the state-of-the-art methods under diverse feature sparsity degrees. Table 2 also lists the ranges of optimal ranks for DRMC-b and DRMC-1 with different t. We observe that the optimal range is relatively stable for each approach. In other words, for each approach, the amount of truly effective information about the underlying semantic correlation keeps constant for the same dataset, which, to some extent, explains why our approaches are robust to sparse features.
7 Conclusion and Future Work
In this paper, we contributed two noise-tolerant optimization models (the source code can be downloaded from https://github.com/nlpgeek/DRMC/tree/master), DRMC-b and DRMC-1, for the distantly supervised relation extraction task from a novel perspective. Our models are based on matrix completion with a low-rank criterion. Experiments demonstrated that the low-rank representation of the feature-label matrix can exploit the underlying semantically correlated information for relation classification and is effective at overcoming the difficulties incurred by sparse and noisy features and incomplete labels, so that we achieved significant improvements in performance.
Our proposed models also leave open questions for the distantly supervised relation extraction task. First, they cannot process newly arriving testing items efficiently, as we have to reconstruct the data matrix containing not only the testing items but also all the training items, and complete it in an iterative fashion again. Second, the volume of the datasets we adopt is relatively small. For future work, we plan to improve our models so that they are capable of incremental learning on large-scale datasets [Chang2011].
Acknowledgments
This work is supported by National Program on Key Basic Research Project (973 Program) under Grant 2013CB329304, National Science Foundation of China (NSFC) under Grant No.61373075 and HTC Laboratory.
References
 [Bach and Badaskar2007] Nguyen Bach and Sameer Badaskar. 2007. A review of relation extraction. Literature review for Language and Statistics II.

[Bollacker et al.2007] Kurt Bollacker, Robert Cook, and Patrick Tufts. 2007. Freebase: A shared database of structured general human knowledge. In Proceedings of the national conference on Artificial Intelligence, volume 22, page 1962. AAAI Press.
[Bollacker et al.2008] Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250. ACM.
[Cabral et al.2011] Ricardo S Cabral, Fernando De la Torre, João P Costeira, and Alexandre Bernardino. 2011. Matrix completion for multi-label image classification. In Advances in Neural Information Processing Systems, pages 190–198.
 [Candès and Recht2009] Emmanuel J Candès and Benjamin Recht. 2009. Exact matrix completion via convex optimization. Foundations of Computational mathematics, 9(6):717–772.
[Chang2011] Edward Y Chang. 2011. Foundations of Large-Scale Multimedia Information Management and Retrieval. Springer.

[Collins et al.2001] Michael Collins, Sanjoy Dasgupta, and Robert E Schapire. 2001. A generalization of principal components analysis to the exponential family. In Advances in neural information processing systems, pages 617–624.
[Craven and Kumlien1999] Mark Craven and Johan Kumlien. 1999. Constructing biological knowledge bases by extracting information from text sources. In ISMB, volume 1999, pages 77–86.
 [Fazel et al.2001] Maryam Fazel, Haitham Hindi, and Stephen P Boyd. 2001. A rank minimization heuristic with application to minimum order system approximation. In American Control Conference, 2001. Proceedings of the 2001, volume 6, pages 4734–4739. IEEE.
 [Goldberg et al.2010] Andrew Goldberg, Ben Recht, Junming Xu, Robert Nowak, and Xiaojin Zhu. 2010. Transduction with matrix completion: Three birds with one stone. In Advances in neural information processing systems, pages 757–765.
[Goldfarb and Ma2011] Donald Goldfarb and Shiqian Ma. 2011. Convergence of fixed-point continuation algorithms for matrix rank minimization. Foundations of Computational Mathematics, 11(2):183–210.
 [Golub and Kahan1965] Gene Golub and William Kahan. 1965. Calculating the singular values and pseudoinverse of a matrix. Journal of the Society for Industrial & Applied Mathematics, Series B: Numerical Analysis, 2(2):205–224.
[Hoffmann et al.2011] Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 541–550, Portland, Oregon, USA, June. Association for Computational Linguistics.

[Jordan1995] Michael Jordan. 1995. Why the logistic function? A tutorial discussion on probabilities and neural networks. Computational Cognitive Science Technical Report.
[Koren2008] Yehuda Koren. 2008. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 426–434. ACM.
[Ma et al.2011] Shiqian Ma, Donald Goldfarb, and Lifeng Chen. 2011. Fixed point and Bregman iterative methods for matrix rank minimization. Mathematical Programming, 128(1-2):321–353.
[Maron and Lozano-Pérez1998] Oded Maron and Tomás Lozano-Pérez. 1998. A framework for multiple-instance learning. In Advances in neural information processing systems, pages 570–576.
 [Min et al.2013] Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. 2013. Distant supervision for relation extraction with an incomplete knowledge base. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 777–782, Atlanta, Georgia, June. Association for Computational Linguistics.

[Mintz et al.2009] Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2 - Volume 2, pages 1003–1011. Association for Computational Linguistics.
[Rennie and Srebro2005] Jasson D. M. Rennie and Nathan Srebro. 2005. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd international conference on Machine learning, pages 713–719. ACM.
[Riedel et al.2010] Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Machine Learning and Knowledge Discovery in Databases, pages 148–163. Springer.
 [Riedel et al.2013] Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 74–84, Atlanta, Georgia, June. Association for Computational Linguistics.
 [Snow et al.2004] Rion Snow, Daniel Jurafsky, and Andrew Y Ng. 2004. Learning syntactic patterns for automatic hypernym discovery. Advances in Neural Information Processing Systems 17.
 [Surdeanu et al.2012] Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 455–465. Association for Computational Linguistics.
 [Takamatsu et al.2012] Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa. 2012. Reducing wrong labels in distant supervision for relation extraction. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, pages 721–729. Association for Computational Linguistics.
 [Xu et al.2013] Wei Xu, Raphael Hoffmann, Le Zhao, and Ralph Grishman. 2013. Filling knowledge base gaps for distant supervision of relation extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 665–670, Sofia, Bulgaria, August. Association for Computational Linguistics.
 [Zhang et al.2013] Xingxing Zhang, Jianwen Zhang, Junyu Zeng, Jun Yan, Zheng Chen, and Zhifang Sui. 2013. Towards accurate distant supervision for relational facts extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 810–815, Sofia, Bulgaria, August. Association for Computational Linguistics.
 [Zhou et al.2005] Guodong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 427–434. Association for Computational Linguistics.