Entity typing is the assignment of a semantic label to a span of text, where that span is usually a mention
of some entity in the real world. Named entity recognition (NER) is a canonical information extraction task, commonly considered a form of entity typing that assigns spans to one of a handful of types, such as PER, ORG, GPE, and so on. Fine-grained entity typing (FET) seeks to classify spans into types according to more diverse, semantically richer ontologies Ling and Weld (2012); Yosef et al. (2012); Gillick et al. (2014); Del Corro et al. (2015); Choi et al. (2018), and has begun to be used in downstream models for entity linking Gupta et al. (2017); Raiman and Raiman (2018).
Consider the example in Figure 1 from the FET dataset FIGER Ling and Weld (2012). The mention of interest, Hollywood Hills, would be typed with the single label LOC in traditional NER, but may be typed with the set of types /location, /geography, /geography/mountain under a fine-grained typing scheme. In these finer-grained typing schemes, types usually form a hierarchy: a set of coarse types lies at the top level, similar to traditional NER types, e.g. /person; below them are finer types that are subtypes of the top-level types, e.g. /person/artist or /person/doctor.
Most prior work concerning fine-grained entity typing has approached the problem as multi-label classification: given an entity mention together with its context, the classifier seeks to output a set of types, where each type is a node in the hierarchy. Approaches to FET range from hand-crafted sparse features to various neural architectures (Ren et al., 2016a; Shimaoka et al., 2017; Lin and Ji, 2019, inter alia; see section 2).
Perhaps owing to the historical transition from "flat" NER types, there has been relatively little work in FET that exploits the ontological tree structure, where type labels satisfy the hierarchical property: a subtype is valid only if its parent supertype is also valid. We propose a novel method that takes the explicit ontology structure into account via a multi-level learning-to-rank approach that ranks the candidate types conditioned on the given entity mention. Intuitively, coarser types are easier to classify whereas finer types are harder: we capture this intuition by allowing distinct margins at each level of the ranking model. Coupled with a novel coarse-to-fine decoder that searches on the type hierarchy, our approach guarantees that predictions do not violate the hierarchical property, and achieves state-of-the-art results according to multiple measures across various commonly used datasets.
2 Related Work
FET is usually studied as allowing for sentence-level context in making predictions, notably starting with Ling and Weld (2012) and Gillick et al. (2014), which introduced the commonly used FIGER and OntoNotes datasets for FET. While researchers have considered the benefits of document-level Zhang et al. (2018) and corpus-level Yaghoobzadeh and Schütze (2015) context, here we focus on the sentence-level variant for the best contrast to prior work.
Progress in FET has focused primarily on two fronts: better mention representations, and incorporating the type hierarchy.
Better mention representations: the community has moved from hand-crafted features to distributed representations Yogatama et al. (2015), to pre-trained word embeddings with LSTMs Ren et al. (2016a, b); Shimaoka et al. (2016); Abhishek et al. (2017); Shimaoka et al. (2017) or CNNs Murty et al. (2018), with mention-to-context attention Zhang et al. (2018), and then to employing pre-trained language models like ELMo Peters et al. (2018) to generate ever better representations Lin and Ji (2019). Our approach builds upon these developments and uses state-of-the-art mention encoders.
Incorporating the hierarchy: Most prior work approaches the hierarchical typing problem as multi-label classification without using information in the hierarchical structure, but there are a few exceptions. Ren et al. (2016a) proposed an adaptive margin for learning-to-rank so that similar types have a smaller margin; Xu and Barbosa (2018) proposed hierarchical loss normalization that penalizes outputs violating the hierarchical property; and Murty et al. (2018) proposed to learn a subtyping relation to constrain the type embeddings in the type space. In contrast to these approaches, our coarse-to-fine decoding approach strictly guarantees that the output does not violate the hierarchical property, leading to better performance. HYENA (Yosef et al., 2012) applied ranking to sibling types in a type hierarchy, but the number of predicted positive types is determined by a separately trained meta-model, so the approach does not support end-to-end neural training.
Researchers have proposed alternative FET formulations whose types do not form a hierarchy. In particular, ultra-fine entity typing Choi et al. (2018); Xiong et al. (2019); Onoe and Durrett (2019) uses a very large set of types derived from phrases mined from a corpus. FET in KB Jin et al. (2019) maps mentions to types in a knowledge base with multiple relations, forming a type graph. Dai et al. (2019) augment the task with entity linking to KBs.
3 Problem Formulation
We denote a mention as a tuple $(s, l, r)$, where $s = (w_1, \dots, w_n)$ is the sentential context and the span $(l, r)$ marks a mention of interest in sentence $s$; that is, the mention of interest is $(w_l, \dots, w_r)$. Given $(s, l, r)$, a hierarchical entity typing model outputs a set of types $Y \subseteq T$, where $T$ is the type ontology.
Type hierarchies take the form of a forest, where each tree is rooted by a top-level supertype (e.g. /person, /location, etc.). We add a dummy parent node entity = “/”, the supertype of all entity types, to all the top-level types, effectively transforming a type forest to a type tree. In Figure 2, we show 3 type ontologies associated with 3 different datasets (see subsection 5.1), with the dummy entity node augmented.
We now introduce some notation for referring to aspects of a type tree. The binary relation "type $t_1$ is a subtype of $t_2$" is denoted $t_1 <: t_2$ (per the programming language literature on type systems that support subtyping). The unique parent of a type $t$ in the type tree is denoted $\mathrm{par}(t)$, which is undefined for the root entity. The immediate subtypes of $t$ (its children nodes) are denoted $\mathrm{chd}(t)$. The siblings of $t$, those sharing the same immediate parent, are denoted $\mathrm{sib}(t)$, where $t \notin \mathrm{sib}(t)$.
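As an illustration of this bookkeeping (the class and method names here are ours, not the paper's), a type tree with the dummy root "/" can be sketched as a child-to-parent map:

```python
class TypeTree:
    """Sketch of a type ontology: a child -> parent map rooted at the dummy "/"."""

    def __init__(self, parent_of):
        self.parent_of = dict(parent_of)      # e.g. {"/person/artist": "/person", ...}
        self.parent_of["/"] = None            # the dummy root "entity" has no parent

    def parent(self, t):
        return self.parent_of[t]

    def children(self, t):
        return {c for c, p in self.parent_of.items() if p == t}

    def siblings(self, t):
        # types sharing t's immediate parent, excluding t itself
        return {u for u in self.children(self.parent(t)) if u != t}

    def level(self, t):
        # depth below the dummy root, so level("/person") == 1
        d = 0
        while self.parent_of[t] is not None:
            t, d = self.parent_of[t], d + 1
        return d
```

This mirrors the notation above: `parent`, `children`, `siblings`, and `level` play the roles of $\mathrm{par}$, $\mathrm{chd}$, $\mathrm{sib}$, and the level of a type.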
In the AIDA FET ontology (see Figure 2), the maximum depth of the tree is 3, and each mention can be typed with at most 1 type from each level. We term this scenario single-path typing, since there can be only 1 path starting from the root (entity) of the type tree. This is in contrast to multi-path typing, as in the BBN dataset, where mentions may be labeled with multiple types on the same level of the tree.
Additionally, in AIDA, there are mentions labeled as /per/police/&lt;unspecified&gt;. In FIGER, we find instances labeled with type /person but not any further subtype. What does it mean when a mention is labeled with a partial type path, i.e., a type $t$ but none of the subtypes of $t$? We consider two interpretations:
Exclusive: the mention is of type $t$, but not of any subtype of $t$.
Undefined: the mention is of type $t$, but whether it is an instance of some subtype of $t$ is unknown.
We devise different strategies to deal with these two conditions. Under the exclusive case, we add a dummy other node to every intermediate branch node in the type tree. For any mention labeled with type $t$ but none of the subtypes of $t$, we add the additional label $t$/other to its labels (see Figure 2: AIDA). For example, if we interpret the partial type path /person in FIGER as exclusive, we add the type /person/other to that instance. Under the undefined case, we do not modify the labels in the dataset. We will see that this choice can make a significant difference depending on how a specific dataset is annotated.
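The exclusive interpretation can be sketched as a small label transformation (a sketch under our assumptions; `children_of` is a hypothetical map from each branch type to its ontology children):

```python
def apply_exclusive(labels, children_of):
    """Under the 'exclusive' reading, a partial type path t (a labeled type
    with children in the ontology but no labeled subtype) receives the extra
    dummy label t + '/other'."""
    out = set(labels)
    for t in labels:
        kids = children_of.get(t, set())
        if kids and not (kids & set(labels)):
            out.add(t + "/other")
    return out
```

Under the undefined interpretation, the label set would simply be returned unchanged.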
4.1 Mention Representation
Hidden representations for the tokens in sentence $s$ are generated by leveraging recent advances in language model pre-training, e.g. ELMo Peters et al. (2018). (Lin and Ji (2019) found that ELMo performs better than BERT Devlin et al. (2019) for FET; our internal experiments confirm this finding, and we hypothesize that it is due to the richer character-level information in lower-level ELMo representations, which is useful for FET.) The ELMo representation for each token $w_i$ is denoted $x_i$. Dropout is applied to the ELMo vectors.
Our mention encoder largely follows Lin and Ji (2019).
Then we employ the mention-to-context attention first described in Zhang et al. (2018) and later employed by Lin and Ji (2019): a context vector $c$ is generated by attending over the sentence with a query vector derived from the mention vector $m$. We use the multiplicative attention of Luong et al. (2015):
The final representation for an entity mention is the concatenation of the mention vector and the context vector: $e = [m; c]$.
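A minimal numpy sketch of this encoder, assuming mean-pooling for the mention vector and a bilinear (Luong-style multiplicative) attention weight `W` (both our simplifying assumptions, not the paper's exact architecture):

```python
import numpy as np

def mention_context_repr(H, l, r, W):
    """H: (n, d) token vectors; [l, r) is the mention span; W: (d, d) bilinear
    attention weight. Returns the concatenated representation [m; c]."""
    m = H[l:r].mean(axis=0)            # mention vector (one simple pooling choice)
    scores = H @ (W @ m)               # multiplicative attention scores, one per token
    a = np.exp(scores - scores.max())
    a /= a.sum()                       # softmax over the sentence
    c = a @ H                          # context vector: attention-weighted token sum
    return np.concatenate([m, c])
```

The output dimensionality is twice the token dimensionality, matching the concatenation $[m; c]$.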
4.2 Type Scorer
We learn a type embedding $u_t$ for each type $t$. To score an instance with representation $e$, we pass it through a 2-layer feed-forward network (with a nonlinearity) that maps $e$ into the same space as the type embeddings. The final score is an inner product between the transformed feature vector and the type embedding: $s(t) = u_t^\top f(e)$.
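A sketch of such a scorer (the tanh nonlinearity and all shapes are our assumptions):

```python
import numpy as np

def type_score(e, T, W1, b1, W2, b2):
    """2-layer feed-forward mapping of the mention feature e into the type
    embedding space, followed by inner products with every type embedding
    (rows of T). Returns one score per type."""
    h = np.tanh(W1 @ e + b1)
    z = np.tanh(W2 @ h + b2)
    return T @ z
```

All types are scored in one matrix-vector product, which is how the per-type scores $s(t)$ used by the ranking losses below would typically be computed in a batch.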
4.3 Hierarchical Learning-to-Rank
We introduce our novel hierarchical learning-to-rank loss that (1) allows for natural multi-label classification and (2) takes the hierarchical ontology into account.
We start with a multi-class hinge loss that ranks positive types above negative types Weston and Watkins (1999):

$$L_{\text{flat}} = \sum_{t \in Y} \sum_{t' \notin Y} \big[ \xi - s(t) + s(t') \big]_+ \quad (5)$$

where $[x]_+ = \max(0, x)$. This is learning-to-rank with a ranking SVM Joachims (2002): the model learns to rank the positive types $t \in Y$ above the negative types $t' \notin Y$ by imposing a margin between $t$ and $t'$: type $t$ should score higher than $t'$ by at least $\xi$. Note that in Equation 5, since this is a linear SVM, the margin hyperparameter $\xi$ can simply be set to 1 (the type embeddings are linearly scalable), and we rely on regularization to constrain the type embeddings.
However, this method considers all candidate types to be flat instead of hierarchical — all types are given the same treatment without any prior on their relative position in the type hierarchy. Intuitively, coarser types (higher in the hierarchy) should be easier to determine (e.g. /person vs /location should be fairly easy for the model), but fine-grained types (e.g. /person/artist/singer) are harder.
We encode this intuition by (i) learning to rank types only within the same level of the type tree, and (ii) setting different margin parameters for the ranking model at different levels:

$$L_{\text{level}} = \sum_{t \in Y} \sum_{t' \in \mathrm{sib}(t) \setminus Y} \big[ \xi_{\mathrm{lvl}(t)} - s(t) + s(t') \big]_+ \quad (6)$$

Here $\mathrm{lvl}(t)$ is the level of type $t$: for example, $\mathrm{lvl}(\text{/person}) = 1$ and $\mathrm{lvl}(\text{/person/artist}) = 2$. In Equation 6, each positive type $t$ is only compared against its negative siblings $\mathrm{sib}(t) \setminus Y$, and the margin hyperparameter is set to $\xi_{\mathrm{lvl}(t)}$, i.e., a margin dependent on the level of $t$ in the tree. Intuitively, we should set $\xi_1 > \xi_2 > \cdots$, since our model should be able to learn a larger margin between easier pairs: we show that this is superior to using a single margin in our experiments.
Analogous to the reasoning that in Equation 5 the margin can simply be 1, only the relative ratios between the $\xi_l$'s matter. For simplicity (we did hyperparameter search on these margin hyperparameters and found that Equation 7 generalized well), if the ontology has $L$ levels, we assign margins that halve with each level down the tree:

$$\xi_l = 2^{L - l} \quad (7)$$

For example, given an ontology with 3 levels, the margins per level are $(4, 2, 1)$.
Equation 6 only ranks positive types higher than negative types, so that all children of a given parent type are ranked by their relevance to the entity mention. What should be the threshold between positive and negative types? We could set the threshold to 0 (approaching the multi-label classification problem as a set of binary classification problems, see Lin and Ji (2019)), or tune an adaptive, type-specific threshold for each parent type Zhang et al. (2018). Here, we propose a simpler method.
We propose to directly use the parent node as the threshold. If a positive type is $t$, we learn the following ranking relation:

$$t \succ \mathrm{par}(t) \succ t', \quad \forall t' \in \mathrm{sib}(t) \setminus Y \quad (8)$$

where $\succ$ means "ranks higher than". For example, suppose a mention has gold type /person/artist/singer. Since the parent type /person/artist can be considered a kind of prior for all types of artists, the model should learn that the positive type "singer" has higher confidence than "artist", and in turn higher confidence than other subtypes of artist like "author" or "actor". Hence the ranker should learn that a positive subtype ranks higher than its parent, and its parent ranks higher than its negative children. Under this formulation, at decoding time, given parent type $t$, a child subtype $t'$ that scores higher than $t$ should be output as a positive label.
We translate the ranking relation in Equation 8 into a ranking loss that extends Equation 6. In Equation 6, there is an expected margin $\xi_{\mathrm{lvl}(t)}$ between positive types and negative types. Since we inserted the parent in the middle, we divide this margin into $\alpha \xi_{\mathrm{lvl}(t)}$ and $(1 - \alpha) \xi_{\mathrm{lvl}(t)}$: the former is the margin between positive types and the parent; the latter is the margin between the parent and the negative types. For a visualization see Figure 3.
The hyperparameter $\alpha$ can be used to tune the precision-recall tradeoff when outputting types: the smaller $\alpha$ is, the smaller the expected margin between positive types and the parent. This intuitively increases precision but decreases recall (only very confident types are output). Vice versa, increasing $\alpha$ decreases precision but increases recall.
Therefore we learn 3 sets of ranking relations from Equation 8: (i) positive types should score above the parent by $\alpha \xi_{\mathrm{lvl}(t)}$; (ii) the parent should score above any negative sibling by $(1 - \alpha) \xi_{\mathrm{lvl}(t)}$; (iii) positive types should score above negative siblings by $\xi_{\mathrm{lvl}(t)}$. Our final hierarchical ranking loss is formulated as

$$L_{\text{rank}} = \sum_{t \in Y} \Big( \big[\alpha \xi_{\mathrm{lvl}(t)} - s(t) + s(\mathrm{par}(t))\big]_+ + \sum_{t' \in \mathrm{sib}(t) \setminus Y} \big[(1 - \alpha)\xi_{\mathrm{lvl}(t)} - s(\mathrm{par}(t)) + s(t')\big]_+ + \sum_{t' \in \mathrm{sib}(t) \setminus Y} \big[\xi_{\mathrm{lvl}(t)} - s(t) + s(t')\big]_+ \Big)$$
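The three ranking relations can be sketched as follows (a sketch under our assumptions: `parent`, `siblings`, and `level` are callables over the type tree, and `margin_of_level` holds the per-level margins):

```python
def hier_rank_loss(scores, positive, parent, siblings, margin_of_level, level, alpha=0.5):
    """Hierarchical ranking hinge loss: for each positive type t with margin
    xi = margin_of_level[level(t)],
      (i)   t outscores its parent by alpha * xi,
      (ii)  the parent outscores each negative sibling by (1 - alpha) * xi,
      (iii) t outscores each negative sibling by xi."""
    loss = 0.0
    for t in positive:
        xi = margin_of_level[level(t)]
        p = parent(t)
        if p is not None:
            loss += max(0.0, alpha * xi - scores[t] + scores[p])       # (i)
        for u in siblings(t):
            if u in positive:
                continue
            if p is not None:
                loss += max(0.0, (1 - alpha) * xi - scores[p] + scores[u])  # (ii)
            loss += max(0.0, xi - scores[t] + scores[u])                    # (iii)
    return loss
```

Setting `alpha` closer to 0 shrinks the positive-to-parent margin, matching the precision-recall tradeoff described above.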
Predicting the types for each entity mention is performed via iterative search on the type tree, from the root entity node through coarser types to finer-grained types. This ensures that our output does not violate the hierarchical property: if a subtype is output, its parent must be output as well.
Given an instance, we compute the score $s(t)$ for each type $t$; the search process starts with the root node entity of the type tree in the queue. For each type $t$ popped from the queue, a child node $t' \in \mathrm{chd}(t)$ is added to the predicted type set if $s(t') > s(t)$, corresponding to the ranking relation in Equation 8 that the model has learned.
Here we only add the top-$k$ scoring children to the queue, to prevent over-generating types. This can also be used to enforce the single-path property (setting $k = 1$) if the dataset is single-path. For each level $l$ in the type hierarchy, we limit the branching factor (the number of allowed children) to $k_l$. The algorithm is listed in Algorithm 1, where the top-$k$ function selects the $k$ highest-scoring elements of a set with respect to the scoring function $s$.
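A sketch of this coarse-to-fine decoder (our own rendering, assuming the root "/" is given a score, e.g. 0, and `max_branch` maps each level to its branching factor $k_l$):

```python
from collections import deque

def decode(scores, children, max_branch):
    """Coarse-to-fine decoding: starting from the root "/", output a child t2
    of t only if s(t2) > s(t), keeping at most max_branch[level] children per
    node. By construction, a subtype is output only if its parent is output."""
    predicted, queue = set(), deque(["/"])
    level = {"/": 0}
    while queue:
        t = queue.popleft()
        cands = [c for c in children.get(t, ()) if scores[c] > scores[t]]
        cands.sort(key=lambda c: scores[c], reverse=True)
        for c in cands[:max_branch.get(level[t] + 1, 1)]:
            predicted.add(c)
            level[c] = level[t] + 1
            queue.append(c)
    return predicted
```

Setting every branching factor to 1 enforces single-path decoding.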
4.5 Subtyping Relation Constraint
Each type $t$ in the ontology is assigned a type embedding $u_t$. We further exploit the binary subtyping relation $<:$ on the types. Trouillon et al. (2016) proposed the relation embedding method ComplEx, which works well for anti-symmetric and transitive relations such as subtyping. It has been employed in FET before: in Murty et al. (2018), ComplEx is added to the loss to regularize the type embeddings. ComplEx operates in the complex space, so we use the natural isomorphism between real and complex spaces to map each type embedding into complex space (the first half of the embedding vector becomes the real part, and the second half the imaginary part): a real vector $u \in \mathbb{R}^{2m}$ maps to $\tilde{u} = u_{1:m} + i\, u_{m+1:2m} \in \mathbb{C}^m$.
We learn a single relation embedding $\tilde{w} \in \mathbb{C}^m$ for the subtyping relation. Given types $t_1$ and $t_2$, the subtyping statement $t_1 <: t_2$ is modeled with the following scoring function:

$$s_{<:}(t_1, t_2) = \mathrm{Re}\big( \tilde{w}^\top ( \tilde{u}_{t_1} \odot \overline{\tilde{u}_{t_2}} ) \big)$$

where $\odot$ is the element-wise product and $\overline{\tilde{u}}$ is the complex conjugate of $\tilde{u}$. If $t_1 <: t_2$ then $s_{<:}(t_1, t_2) > 0$; and vice versa, if $t_1 \not<: t_2$ then $s_{<:}(t_1, t_2) < 0$.
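A numpy sketch of this ComplEx-style score over real-valued embeddings, using the real-to-complex split described above (function names are ours):

```python
import numpy as np

def complex_subtype_score(t1, t2, w):
    """ComplEx score for 't1 <: t2'. Real vectors of even dimension are viewed
    as complex by splitting in half: first half real part, second half
    imaginary part."""
    def as_complex(v):
        h = len(v) // 2
        return v[:h] + 1j * v[h:]
    c1, c2, r = as_complex(t1), as_complex(t2), as_complex(w)
    # Re( w . (t1 * conj(t2)) ), element-wise product then sum
    return float(np.real(np.sum(r * c1 * np.conj(c2))))
```

Because of the conjugate, the score is not symmetric in its arguments, which is what lets the model represent the anti-symmetric subtyping relation.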
Given an instance with positive type set $Y$, for each positive type $t \in Y$ we learn the following relations: $t <: \mathrm{par}(t)$ holds, while $t' <: \mathrm{par}(t)$ does not hold for negative types $t'$ drawn from the siblings of $t$ and the siblings of $\mathrm{par}(t)$.
Translating these relation constraints into a binary classification problem ("is or is not a subtype") under a primal SVM, we get the hinge loss:

$$L_{\text{sub}} = \sum_{t \in Y} \Big( \big[1 - s_{<:}(t, \mathrm{par}(t))\big]_+ + \sum_{t'} \big[1 + s_{<:}(t', \mathrm{par}(t))\big]_+ \Big)$$

where $t'$ ranges over the negative samples above.
This differs from Murty et al. (2018), where a binary cross-entropy loss on randomly sampled pairs is used. Our experiments show that the loss in subsection 4.5 performs better than the cross-entropy version, owing to the structure of the training pairs: we use siblings and siblings of parents as negative samples (types close to the positive parent type), hence we train with more competitive negative samples.
4.6 Training and Validation
Our final loss is a combination of the hierarchical ranking loss and the subtyping relation constraint loss, with regularization:

$$L = L_{\text{rank}} + \mu L_{\text{sub}} + \beta \lVert \Theta \rVert_2^2$$

The AdamW optimizer Loshchilov and Hutter (2019) is used to train the model, as it has been shown to be superior to the original Adam under L2 regularization. The hyperparameters $\alpha$ (ratio of margin above/below the threshold), $\mu$ (weight of the subtyping relation constraint), and $\beta$ (L2 regularization coefficient) are tuned.
At validation time, we tune the maximum branching factor $k_l$ for each level $l$. These parameters trade off precision against recall at each level and prevent over-generation (which we observed in some cases). All hyperparameters are tuned so that models achieve maximum micro $F_1$ scores (see subsection 5.4).
The AIDA Phase 1 practice dataset for hierarchical entity typing comprises 297 documents from LDC2019E04 / LDC2019E07, and the evaluation dataset is from LDC2019E42 / LDC2019E77. We take only the English part of the data, and use the practice dataset as train/dev and the evaluation dataset as test. The practice dataset comprises 3 domains, labeled R103, R105, and R107. Since the evaluation dataset is out-of-domain, we use the smallest domain R105 as dev, and the remaining R103 and R107 as train.
The AIDA entity dataset has a 3-level ontology, termed type, subtype, and subsubtype. A mention can have only one label per level, hence the dataset is single-path, and the branching factors for the three levels are set to $(1, 1, 1)$.
Weischedel and Brunstein (2005) labeled a portion of the one million word Penn Treebank corpus of Wall Street Journal texts (LDC95T7) using a two-level hierarchy, resulting in the BBN Pronoun Coreference and Entity Type Corpus. We follow the train/test split of Ren et al. (2016b) and the train/dev split of Zhang et al. (2018).
Ling and Weld (2012) sampled a dataset from Wikipedia articles and news reports. Entity mentions in these texts are mapped to a 113-type ontology derived from Freebase Bollacker et al. (2008). Again, we follow the data split of Shimaoka et al. (2017).
The statistics of these datasets and their accompanying ontologies are listed in Table 1.
To best compare to recent prior work, we follow Lin and Ji (2019) in keeping the ELMo encodings of words fixed and not updated. We use all 3 layers of ELMo output, so the initial embedding has dimension 3072. The type embedding dimensionality and the initial learning rate are listed with the other hyperparameter choices; the batch size is 256.
Hyperparameter choices are tuned on dev sets and are listed in Table 1. We employ early stopping, choosing the model that yields the best micro $F_1$ score on dev sets.
We compare our approach to major prior work in FET that is capable of multi-path entity typing. (Zhang et al. (2018) included document-level information in their best results; for fair comparison, we use their results without document context, as reported in their ablation tests.) For AIDA, since to our knowledge there is no prior work on this dataset, we also implemented multi-label classification as a set of binary classifiers (similar to Lin and Ji (2019)) as a baseline, using our mention feature extractor. The results are shown in Table 2 as "Multi-label".
We follow prior work and use strict accuracy (Acc), macro $F_1$ (MaF), and micro $F_1$ (MiF) scores. For each instance $i$, we denote the gold type set as $Y_i$ and the predicted type set as $\hat{Y}_i$. Strict accuracy is the ratio of instances where $Y_i = \hat{Y}_i$. Macro $F_1$ is the average of the $F_1$ scores between $Y_i$ and $\hat{Y}_i$ over all instances, whereas micro $F_1$ counts total true positives, false negatives, and false positives globally.
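These three metrics can be sketched directly from parallel lists of gold and predicted type sets (a sketch; the function name is ours):

```python
def strict_macro_micro(golds, preds):
    """golds, preds: parallel lists of type sets.
    Returns (strict accuracy, macro F1, micro F1)."""
    n = len(golds)
    strict = sum(g == p for g, p in zip(golds, preds)) / n

    def f1(tp, fp, fn):
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

    # macro F1: per-instance F1 between gold and predicted sets, then averaged
    macro = sum(f1(len(g & p), len(p - g), len(g - p)) for g, p in zip(golds, preds)) / n
    # micro F1: pool true/false positives and false negatives over all instances
    tp = sum(len(g & p) for g, p in zip(golds, preds))
    fp = sum(len(p - g) for g, p in zip(golds, preds))
    fn = sum(len(g - p) for g, p in zip(golds, preds))
    return strict, macro, f1(tp, fp, fn)
```

Strict accuracy is the harshest of the three, since a single wrong or missing type zeroes out the whole instance.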
We also investigate per-level accuracies on AIDA. The accuracy at level $l$ is the ratio of instances whose predicted type set and gold type set are identical at level $l$. If there is no type output at a level, we pad with other to create a dummy type at that level, e.g. /person/other/other. Hence the accuracy of the last level (in AIDA, level 3) equals the strict accuracy.
5.5 Results and Discussions
All our results are reported under the two conditions regarding partial type paths: exclusive or undefined. Results on the AIDA dataset are shown in Table 2. Our model under the exclusive case outperforms the multi-label classification baseline on all metrics.
Of the 187 types specified in the AIDA ontology, the train/dev set only covers 93 types. The test set covers 85 types, of which 63 are seen types. We could perform zero-shot entity typing by initializing a type’s embedding using the type name (e.g. /fac/structure/plaza) together with its description (e.g. “An open urban public space, such as a city square”) as is designated in the data annotation manual. We leave this as future work.
Results on BBN, OntoNotes, and FIGER can be found in Table 3. Across the 3 datasets, our method achieves state-of-the-art performance on strict accuracy and micro $F_1$, and state-of-the-art or comparable performance on macro $F_1$, compared to prior models, e.g. Lin and Ji (2019). In particular, our method improves strict accuracy substantially (4%-8%) across these datasets, showing that our decoder is better at outputting exactly correct type sets.
Partial type paths: exclusive or undefined?
Interestingly, we found that for AIDA and FIGER, partial type paths are better considered exclusive, whereas for BBN and OntoNotes, considering them undefined leads to better performance. We hypothesize that this comes from how the data is annotated: the annotation manual may contain directives as to whether to interpret partial type paths as exclusive or undefined, or the data may be non-exhaustively annotated, leading to undefined partial types. We advocate for careful investigation into partial type paths in future experiments and data curation.
We compare our best model against variants with individual components removed, to study the gain from each component. Starting from the better of the two settings (exclusive and undefined), we report the performance of (i) removing the subtyping constraint described in subsection 4.5; (ii) substituting the multi-level margins of Equation 7 with a "flat" margin, i.e., margins on all levels set to 1. These results are shown in Table 2 and Table 3 below our best results; they show that multi-level margins and subtyping relation constraints offer orthogonal improvements to our models.
We identify common patterns of errors, coupled with typical examples:
Confusing types: In BBN, our model outputs /gpe/city when the gold type is /location/region for “… in shipments from the Valley of either hardware or software goods.” These types are semantically similar, and our model failed to discriminate between these types.
Incomplete types: In FIGER, given instance “… multi-agency investigation headed by the U.S. Immigration and Customs Enforcement ’s homeland security investigations unit”, the gold types are /government_agency and /organization, but our model failed to output /organization.
Focusing on only parts of the mention: In AIDA, given the instance "… suggested they were the work of Russian special forces assassins out to blacken the image of Kiev's pro-Western authorities", our model outputs /org/government whereas the gold type is /per/militarypersonnel. Our model focused on the "Russian special forces" part but ignored the "assassins" part. Better mention representation is required to correct this, possibly by introducing type-aware mention representation; we leave this as future work.
We proposed (i) a novel multi-level learning-to-rank loss function that operates on a type tree, and (ii) an accompanying coarse-to-fine decoder, to fully embrace the ontological structure of types for hierarchical entity typing. Our approach achieved state-of-the-art performance across various datasets, with substantial improvement (4-8%) in strict accuracy.
Additionally, we advocate for careful investigation into partial type paths: their interpretation relies on how the data is annotated, and in turn, influences typing performance.
This research benefited from support by the JHU Human Language Technology Center of Excellence (HLTCOE), and DARPA AIDA. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government.
- Abhishek et al. (2017). Fine-grained entity type classification by jointly learning representations and label embeddings. In EACL 2017, pp. 797–807.
- Bollacker et al. (2008). Freebase: a collaboratively created graph database for structuring human knowledge. In SIGMOD 2008, pp. 1247–1250.
- Choi et al. (2018). Ultra-fine entity typing. In ACL 2018, pp. 87–96.
- Dai et al. (2019). Improving fine-grained entity typing with entity linking. In EMNLP-IJCNLP 2019, pp. 6209–6214.
- Del Corro et al. (2015). FINET: context-aware fine-grained named entity typing. In EMNLP 2015, pp. 868–878.
- Devlin et al. (2019). BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT 2019, pp. 4171–4186.
- Gardner et al. (2018). AllenNLP: a deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pp. 1–6.
- Gillick et al. (2014). Context-dependent fine-grained entity type tagging. CoRR abs/1412.1820.
- Gupta et al. (2017). Entity linking via joint encoding of types, descriptions, and context. In EMNLP 2017, pp. 2681–2690.
- Han et al. (2018). OpenKE: an open toolkit for knowledge embedding. In EMNLP 2018: System Demonstrations, pp. 139–144.
- Jin et al. (2019). Fine-grained entity typing via hierarchical multi graph convolutional networks. In EMNLP-IJCNLP 2019, pp. 4968–4977.
- Joachims (2002). Optimizing search engines using clickthrough data. In KDD 2002, pp. 133–142.
- Lin and Ji (2019). An attentive fine-grained entity typing model with latent type representation. In EMNLP-IJCNLP 2019, pp. 6198–6203.
- Ling and Weld (2012). Fine-grained entity recognition. In AAAI 2012, pp. 94–100.
- Loshchilov and Hutter (2019). Decoupled weight decay regularization. In ICLR 2019.
- Murty et al. (2018). Hierarchical losses and new resources for fine-grained entity typing and linking. In ACL 2018, pp. 97–109.
- Onoe and Durrett (2019). Learning to denoise distantly-labeled data for entity typing. In NAACL-HLT 2019, pp. 2407–2417.
- Peters et al. (2018). Deep contextualized word representations. In NAACL-HLT 2018, pp. 2227–2237.
- Raiman and Raiman (2018). DeepType: multilingual entity linking by neural type system evolution. In AAAI 2018, pp. 5406–5413.
- Ren et al. (2016a). AFET: automatic fine-grained entity typing by hierarchical partial-label embedding. In EMNLP 2016, pp. 1369–1378.
- Ren et al. (2016b). Label noise reduction in entity typing by heterogeneous partial-label embedding. In KDD 2016, pp. 1825–1834.
- Shimaoka et al. (2016). An attentive neural architecture for fine-grained entity type classification. In AKBC@NAACL-HLT 2016, pp. 69–74.
- Shimaoka et al. (2017). Neural architectures for fine-grained entity type classification. In EACL 2017, pp. 1271–1280.
- Trouillon et al. (2016). Complex embeddings for simple link prediction. In ICML 2016, pp. 2071–2080.
- Weischedel and Brunstein (2005). BBN pronoun coreference and entity type corpus. Philadelphia: Linguistic Data Consortium.
- Weston and Watkins (1999). Support vector machines for multi-class pattern recognition. In ESANN 1999, pp. 219–224.
- Xiong et al. (2019). Imposing label-relational inductive bias for extremely fine-grained entity typing. In NAACL-HLT 2019, pp. 773–784.
- Xu and Barbosa (2018). Neural fine-grained entity type classification with hierarchy-aware loss. In NAACL-HLT 2018, pp. 16–25.
- Yaghoobzadeh and Schütze (2015). Corpus-level fine-grained entity typing using contextual information. In EMNLP 2015, pp. 715–725.
- Yogatama et al. (2015). Embedding methods for fine grained entity type classification. In ACL 2015, Volume 2: Short Papers, pp. 291–296.
- Yosef et al. (2012). HYENA: hierarchical type classification for entity names. In COLING 2012: Posters, pp. 1361–1370.
- Zhang et al. (2018). Fine-grained entity typing through increased discourse context and adaptive classification thresholds. In *SEM@NAACL-HLT 2018, pp. 173–179.