Merging datasets is a key operation for data analytics. A frequent requirement for merging is joining across columns that have different surface forms for the same entity. For instance, the name of a person might be represented as Douglas Adams, Douglas Noel Adams, D. Adams or Adams, Douglas. Similarly, ontology alignment can require recognizing distinct surface forms of the same entity, especially when ontologies are independently developed. This problem occurs for many entity types such as people’s names, company names, addresses, product descriptions, conference venues, or even people’s faces. Data management systems, however, have largely focused on equi-joins, where string or numeric equality determines which rows should be joined, because such joins are efficient.
We propose a different approach to joining different surface representations of the same entity, inspired by recent advances in deep learning. Our approach depends on (a) mapping surface forms into sets of vectors such that forms for the same entity are closest in vector space, and (b) indexing these vectors to find the forms that can potentially be joined together. The approach is general, in the sense that once a model has been built for a specific semantic type (e.g. people, companies or faces) it can be used for joining any two datasets which share that semantic type. It is also efficient because indexing uses space partitioning algorithms (as in approximate nearest neighbor search) to find surface forms that are potentially joinable, thus eliminating large parts of the vector space from consideration. Further, nearest neighbor algorithms have been applied to billions of vectors [Johnson, Douze, and Jégou2017], so the approach is practical for most datasets.
To test the feasibility of these ideas, we used Wikidata as ground truth to build models for datasets with 1.1M people’s names (about 200K identities) and 130K company names (70K identities). The problem of mapping vectors for the same entity closer in vector space than vectors for other entities is known in the literature as deep metric learning. Deep metric learning is known to be a difficult problem, as studied in the space of face recognition and person re-identification [Schroff, Kalenichenko, and Philbin2015], [Yang, Zhou, and Wang2018], [Chen et al.2018]. As a result, there is a significant amount of research on two aspects of training these networks: (a) how to choose samples for efficient learning, and (b) what constitutes a good loss function. In building models for entity names, we had to adapt these techniques for triplet selection and loss functions because matching entity names has different characteristics, as we describe below.
As in face recognition, our system for metric learning is built by training a so-called ‘siamese triplet’ network to learn to produce a small distance estimate for two surface forms for the same entity (between an arbitrarily chosen anchor and positive), and a large distance estimate for surface forms of different entities (an anchor and a negative). A key challenge in effective training of such networks is how to select negative pairs for training, because one cannot exhaustively show all negative pairs to the network. In prior work, for instance, this problem is solved by ‘triplet mining’, where so-called ‘semi-hard’ negatives are gleaned after an all-pairs comparison of input vectors in a batch, e.g., [Schroff, Kalenichenko, and Philbin2015]. ‘Semi-hard’ negatives are negatives that are further away from the anchor than positives, but not by a sufficient margin. The idea of focusing on semi-hard negatives is to avoid mining noisy regions of the embedding space. However, for matching across entity names, most positive forms for the entity names are actually substantially further away from the anchor than are negatives, as we show empirically in this paper. In other words, ‘hard negatives’ dominate the space, and are the norm. Our approach to the problem of triplet mining was therefore to use an approximate nearest neighbors algorithm to choose negatives for training that were in the nearest neighbor set, which means that most of the time, our negatives are ‘hard negatives’, where the negative is closer to the anchor than positives. This approach has three key benefits. First, it lays out the entities based on their input vectors, and thus allows an efficient gathering of all nearest neighbors without a quadratic comparison of vectors in a batch to determine suitable negatives. Second, it provides a baseline against which one can objectively measure the effects of training.
Third, it examines whether focusing on hard negatives is really detrimental to deep metric learning, at least for learning to map entity names in vector space. As we show empirically in this paper, this technique is better for building models that are suitable for use in joins than semi-hard triplet mining when the dataset is dominated by hard negatives. For easier datasets, the nearest neighbors approach was just as good as semi-hard training.
We also investigated the effect of using multiple local loss functions which have been proposed in the literature for improving deep metric learning, e.g., the triplet loss [Schroff, Kalenichenko, and Philbin2015], improved loss [Zhang, Gong, and Wang2016], and angular loss [Wang et al.2017] functions. For the problem of building models for datasets dominated by hard negatives, we found that an adaptation of the triplet loss function proposed in [Schroff, Kalenichenko, and Philbin2015] was better than three other functions that have been used in the literature. All code (https://github.com/yehudagale/fuzzyjoiner) and models (https://drive.google.com/open?id=1zivCTGkq2_AkfjGLHMnlehzTmYUwcQ9e) for this paper are available.
Extensions in data management systems for handling joins typically use string similarity algorithms such as edit-distance, Jaro-Winkler and TF-IDF; e.g., [Cohen, Ravikumar, and Fienberg2003]. String matching algorithms often do not work for merging different forms of the same entity because valid transformations of entity names can yield very different strings. More recently, data driven approaches have emerged as a powerful alternative for merging data. Data driven approaches mine patterns to determine the ‘rules’ for joining a given entity type. One example of such an approach is illustrated in [He, Ganjam, and Chu2015], which determines which cell values should be joined based on whether those cell values co-occur on the same row across disparate tables in a very large corpus of data. Another example is work by [Zhu, He, and Chaudhuri2017] where program synthesis techniques are used to learn the right set of transformations needed to perform the entity matching operation. Our approach for merging datasets is much more general than either approach because the mapping function generalizes the set of transformations that are allowed across surface forms of an entity, even if they cannot be directly expressed as program transforms.
The idea of building joint embeddings for merging datasets followed by nearest neighbors search has been applied recently to the problem of linking relational tuple embeddings with embeddings of other relational tuples or unstructured text [Neves and Bordawekar2018], [Kilias et al.2018]. For the problem of linking tuples, each model that is learnt is specific to the database it was trained on. Our focus is on techniques that can be used to develop general purpose embedding models for merging alternate surface forms of key entities. Once such models are built, they can be applied to joining any two datasets that share that semantic type.
Metric learning is a well studied problem in the face recognition literature, e.g., [Schroff, Kalenichenko, and Philbin2015], with a rich literature in triplet mining techniques. The closest approach to ours is the use of nearest neighbors algorithms for semi-hard triplet mining [Kumar et al.2017]. For semi-hard triplet mining, one cannot look at fixed neighborhood sizes in building triplets. If all the positives are further away from the anchor than the negatives in a given neighborhood of size k, then k needs to be expanded until a neighborhood size is found that has the right characteristics. Our approach of including ‘hard negatives’ means we can use a fixed k to generate samples. An additional benefit is that, at least for certain types of datasets, we show that metric learning with hard negatives is more effective than semi-hard mining.
The study of loss function effectiveness in metric learning also has a rich literature, with two basic types of loss functions that have been proposed: (a) local loss functions such as triplet loss [Schroff, Kalenichenko, and Philbin2015], improved loss [Zhang, Gong, and Wang2016] and angular loss [Wang et al.2017], and (b) loss functions that operate on a more global level across a batch of training examples [Sohn2016], [Song et al.2016], [Song et al.2017]. Since our triplet selection is global, rather than batch based, we did not see the value of using global loss functions.
These three input character embedding vectors were fed to three identical networks that share the same weights. Weight sharing ensures that the networks learn the same mapping function. In our implementation, each of the three networks had 4 stacked layers of 128-unit Gated Recurrent Units (GRUs) to capture the sequential nature of the input. GRUs are a type of recurrent network [Cho et al.2014] where each hidden unit updates its weights at a specific step in the sequence based on the current input and the value of the hidden unit from the prior step. For name and textual data, positional information is critical, so we used GRUs instead of the convolutional neural networks (CNNs) that have been traditionally used in metric learning for face and image recognition.
The output of the last layer, shown in Figure 1 as a dark layer, is the vector embedding for the inputs. These are fed to two layers which compute the Euclidean distance between the anchor and the positive (positive distance), and between the anchor and the negative (negative distance). Conceptually, there are two objectives in metric learning: one to minimize positive distances, and the other to maximize negative distances. As described below, this dual objective can be achieved by different loss functions. We restrict ourselves to a discussion of some of the more popular loss functions that are local in nature (i.e., they only look at a single triplet).
Let f(x) represent an embedding for an entity name, and f(a), f(p), f(n) reflect the vector embeddings of the anchor, positive and negative, respectively. We investigated four different loss functions, three of which have been used in prior face recognition literature, to explore their effectiveness for the entity metric learning problem.
For face recognition, Schroff et al. [Schroff, Kalenichenko, and Philbin2015] propose a triplet loss function where the positive distance for each triplet in the set of triplets is separated from the negative distance by a margin of α, as shown in Figure 1(a) with the arrow pushing toward α. For each of the N triplets, L_i reflects the loss for a given triplet as follows:
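Using f(a_i), f(p_i), f(n_i) for the anchor, positive and negative embeddings of the i-th triplet, and α for the margin, the standard form of this per-triplet loss is:

```latex
L_i = \max\left( \lVert f(a_i) - f(p_i) \rVert_2^2 \;-\; \lVert f(a_i) - f(n_i) \rVert_2^2 \;+\; \alpha,\; 0 \right)
```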
and the loss function that is minimized across all N triplets is given by
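In the standard form, this is simply the sum of the per-triplet losses over all N triplets:

```latex
L = \sum_{i=1}^{N} L_i
```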
Note that in this formulation, it is assumed that the embedding is normalized so that ||f(x)|| = 1, because this normalization is robust across variations in illumination and contrast. The value of α is a hyper-parameter that [Schroff, Kalenichenko, and Philbin2015] set to 0.2.
An improvement over the triplet loss function of [Schroff, Kalenichenko, and Philbin2015] is proposed for the recognition of faces in videos by [Zhang, Gong, and Wang2016]. Conceptually, this function, which we refer to as ‘improved loss’ in this paper, considers the distance from the positive to the negative as well as the distance from the anchor to the negative, and tries to push both toward the margin. We show this in Figure 1(b) with two push arrows. In addition, it corrects the fact that the original triplet loss function places no constraint on how small the positive distance should be. For instance, it is possible for the anchor and positive to form a large cluster with a large intra-class distance. The equations that achieve these constraints are described below. Equation 4 tries to reduce intra-class distance by ensuring it is less than a threshold. Equation 5 tries to maximize inter-class distance by ensuring that the distances from both the anchor and the positive to the negative are taken into account. Equation 6 balances inter-class constraints with intra-class constraints with the parameter λ.
Wang et al. [Wang et al.2017] define a novel angular loss function which is not based on pairwise distances, but rather on the angles of the triangle formed by the anchor, positive and negative of the triplet. Conceptually, they point out that since the anchor and the positive belong to the same class, the angle at the negative vertex, formed with the anchor and positive, should be as small as possible within that triangle. Their loss function is an attempt to minimize this angle, as defined in the equations below. The rough idea is that moving the positive nearer and the negative further away each reduce the angle, which we illustrate in Figure 1(c) with the angle at the negative vertex shrinking.
The inspiration for adapting the original triplet loss is to separate the effects of the positive and negative distances as much as possible. Rather than subtracting the negative distance from the positive one, we want to negate the negative distance and then add the two. To approximate this, we subtract the negative distance from a margin, and use 0 instead if that difference is negative. We then combine the squared distances as usual, as shown in Figure 9. Note that we did not normalize embeddings because learning was worse with normalization.
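A minimal sketch of this adapted loss under the description above (the function name and exact argument shapes are illustrative, not taken from the released code):

```python
import math

def adapted_triplet_loss(anchor, positive, negative, margin=10.0):
    """Adapted loss sketch: keep the squared positive distance as usual, but
    replace the negative term with max(0, margin - d_n), squared, so the two
    terms add rather than subtract."""
    d_p = math.dist(anchor, positive)  # anchor-positive Euclidean distance
    d_n = math.dist(anchor, negative)  # anchor-negative Euclidean distance
    return d_p ** 2 + max(0.0, margin - d_n) ** 2
```

Once the negative is pushed beyond the margin, only the positive distance contributes to the loss, which is the intended separation of the two effects.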
As discussed earlier, deep metric learning for joins is a difficult problem for neural networks because it requires discriminating each anchor from all the other hard negatives. We first describe a popular batch-based approach from the face recognition literature to contrast it with our mechanism for triplet selection.
Batch Based Triplet Selection
In Schroff et al.’s work [Schroff, Kalenichenko, and Philbin2015], they described a mini-batch based triplet selection mechanism for training that has dominated the literature. Conceptually, sampling the right triplets for fast network learning requires sampling a set of hard positives and a set of hard negatives, where a hard positive is defined as argmax_p ||f(a) − f(p)||², where p ranges over all positives for the anchor, and a hard negative is defined as argmin_n ||f(a) − f(n)||², where n ranges over all negatives.
However, it is infeasible to compute these values for the entire dataset. Calculating hard positives is easy because the number of positives is normally small. Calculating hard negatives is not possible for all but small datasets. As a result, triplets can be generated by the mechanism illustrated in Figure 3. Instead of focusing on finding hard positives, they pair every possible positive in the sample, shown in the right panel of the figure, with selected negatives, since the set of positives is usually quite small. Furthermore, for negative examples, they try to select semi-hard negatives instead of hard negatives, where a semi-hard negative has the property that ||f(a) − f(n)||² > ||f(a) − f(p)||², but by a margin smaller than α, as shown in Figure 3.
Metric Based Triplet Selection
For the problem of joins, we ideally want the anchor and all of its positives to be clustered closest together and separated from the nearest negatives as clearly as possible. Approximate nearest neighbor (ANN) indexes are highly efficient methods for selecting the top-k neighbors of a given vector by Euclidean distance, cosine similarity or other distance metrics. They are based on space partitioning algorithms, such as k-d trees, where the vector space is iteratively bisected into two disjoint regions. The average complexity of querying for a neighbor is O(log N), where N is the number of vectors in the dataset. Implementations now exist for fast, memory-efficient ANN indexes that scale up to a billion vectors [Johnson, Douze, and Jégou2017] using techniques to compress vectors in memory efficiently. In our work, we used the Annoy ANN implementation (https://github.com/spotify/annoy), which is based on refinements to the k-d tree ideas described in [Muja and Lowe2014].
Assuming one has the entire dataset indexed in an ANN index, the problem of triplet selection can be simplified by asking the ANN index for the top k nearest neighbors of an anchor, where k is given by the number of triplets that one desires to generate for each anchor. As in earlier work, selection of hard positives is not relevant because all positive data should be used to teach the network the right function. The negatives selected are simply the nearest neighbors that are negatives. There is no explicit attempt to filter out hard negatives in this approach. The overall idea is that the negatives that appear in the nearest neighbor set at input are in fact the most important elements for the network to learn to discriminate from positives for a join. Focusing on these elements, regardless of whether they are hard or semi-hard, should lead to better discrimination for joins. An ANN-based strategy also provides an important baseline to assess what learning, if any, was performed by the neural network in mapping input vectors to a different space.
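This selection strategy can be sketched as follows, with brute-force search standing in for the ANN index (a real implementation would query Annoy or a similar index instead of scanning all vectors), and with illustrative names throughout:

```python
import math

def select_triplets(anchor_id, embeddings, entity_of, k):
    """Build triplets for one anchor: pair every positive with each negative
    that falls in the anchor's k-nearest-neighbor set.

    embeddings: id -> vector; entity_of: id -> entity label."""
    anchor = embeddings[anchor_id]
    # Brute-force stand-in for an ANN query: k nearest ids by Euclidean distance.
    neighbors = sorted(
        (i for i in embeddings if i != anchor_id),
        key=lambda i: math.dist(anchor, embeddings[i]),
    )[:k]
    # All positives are used; negatives are whatever negatives the neighborhood contains.
    positives = [i for i in embeddings
                 if i != anchor_id and entity_of[i] == entity_of[anchor_id]]
    negatives = [i for i in neighbors if entity_of[i] != entity_of[anchor_id]]
    return [(anchor_id, p, n) for p in positives for n in negatives]
```

Note that no hardness filtering happens here: whichever negatives the index returns, hard or semi-hard, become training triplets.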
Applying deep learning models to joins
Assuming we have deep learning models that are trained to produce the right distance estimates for alternate surface forms of an entity, the models can be used for a join as follows. For each cell value in the two columns to be joined, obtain vector embeddings from the last layer of the network. Note that although the siamese network has three separate networks, each network is in fact identical to the other two because they share weights. For the left column cell values, vector embeddings are inserted into an approximate nearest neighbors index. For each cell value in the right column, vector embeddings are used as ‘query vectors’ to query the approximate nearest neighbors index. In our context, merging the datasets would involve joining the top k rows in the left table that are ‘closest’ in distance to each cell value in the right table. Note that the choice of k clearly has a direct effect on the tradeoff between precision and recall, but for most practical uses of join, k is usually very small (typically 1). This has implications for what metrics we can use to evaluate joins, as we describe in our evaluation.
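The join procedure above can be sketched as follows. Brute-force search stands in for the ANN index, and `embed` stands for the trained mapping from surface form to vector (both names are illustrative assumptions):

```python
import math

def fuzzy_join(left_values, right_values, embed, k=1):
    """Join each right-column value to the k left-column values whose
    embeddings are nearest to it. A real system would insert the left
    embeddings into an ANN index rather than scanning them."""
    left_vecs = {v: embed(v) for v in left_values}
    joined = {}
    for rv in right_values:
        query = embed(rv)  # query vector for the right-column cell
        joined[rv] = sorted(left_vecs,
                            key=lambda lv: math.dist(query, left_vecs[lv]))[:k]
    return joined
```

With k=1, each right-column value is matched to its single nearest left-column value, reflecting the most common practical setting.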
Our benchmarks were derived from Wikidata. Specifically, we used the ‘also known as’ property from Wikidata to get alternate forms for the same entity name for people as well as companies. For company names, we augmented the names and surface forms in Wikidata with data from the SEC (https://www.sec.gov/dera/data/financial-statement-and-notes-data-set.html), which has former and more recent names for companies. There were 213,106 names for people from the specific dump we extracted, and 70,946 names for companies. The extracted files and the cleansing code are available on our repository. The extracted files however contained significant noise that we cleaned up programmatically. We describe the cleansing rules for people and for companies separately because they were somewhat different. In the case of people’s names, we also augmented the data so the system could learn some common rules that define variants of a person’s name. This was not possible for company names.
Cleansing people’s names
Wikidata has a number of historical figures whose labels are not really personal names (e.g. Queen Elizabeth, Pope Leo). If we detected a title in the name referring to royalty, or qualifiers or Roman numerals which strongly indicated royalty, we dropped the name. We also removed punctuation such as ’…’, and anything that was placed in parentheses, because they were not part of the name (e.g. a qualification such as the son of Jacob might appear in parentheses after a name). Although we got the extract for English, there were frequently names in Chinese, Korean, Cyrillic, etc. We removed these and restricted ourselves to names in ISO-8859-1 unicode. All punctuation marks such as ’,’, ’-’, ’.’ etc. were retained for names because they are strong indicators of how a name needs to be parsed. People in Wikidata have the main name for the entity, with aliases for the person specified in a different property. We made sure that every alternate form had at least one name part in common with the main name to rule out ’nicknames’ (e.g. ‘Father of the Nation’ for George Washington). We also dropped cases where the last name of a person was different (usually because a woman’s name changed after marriage).
Cleansing company names
As with people’s names, we removed any text in parentheses because it usually was a qualification (e.g., IBM (company)). We restricted ourselves to unicode ISO-8859-1. The SEC data had a lot of strange company names that could be described with the regex pattern T[0-9]+ or [0-9]+, and we dropped these. We tried to ensure we included names that shared some subset of characters with the main name, ensuring we would not drop acronyms when possible. The check for acronyms tested if any of the initial letters of each name part occurred in the name.
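The acronym check can be sketched as follows (an illustrative helper under one reading of the rule — each name part's initial letter must occur in the candidate alias — not the exact cleansing code):

```python
def shares_acronym(alias, main_name):
    """Keep an alias like 'IBM' for 'International Business Machines' by
    testing whether the initial letter of each part of the main name
    occurs in the alias."""
    initials = [part[0].lower() for part in main_name.split() if part]
    alias_lower = alias.lower()
    return all(c in alias_lower for c in initials)
```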
Augmenting people’s names
In many cases, we had no alternate forms for a person’s name even if we did have their main name. We augmented the data with the following rules. If the main name for the person in Wikidata had two parts, we created new surface forms as follows: (a) Last Name , First Name, (b) First Name Initial . Last Name, and (c) Last Name , First Name Initial.
If the main name in Wikidata had three parts, we created these additional surface forms in addition to the ones listed above: (a) First Name Middle Name Initial Last Name, (b) Last Name , First Name Middle Name, and (c) Last Name , First Name Middle Initial .
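The augmentation rules above can be sketched as a single helper (a sketch, not the released code; for three-part names the two-part rules are assumed to apply to the first and last parts, and the exact spacing around punctuation is illustrative):

```python
def augment_name(main_name):
    """Generate alternate surface forms from a two- or three-part main name."""
    parts = main_name.split()
    forms = []
    if len(parts) == 2:
        first, last = parts
        forms = [
            f"{last} , {first}",       # Last , First
            f"{first[0]} . {last}",    # First-initial . Last
            f"{last} , {first[0]}",    # Last , First-initial
        ]
    elif len(parts) == 3:
        first, middle, last = parts
        forms = [
            f"{last} , {first}",
            f"{first[0]} . {last}",
            f"{last} , {first[0]}",
            f"{first} {middle[0]} {last}",      # First Middle-initial Last
            f"{last} , {first} {middle}",       # Last , First Middle
            f"{last} , {first} {middle[0]} .",  # Last , First Middle-initial .
        ]
    return forms
```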
After cleansing, if a name had no alternate surface forms, it was dropped. This resulted in 40,555 company names with an average of 2.2 names each, and 195,422 people’s names with an average of 4.69 names each. Using the triplet selection algorithm, we created 10.9 million and 1.04 million triplets for people and companies, respectively, for training. The data was then split with a 60-20-20 ratio to provide training, validation and test data respectively. Each run was conducted with a different random split to ensure generalization of results.
As a baseline, we measured how the anchors, positives and negatives were laid out in vector space based on character embeddings alone. This gives us a measure of how difficult the problem is for the neural network to learn. We indexed all the vector embeddings (regardless of test or train) into a nearest neighbors index using the Spotify ANNOY (https://github.com/spotify/annoy) implementation. We then queried the index for the k nearest neighbors of this set, varying k so it was either 20, 100, 500, or 1500 neighbors. Our primary interest was in the recall rate of positives prior to any training, to establish the baseline before training. We also measured the nature of hard negatives as we varied the neighborhoods; i.e., what is the mean distance of negatives from the anchor, compared to the positive distances, as we increased neighborhood sizes. Figure 5 shows the results for the people data. First, the recall rate of positives in the nearest neighbor set was very low at 3%, and it increased only to 6% at a neighborhood size of 1500. The difficulty of the problem of reconciling people’s names is further highlighted by the distance data. The mean distance of positives from the anchor was 9.05, with a standard deviation of 3.08. The mean distance of negatives from the anchor was 2.73, with a standard deviation of 0.99, for k of 20. However, even for k of 1500, the mean negative distance was 3.64, well below the mean positive distance of 9.05. The company data show a similar trend, although companies seem to be an easier problem than reconciling people data, as shown in Figure 6. The recall rate for companies starts at 16% for a neighborhood size of 20, and is up to 25% for a neighborhood size of 1500. At a neighborhood size of 20, the mean negative distance is 3.05, with a standard deviation of 1.44, compared to a mean positive distance of 4.64. At a neighborhood size of 1500, the mean negative distance is slightly higher at 3.86.
In building the models, we employed early stopping using the usual metric of accuracy on the validation data, and we performed hyper-parameter tuning using grid search, varying margins for the adapted loss function from 1 to 20. Accuracy was defined as the percentage of validation triplets where positive distances from the anchor were less than negative distances from the anchor. For all our runs, test accuracy as measured by this metric was in the 97-99% range for the people dataset and the 92-94% range for the companies dataset for all losses except angular loss, which was poor throughout. Table 1 shows the results for the people and company datasets, run with a fixed k of 20 neighbors. Because we compared across different losses, the results are categorized by each loss function for each dataset we tested. For all the results reported here, we ran multiple runs because of the stochastic nature of neural network models; the results here are means across two runs.
We report multiple metrics to measure the effectiveness of training, some of which are not standard because of the experimental setup, so we define them below:
Recall. Recall is measured by the percentage of positives in the nearest neighbor set of each anchor.
Precision@1. Precision@1 is measured by the percentage of anchors with the very nearest neighbor being a positive. As pointed out earlier, this is an important metric for assessing join performance in a majority of cases.
Precision. Precision is measured by the fraction of all positives for each anchor that were closer to that anchor than was any negative.
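Given, for each anchor, an ordered nearest-neighbor list and its set of true positives, the three metrics above can be computed as in this sketch (names are illustrative; positives outside the retrieved neighborhood necessarily count against recall):

```python
def join_metrics(neighbor_lists, positive_sets):
    """neighbor_lists: anchor -> neighbor ids ordered nearest-first.
    positive_sets: anchor -> set of ids that are true positives."""
    recalled = total_pos = 0
    p_at_1_hits = closer_than_any_neg = 0
    for anchor, neighbors in neighbor_lists.items():
        pos = positive_sets[anchor]
        recalled += sum(1 for n in neighbors if n in pos)   # Recall numerator
        total_pos += len(pos)
        if neighbors and neighbors[0] in pos:               # Precision@1
            p_at_1_hits += 1
        for n in neighbors:                                 # Precision: positives
            if n in pos:                                    # ranked before the
                closer_than_any_neg += 1                    # first negative
            else:
                break
    return {
        "recall": recalled / total_pos,
        "precision@1": p_at_1_hits / len(neighbor_lists),
        "precision": closer_than_any_neg / total_pos,
    }
```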
Performance for Joins
For fuzzy joins, we need both precision and recall to be high. Without good recall, a join will potentially miss names that should be joined. Without high precision, a join will mistakenly join many inappropriate names. The recall numbers for the adapted loss function were 81% for people and 74% for companies, so a join could capture most similar names. For people, the adapted loss function showed overall better performance than triplet loss (see Table 1). Furthermore, it appears that triplet loss outperformed improved loss, which in turn was better than angular loss for learning this problem. On the other hand, for companies the only significant difference was that angular loss was worse.
Precision is a little trickier to define, but one way to measure it is to examine how many true matches we get in the neighborhood of each anchor before seeing a single mistake, i.e., a match that should not be there. That metric gives a picture of how many names would be correctly found, on average, by a join. Adapted loss is at 63% by this metric for people and 66% for companies, so about two thirds of recalled names would be found before finding a single error. Since every name has at least one other name for the same entity, we also measure Precision@1, which is the probability that the very nearest neighbor is a true match. For that, the very best performance we get is 84% for people and 75% for companies.
We also assessed our learning mechanism more directly by examining how the nearest neighborhood changed from before to after training. As can be seen from Table 2, recall is improved greatly, with the bigger change for people, in which case it improves from 3% to 81%. For companies, it is from 16% to 74%. Thus training is clearly effective in moving actual names for the same entity into the nearest neighbors. For precision, the fraction of true matches found before the first error improves from 16% to 73% for people and 16% to 64% for companies. Precision@1 improves from 10% to 81% for people and from 26% to 75% for companies. In both these cases, demonstrable training occurred.
Comparison with Semi-Hard Negatives
We hypothesized that training against hard negatives can potentially benefit on datasets with characteristics like those of our people dataset, when compared to training against semi-hard negatives. Such datasets have large numbers of hard negatives, as suggested by Figure 5, which shows positive distances higher on average than negative ones.
We therefore compared the hard negative triplet selection mechanism directly to training against semi-hard negatives. To compute semi-hard negatives, we took, for each entity, all its positives, and found all negatives in the nearest neighborhood of that positive that were further from the entity than is the positive. We made triplets for each such pair of positive and negative. We thus chose the hardest semi-hard triplets: the negatives are as close to the positive as possible while still being further from the entity. We ran experiments again for adapted loss on our people dataset, changing only the triplets used; these results are in Table 3. The same comparison on the company dataset showed no difference, mostly because company data seems easier and seems more immune to differences in loss functions or training regimens.
The results show training on hard negatives produces consistently better results, both for precision and recall. Hard negative training resulted in recall of 81% of positives in the nearest set versus 63% for semi-hard negatives. Precision@1 is similar, with 81% for hard negatives versus 61% for semi-hard. Overall precision, defined as the fraction of positives closer than any negative is 63% for hard negatives versus 43% for semi-hard.
Generalization Test on Faces
We have demonstrated that our strategy for joins can work well for textual names of people and companies, but the technique could potentially work for any kind of data for which a vector embedding can be made. To test how well that works, we evaluated two existing models for face recognition that were trained with the approaches defined in [Schroff, Kalenichenko, and Philbin2015] on two different datasets, VGGFace2 [Cao et al.2018] and CasiaWebFace [Yi et al.2014]. We took the two open sourced models (https://github.com/davidsandberg/facenet) and extracted the output embeddings for faces from the LFW test set [Huang et al.2012]. We put these output embeddings into an ANN structure, and computed our metrics on that. Note that for face datasets there is greater variability in the number of faces per identity, with a maximum of 529 faces for a single identity. In our experiments, we adjusted the neighborhood size for each face to the maximum of 20 or the number of expected neighbors. Recall was 91% on the VGGFace2 dataset, and 87% for the CasiaWebFace dataset. Precision@1 was 95% for VGGFace2 and 93% for CasiaWebFace. Overall precision was 91% for VGGFace2 and 87% for CasiaWebFace. These results suggest that existing models can in fact be re-purposed for joins. We point out that the face models seem better in terms of performance compared to our models for names, but there are significant differences in the data characteristics for training. The names data had far fewer positives per identity: VGGFace2 has 362.6 faces per identity, and in CasiaWebFace, 500,000 images exist for 10,000 identities.
Conclusion and Future Work
We show that deep learning models can be used effectively for joins, and we provide these models to the community for use. In the future, we will evaluate our work on other entity types and continue to explore refinements of both loss functions and triplet selection.
- [Cao et al.2018] Cao, Q.; Shen, L.; Xie, W.; Parkhi, O. M.; and Zisserman, A. 2018. Vggface2: A dataset for recognising faces across pose and age. In 13th IEEE International Conference on Automatic Face & Gesture Recognition, FG 2018, Xi’an, China, May 15-19, 2018, 67–74.
- [Chen et al.2018] Chen, S.; Gong, C.; Yang, J.; Li, X.; Wei, Y.; and Li, J. 2018. Adversarial metric learning. CoRR abs/1802.03170.
- [Cho et al.2014] Cho, K.; van Merriënboer, B.; Gülçehre, Ç.; Bahdanau, D.; Bougares, F.; Schwenk, H.; and Bengio, Y. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1724–1734. Doha, Qatar: Association for Computational Linguistics.
- [Cohen, Ravikumar, and Fienberg2003] Cohen, W. W.; Ravikumar, P.; and Fienberg, S. E. 2003. A comparison of string distance metrics for name-matching tasks. In Proceedings of IJCAI-03 Workshop on Information Integration, 73–78.
- [Hashimoto et al.2017] Hashimoto, K.; Xiong, C.; Tsuruoka, Y.; and Socher, R. 2017. A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), 446–456. Copenhagen, Denmark: Association for Computational Linguistics.
- [He, Ganjam, and Chu2015] He, Y.; Ganjam, K.; and Chu, X. 2015. Sema-join: Joining semantically-related tables using big table corpora. Proc. VLDB Endow. 8(12):1358–1369.
- [Huang et al.2012] Huang, G. B.; Mattar, M.; Lee, H.; and Learned-Miller, E. 2012. Learning to align from scratch. In NIPS.
- [Johnson, Douze, and Jégou2017] Johnson, J.; Douze, M.; and Jégou, H. 2017. Billion-scale similarity search with gpus. arXiv preprint arXiv:1702.08734.
- [Kilias et al.2018] Kilias, T.; Löser, A.; Gers, F.; Koopmanschap, R.; Zhang, Y.; and Kersten, M. 2018. IDEL: In-database entity linking with neural embeddings. https://arxiv.org/pdf/1803.04884.pdf.
- [Kumar et al.2017] Kumar, B. G. V.; Harwood, B.; Carneiro, G.; Reid, I. D.; and Drummond, T. 2017. Smart mining for deep metric learning. In IEEE International Conference on Computer Vision, ICCV.
- [Muja and Lowe2014] Muja, M., and Lowe, D. G. 2014. Scalable nearest neighbor algorithms for high dimensional data. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(11):2227–2240.
- [Neves and Bordawekar2018] Neves, J. L., and Bordawekar, R. 2018. Demonstrating ai-enabled sql queries over relational data using a cognitive database. Knowledge Discovery and Data Mining.
- [Schroff, Kalenichenko, and Philbin2015] Schroff, F.; Kalenichenko, D.; and Philbin, J. 2015. Facenet: A unified embedding for face recognition and clustering. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, 815–823.
- [Sohn2016] Sohn, K. 2016. Improved deep metric learning with multi-class n-pair loss objective. In Lee, D. D.; Sugiyama, M.; Luxburg, U. V.; Guyon, I.; and Garnett, R., eds., Advances in Neural Information Processing Systems 29. Curran Associates, Inc. 1857–1865.
- [Song et al.2016] Song, H. O.; Xiang, Y.; Jegelka, S.; and Savarese, S. 2016. Deep metric learning via lifted structured feature embedding. In CVPR, 4004–4012. IEEE Computer Society.
- [Song et al.2017] Song, H. O.; Jegelka, S.; Rathod, V.; and Murphy, K. 2017. Deep metric learning via facility location. In Computer Vision and Pattern Recognition (CVPR).
- [Wang et al.2017] Wang, J.; Zhou, F.; Wen, S.; Liu, X.; and Lin, Y. 2017. Deep metric learning with angular loss. CoRR abs/1708.01682.
- [Yang, Zhou, and Wang2018] Yang, X.; Zhou, P.; and Wang, M. 2018. Person reidentification via structural deep metric learning. IEEE Transactions on Neural Networks and Learning Systems 1–12.
- [Yi et al.2014] Yi, D.; Lei, Z.; Liao, S.; and Li, S. Z. 2014. Learning face representation from scratch. CoRR abs/1411.7923.
- [Zhang, Gong, and Wang2016] Zhang, S.; Gong, Y.; and Wang, J. 2016. Deep metric learning with improved triplet loss for face clustering in videos. In 17th Pacific-Rim Conference on Advances in Multimedia Information Processing - Volume 9916, PCM 2016, 497–508. New York, NY, USA: Springer-Verlag New York, Inc.
- [Zhu, He, and Chaudhuri2017] Zhu, E.; He, Y.; and Chaudhuri, S. 2017. Auto-join: Joining tables by leveraging transformations. International Conference on Very Large Databases (VLDB).