Convolutional Embedding for Edit Distance

01/31/2020 ∙ by Xinyan Dai, et al. ∙ The Chinese University of Hong Kong

Edit-distance-based string similarity search has many applications such as spell correction, data de-duplication, and sequence alignment. However, computing edit distance is known to have high complexity, which makes string similarity search challenging for large datasets. In this paper, we propose a deep learning pipeline (called CNN-ED) that embeds edit distance into Euclidean distance for fast approximate similarity search. A convolutional neural network (CNN) is used to generate fixed-length vector embeddings for a dataset of strings and the loss function is a combination of the triplet loss and the approximation error. To justify our choice of using CNN instead of other structures (e.g., RNN) as the model, theoretical analysis is conducted to show that some basic operations in our CNN model preserve edit distance. Experimental results show that CNN-ED outperforms data-independent CGK embedding and RNN-based GRU embedding in terms of both accuracy and efficiency by a large margin. We also show that string similarity search can be significantly accelerated using CNN-based embeddings, sometimes by orders of magnitude.




1. Introduction

Given two strings x and y, their edit distance ed(x, y) is the minimum number of edit operations (i.e., insertions, deletions and substitutions) required to transform x into y (or y into x). As a metric, edit distance is widely used to evaluate the similarity between strings. Edit-distance-based string similarity search has many important applications including spell correction, data de-duplication, entity linking and sequence alignment (Deng et al., 2014a; Yu et al., 2016; Jiang et al., 2014).
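The definition above corresponds to the classic dynamic program for edit distance; a minimal Python sketch (the function name is ours):

```python
def edit_distance(s: str, t: str) -> int:
    """Levenshtein distance via the classic O(|s|*|t|) dynamic program."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))  # distances between s[:0] and all prefixes of t
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(
                prev[j] + 1,                           # deletion
                cur[j - 1] + 1,                        # insertion
                prev[j - 1] + (s[i - 1] != t[j - 1]),  # substitution (or match)
            )
        prev = cur
    return prev[n]
```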

The high computational complexity of edit distance is the main obstacle for string similarity search, especially for large datasets with long strings. For two strings of length n, computing their edit distance takes O(n²/log n) time with the best algorithm known so far (Masek and Paterson, 1980). There is evidence that this complexity cannot be substantially improved (Backurs and Indyk, 2015). Pruning-based solutions have been used to avoid unnecessary edit distance computation (Li et al., 2011; Bayardo et al., 2007; Xiao et al., 2008b; Qin et al., 2011b; Wang et al., 2012). However, it is reported that pruning-based solutions are inefficient when a string and its most similar neighbor have a large edit distance (Zhang and Zhang, 2017), which is common for datasets with long strings.

Metric embedding has been shown to be successful in bypassing distances with high computational complexity (e.g., Wasserstein distance (Courty et al., 2018)). For edit distance, a metric embedding model can be defined by an embedding function f and a distance measure d such that the distance in the embedding space approximates the true edit distance, i.e., d(f(x), f(y)) ≈ ed(x, y). A small approximation error |d(f(x), f(y)) − ed(x, y)| is crucial for metric embedding. For similarity search applications, we also want the embedding to preserve the order of edit distance. That is, for a triplet of strings x, y and z with ed(x, y) < ed(x, z), it should ensure that d(f(x), f(y)) < d(f(x), f(z)). In this paper, we evaluate the accuracy of the embedding methods using both approximation error and order-preserving ability.

Several methods have been proposed for edit distance embedding. Ostrovsky and Rabani embed edit distance into ℓ1 with a distortion¹ of 2^{O(√(log N log log N))} (Ostrovsky and Rabani, 2007), but the algorithm is too complex for practical implementation. The CGK algorithm embeds edit distance into Hamming distance and the distortion is O(K) (Chakraborty et al., 2016), in which K is the true edit distance. CGK is simple to implement and has been shown to be effective when incorporated into a string similarity search pipeline. Both Ostrovsky and Rabani's method and CGK are data-independent, while learning-based methods can provide better embeddings by considering the structure of the underlying dataset. GRU (Zhang et al., 2020) trains a recurrent neural network (RNN) to embed edit distance into Euclidean distance. Although GRU outperforms CGK, its RNN structure makes training and inference inefficient. Moreover, its output vector has a high dimension, which results in costly distance computation and high memory consumption. As our main baseline methods, we discuss CGK and GRU in more detail in Section 2.

¹An embedding method is said to have a distortion of α if there exists a positive scaling factor β such that β·d(x, y) ≤ d'(f(x), f(y)) ≤ α·β·d(x, y) (Courty et al., 2018).

To tackle the problems of GRU, we propose CNN-ED, which embeds edit distance into Euclidean distance using a convolutional neural network (CNN). The CNN structure allows more efficient training and inference than RNN, and we constrain the output vector to have a relatively small dimension (e.g., 128). The loss function is a weighted combination of the triplet loss and the approximation error, which enforces accurate edit distance approximation and preserves the order of edit distance at the same time. We also conduct theoretical analysis to justify our choice of CNN as the model structure, which shows that the operations in our CNN model preserve edit distance to some extent. In contrast, similar analytical results are not known for RNN. Consistent with this analysis, we observe that for some datasets a randomly initialized CNN (without any training) already provides better embeddings than CGK and fully trained GRU.

We conducted extensive experiments on 5 datasets with various cardinalities and string lengths. The results show that CNN-ED outperforms both CGK and GRU in approximation accuracy, computation efficiency, and memory consumption. The approximation error of CNN-ED can be only 50% of GRU's even though CNN-ED uses an output vector that is two orders of magnitude shorter than GRU's. For training and inference, the speedup of CNN-ED over GRU is up to 30x and 200x, respectively. Using the embeddings for string similarity join, CNN-ED outperforms EmbedJoin (Zhang and Zhang, 2017), a state-of-the-art method. For threshold-based string similarity search, CNN-ED reaches a recall of 0.9 up to 200x faster than HSsearch (Wang et al., 2015). Moreover, CNN-ED is shown to be robust to hyper-parameters such as output dimension and the number of layers.

To summarize, we make three contributions in this paper. First, we propose a CNN-based pipeline for edit distance embedding, which outperforms existing methods by a large margin. Second, theoretical evidence is provided for using CNN as the model for edit distance embedding. Third, extensive experiments are conducted to validate the performance of the proposed method.

The rest of the paper is organized as follows. Section 2 introduces the background of string similarity search and two edit distance embedding algorithms, i.e., CGK and GRU. Section 3 presents our CNN-based pipeline and conducts theoretical analysis to justify using CNN as the model. Section 4 provides experimental results on the accuracy, efficiency, robustness and similarity search performance of the CNN embedding. Concluding remarks are given in Section 5.

2. Background and Related Work

In this part, we introduce two string similarity search problems, and then discuss two existing edit distance embedding methods, i.e., CGK (Chakraborty et al., 2016) and GRU (Zhang et al., 2020).

  Input: A string x ∈ Σ^t for some t ≤ N, and a random binary matrix R ∈ {0, 1}^{3N × |Σ|}
  Output: An embedding sequence x' ∈ (Σ ∪ {⊥})^{3N}
  Interpret R as functions π_1, π_2, …, π_{3N} with π_j : Σ → {0, 1}, where π_j(c) denotes the entry of R in row j and the column for character c
  Initialize i ← 1, x' ← the empty sequence
  for j = 1, 2, …, 3N do
     if i ≤ t then
        x' ← x' ⊕ x[i]   (⊕ means concatenation)
        i ← i + π_j(x[i])
     else
        x' ← x' ⊕ ⊥   (pad with a special character)
     end if
  end for
Algorithm 1 CGK Embedding
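The random walk in Algorithm 1 can be sketched in Python as follows; this is our own reformulation, with a dictionary of random bits standing in for the random matrix R:

```python
import random

def make_R(N, alphabet, seed=0):
    """Random 0/1 bits playing the role of the matrix R in Algorithm 1."""
    rng = random.Random(seed)
    return {(j, c): rng.randint(0, 1) for j in range(3 * N) for c in alphabet}

def cgk_embed(s, N, R, pad="#"):
    """Map string s (len(s) <= N) to an embedding string of length 3N."""
    out = []
    i = 0  # pointer into s
    for j in range(3 * N):
        if i < len(s):
            out.append(s[i])     # copy the current character
            i += R[(j, s[i])]    # randomly advance the pointer by 0 or 1
        else:
            out.append(pad)      # pad with a special character
    return "".join(out)
```

Because the pointer advances randomly, similar strings tend to agree position-wise in their embeddings, so the Hamming distance between embeddings approximates the edit distance.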

2.1. String Similarity Search

There are two well-known string similarity search problems, similarity join (Li et al., 2011; Xiao et al., 2008b; Bayardo et al., 2007) and threshold search (Wang et al., 2015). For a dataset S containing n strings, similarity join finds all pairs of strings (x, y) with x, y ∈ S and ed(x, y) ≤ K, in which K is a threshold for the edit distance between similar pairs. A number of methods (Bayardo et al., 2007; Li et al., 2011; Wang et al., 2010; Xiao et al., 2008a, b; Deng et al., 2014b; Chaudhuri et al., 2006) have been developed for similarity join but they are shown to be inefficient when the strings are long and K is large. EmbedJoin (Zhang and Zhang, 2017) utilizes the CGK embedding (Chakraborty et al., 2016) and is currently the state-of-the-art method for similarity join on long strings. For a given query string q, threshold search (Li et al., 2007; Wang et al., 2012; Qin et al., 2011a; Zhang et al., 2010; Deng et al., 2014a; Sun et al., 2019) finds all strings x ∈ S that satisfy ed(q, x) ≤ K. HSsearch (Wang et al., 2015) is a state-of-the-art method for threshold search, and outperforms Adapt (Wang et al., 2012), QChunk (Qin et al., 2011b) and Bed-tree (Zhang et al., 2010). Similarity join is usually evaluated by the time it takes to find all similar pairs (called end-to-end time), while threshold search is evaluated by the average query processing time.

2.2. CGK Embedding

Algorithm 1 describes the CGK algorithm (Chakraborty et al., 2016), which embeds edit distance into Hamming distance. It assumes that the longest string in the dataset has a length of N and that the characters in the strings come from a known alphabet Σ. R is a random binary matrix in which each entry is 0 or 1 with equal probability, and ⊥ is a special character used for padding. Denote the CGK embeddings of two strings x and y as x' and y', respectively. The following relation holds with high probability:

(1)   Ω(ed(x, y)) ≤ d_H(x', y') ≤ O(ed(x, y)²),

in which d_H(x', y') is the Hamming distance between x' and y'.

Figure 1. The model architecture of GRU
Figure 2. The model architecture of CNN-ED

2.3. GRU Embedding

RNN is used to embed edit distance into Euclidean distance in GRU (Zhang et al., 2020). The network structure of GRU is shown in Figure 1, which consists of two layers of gated recurrent units (GRU) and a linear layer. A string is first padded to a length of N (the length of the longest string in the dataset) and then its elements are fed into the network one per step. The outputs of all the steps are concatenated as the final embedding. As GRU uses the concatenation of the outputs, the embedding has a high dimension and takes up a large amount of memory. The network is trained with a three-phase procedure and a different loss function is used in each phase.

3. CNN-ED

We now present our CNN-based model for edit distance embedding. We first introduce the details of the learning pipeline, including input preparation, network structure, loss function and training method. Then we report an interesting phenomenon: a random CNN without training already matches or even outperforms GRU, which serves as strong empirical evidence that CNN is suitable for edit distance embedding. Finally, we justify this phenomenon with theoretical analysis, which shows that the operations in CNN preserve bounds on edit distance.

3.1. The Learning Pipeline

We assume that there is a training set with n strings. The strings (including the training set, the base dataset and possible queries) that we are going to apply our model on have a maximum length of N, and their characters come from a known alphabet Σ with size |Σ|. s[i] denotes the i-th character in string s. For two vectors u and v, we use ‖u − v‖ to denote their Euclidean distance.

One-hot embedding as input. For each training string s, we generate a one-hot embedding matrix M of size |Σ| × N as the input for the model as follows: M[i][j] = 1 if s[j] = Σ[i], and M[i][j] = 0 otherwise, where Σ[i] denotes the i-th character in the alphabet.

For example, for Σ = {'A', 'G', 'C', 'T'}, s = "CATT" and N = 5, we have

M = [ 0 1 0 0 0 ;  0 0 0 0 0 ;  1 0 0 0 0 ;  0 0 1 1 0 ].

Intuitively, each row of M (e.g., the i-th row) encodes a character (e.g., Σ[i]) in Σ, and if that character appears in a certain position of s (e.g., s[j] = Σ[i]), we mark the corresponding position in that row as 1 (e.g., M[i][j] = 1). In the example, the fourth row of M encodes the fourth character (i.e., 'T'); M[4][3] = 1 and M[4][4] = 1 because 'T' appears in the 3rd and 4th positions of s. If string s has a length t < N, the last N − t columns of M are filled with 0. In this way, we generate fixed-size input for the CNN.
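A minimal sketch of this one-hot construction (function names are ours; the alphabet and example follow the text):

```python
import numpy as np

ALPHABET = "AGCT"  # example alphabet from the text

def one_hot(s: str, N: int, alphabet: str = ALPHABET) -> np.ndarray:
    """|alphabet| x N binary matrix: M[i, j] = 1 iff s[j] == alphabet[i].
    Columns beyond len(s) stay zero, giving a fixed-size CNN input."""
    M = np.zeros((len(alphabet), N), dtype=np.float32)
    for j, c in enumerate(s):
        M[alphabet.index(c), j] = 1.0
    return M

# rows in order A, G, C, T; 'T' appears at positions 3 and 4 (1-indexed)
M = one_hot("CATT", 5)
```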

Network structure. The network structure of CNN-ED is shown in Figure 2, which starts with several one-dimensional convolution and pooling layers. The convolution is conducted on the rows of M and always uses a kernel size of 3. By default, there are 8 kernels for each convolutional layer and 10 convolutional layers. The last layer is a linear layer that maps the intermediate representations to a pre-specified output dimension (128 by default). The one-dimensional convolution layers allow the same character in different positions to interact, which corresponds to insertion and deletion in edit distance computation. As we will show in Section 3.2, max-pooling preserves a bound on edit distance. The linear layer allows the representations for different characters to interact with each other. Our network is typically small; the number of parameters is less than 45K for the DBLP dataset.
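The forward pass can be sketched in plain numpy; the shapes, layer count and random initialization below are illustrative toy choices, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, W, b):
    """x: (C_in, L); W: (C_out, C_in, 3); valid 1-D convolution + ReLU."""
    C_out, C_in, k = W.shape
    L_out = x.shape[1] - k + 1
    y = np.empty((C_out, L_out))
    for t in range(L_out):
        y[:, t] = np.tensordot(W, x[:, t:t + k], axes=([1, 2], [0, 1])) + b
    return np.maximum(y, 0.0)

def max_pool(x, k=2):
    L = x.shape[1] // k * k  # drop any trailing remainder
    return x[:, :L].reshape(x.shape[0], -1, k).max(axis=2)

def embed(M, convs, W_out, b_out):
    h = M
    for W, b in convs:
        h = max_pool(conv1d(h, W, b))
    return h.reshape(-1) @ W_out + b_out  # linear layer -> d-dim vector

# toy instantiation: |Sigma| = 4, N = 64, 2 conv layers with 8 kernels, d = 16
N, d = 64, 16
convs = [(0.1 * rng.standard_normal((8, 4, 3)), np.zeros(8)),
         (0.1 * rng.standard_normal((8, 8, 3)), np.zeros(8))]
M = (rng.random((4, N)) < 0.25).astype(float)
# length shrinks 64 -> 62 -> 31 -> 29 -> 14, so the flattened size is 8 * 14
W_out = 0.1 * rng.standard_normal((8 * 14, d))
emb = embed(M, convs, W_out, np.zeros(d))
```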

Loss function. We use the following combination of the triplet loss (Hermans et al., 2017) and the approximation error as the loss function:

ℓ = ℓ_t + λ · ℓ_a,

in which ℓ_t is the triplet loss and ℓ_a is the approximation error. (s_a, s_p, s_n) is a randomly sampled string triplet, in which s_a is the anchor string and s_p is the positive neighbor that has smaller edit distance to s_a than the negative neighbor s_n. The weight λ is usually set as 0.1. The triplet loss is defined as

ℓ_t = max(0, m + ‖f(s_a) − f(s_p)‖ − ‖f(s_a) − f(s_n)‖),

in which m is a margin that is specific to each triplet and f(s) is the embedding for string s. Intuitively, the triplet loss forces the distance gap in the embedding space (‖f(s_a) − f(s_n)‖ − ‖f(s_a) − f(s_p)‖) to be larger than the edit distance gap (ed(s_a, s_n) − ed(s_a, s_p)), which helps to preserve the relative order of edit distance. The approximation error is defined as

ℓ_a = |‖f(x) − f(y)‖ − ed(x, y)|,

which measures the difference between the Euclidean distance and the edit distance for a string pair. Intuitively, the approximation error encourages the Euclidean distance to match the edit distance.
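A numpy sketch of this combined loss for a single triplet, following the description above (setting the per-triplet margin to the edit distance gap is our reading of the text):

```python
import numpy as np

def euclid(u, v):
    return float(np.linalg.norm(u - v))

def triplet_loss(f_a, f_p, f_n, ed_ap, ed_an):
    """Hinge loss whose per-triplet margin is the edit distance gap."""
    margin = ed_an - ed_ap
    return max(0.0, margin + euclid(f_a, f_p) - euclid(f_a, f_n))

def approx_error(f_x, f_y, ed_xy):
    """Absolute gap between Euclidean distance and edit distance."""
    return abs(euclid(f_x, f_y) - ed_xy)

def combined_loss(f_a, f_p, f_n, ed_ap, ed_an, lam=0.1):
    l_t = triplet_loss(f_a, f_p, f_n, ed_ap, ed_an)
    l_a = approx_error(f_a, f_p, ed_ap) + approx_error(f_a, f_n, ed_an)
    return l_t + lam * l_a
```

The loss is zero exactly when the embedding distances match the edit distances and the triplet order is respected with the required gap.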

Training and sampling. The network is trained using mini-batch SGD and we sample 64 triplets for each mini-batch. To obtain a triplet, a random string is sampled from the training set as the anchor s_a. Then two of its top-k neighbors are sampled, and the one having smaller edit distance to s_a is used as s_p while the other is used as s_n. For a training set with cardinality n, we call it an epoch when n triplets have been used in training.
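The sampling procedure can be sketched as follows, assuming the top neighbors of each training string are precomputed and sorted by increasing edit distance (names and signature are ours):

```python
import random

def sample_triplet(strings, topk_neighbors, k, rng=random):
    """Sample (anchor, positive, negative) indices for one training triplet.

    topk_neighbors[a] lists the indices of the nearest strings to strings[a]
    by edit distance, closest first (precomputed offline).
    """
    a = rng.randrange(len(strings))
    i, j = rng.sample(range(min(k, len(topk_neighbors[a]))), 2)
    # the neighbor ranked earlier (closer) is the positive example
    p, n = topk_neighbors[a][min(i, j)], topk_neighbors[a][max(i, j)]
    return a, p, n
```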

Using CNN embedding in similarity search. The most straightforward application of the embedding is to use it to filter out unnecessary edit distance computations. We demonstrate this application in Algorithm 2 for approximate threshold search. The idea is to use low-cost distance computation in the embedding space to avoid expensive edit distance computation. More sophisticated designs to better utilize the embedding are possible but are beyond the scope of this paper. For example, the embeddings can also be used to generate candidates for similarity search following the methodology of EmbedJoin, which builds multiple hash tables using CGK embedding and locality sensitive hashing (LSH) (Datar et al., 2004; Andoni et al., 2015; Wang et al., 2014). To avoid computing all-pair distances in the embedding space, approximate Euclidean distance similarity methods such as vector quantization (Jégou et al., 2011; Ge et al., 2013) and proximity graphs (Malkov and Yashunin, 2016; Fu et al., 2019) can be used. Finally, it is possible to utilize multiple sets of embeddings trained with different initializations to provide diversity and improve the performance.

  Input: A query string q, a string dataset S, the embeddings {f(y) | y ∈ S} of the strings, a model f, a threshold K and a blow-up factor c
  Output: Strings y ∈ S with ed(q, y) ≤ K
  Initialize the candidate set C and the result set R as ∅
  Compute the embedding f(q) of the query string
  for each embedding f(y) in {f(y) | y ∈ S} do
     if ‖f(q) − f(y)‖ ≤ cK then
        C ← C ∪ {y}
     end if
  end for
  for each string y in C do
     if ed(q, y) ≤ K then
        R ← R ∪ {y}
     end if
  end for
Algorithm 2 Using Embedding for Approximate Threshold Search
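Algorithm 2 is a filter-then-verify loop; a minimal sketch, where `model` and `edit_distance` are assumed given and the blow-up factor scales the threshold in the embedding space to compensate for approximation error:

```python
import numpy as np

def threshold_search(q, dataset, embeddings, model, K, c, edit_distance):
    """Return strings y with ed(q, y) <= K, filtered via the embedding space.

    embeddings[i] is the precomputed vector for dataset[i]; c >= 1 is the
    blow-up factor applied to the threshold in the embedding space.
    """
    fq = model(q)
    # filter: cheap Euclidean comparisons select a candidate set
    cand = [i for i, e in enumerate(embeddings)
            if np.linalg.norm(fq - e) <= c * K]
    # verify: expensive edit distance only on the candidates
    return [dataset[i] for i in cand if edit_distance(q, dataset[i]) <= K]
```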
Figure 3. Recall-item curve comparison for random CNN (denoted as RND), CGK and GRU on the Enron dataset
Figure 4. Recall-item curve comparison for random CNN (denoted as RND), CGK and GRU on more datasets

3.2. Why Is CNN the Right Model?

Performance of random CNN. In Figure 3 and Figure 4, we compare the performance of CGK and GRU with a randomly initialized CNN, which has not been trained. The CNN contains 8 convolutional layers and uses max-pooling. The recall-item curve is defined in Section 4 and higher recall means better performance. The statistics of the datasets can be found in Table 1. The results show that a random CNN already outperforms CGK on all datasets and for different values of k. The random CNN also outperforms the fully trained GRU on Trec and Gen50ks, and is comparable to GRU on UniRef. Although the random CNN does not perform as well as GRU on DBLP, the performance gap is not large. On the Enron dataset, the random CNN slightly outperforms GRU for different values of k.

This phenomenon suggests that the CNN structure may have some properties that suit edit distance embedding. This is counterintuitive, as strings are sequences and RNN is believed to be good at handling sequences. To better understand this phenomenon, we analyze how the operations in our CNN model affect edit distance approximation. Basically, the results show that one-hot embedding and max-pooling preserve bounds on edit distance.

Theorem 1 (One-Hot Deviation Bound).

Given two strings x and y and their corresponding one-hot embeddings M^x and M^y, define the binary edit distance for the i-th character as ed(x^(i), y^(i)), where x^(i) and y^(i) denote the i-th rows of M^x and M^y viewed as binary strings. Then the sum of the binary edit distances over the alphabet, Σ_{i=1}^{|Σ|} ed(x^(i), y^(i)), bounds the true edit distance ed(x, y) from both below and above, up to factors that depend only on |Σ|.

For the upper bound, note that by modifying the operations in the shortest edit sequence²,³ of changing x into y into binary operations, we can use this sequence to transform x^(i) into y^(i), for any i. Since a substitution in the original sequence may be modified into a substitution of a bit by itself, which is not needed, it satisfies that ed(x^(i), y^(i)) ≤ ed(x, y). Summing this bound over i = 1, …, |Σ|, we obtain the upper bound.

²The edit sequence between two strings is a sequence of operations that transfers one string into the other one. ³The shortest edit sequence is one of the edit sequences with minimum length, i.e., its length equals the edit distance.

For the lower bound, let x̃^(i) be the string obtained by replacing every character in x that is not Σ[i] with a special character ⊥, where Σ[i] is the i-th character in the alphabet; the distance ed(x̃^(i), ỹ^(i)) is then determined by the rows x^(i) and y^(i). Using the triangle inequality of edit distance, for any i, we have

ed(x, y) ≤ ed(x, x̃^(i)) + ed(x̃^(i), ỹ^(i)) + ed(ỹ^(i), y).

Summing this inequality over i = 1, …, |Σ| and bounding the total replacement cost, we obtain a lower bound on the sum Σ_i ed(x^(i), y^(i)) in terms of ed(x, y). Re-arranging this inequality completes the proof.

Note that the bound in Theorem 1 can be tightened by restricting the sum to the characters that actually appear in x and y. Theorem 1 essentially shows that bounds on the true edit distance can be constructed from the sum of the edit distances of binary sequences. These binary sequences are exactly the rows of the one-hot embedding matrices M^x and M^y. This justifies our choice of using one-hot embedding as the input for the network.
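As an illustrative sanity check (our own brute-force script, not part of the paper), one can compare the true edit distance with the sum of row-wise binary edit distances on small strings:

```python
def edit_distance(s, t):
    """Classic dynamic program for Levenshtein distance."""
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (a != b)))
        prev = cur
    return prev[-1]

def binary_row(s, c):
    """Row of the one-hot matrix for character c, as a 0/1 string."""
    return "".join("1" if ch == c else "0" for ch in s)

def row_sum(x, y, alphabet):
    """Sum of the row-wise binary edit distances ed(x^(c), y^(c))."""
    return sum(edit_distance(binary_row(x, c), binary_row(y, c))
               for c in alphabet)

checks = []
for x, y in [("CAT", "CT"), ("AB", "BA"), ("GATTACA", "GACTATA")]:
    alphabet = sorted(set(x + y))
    checks.append((edit_distance(x, y), row_sum(x, y, alphabet), len(alphabet)))
```

On these examples the row sum lies between ed(x, y) and |Σ|·ed(x, y), matching the qualitative shape of Theorem 1.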

Theorem 2 (Max-Pooling Deviation Bound).

Given two binary vectors u and v and a max-pooling operation P with stride k and window size k, and assuming that the lengths of u and v are divisible by k, the edit distance ed(P(u), P(v)) between the pooled vectors bounds ed(u, v), up to factors that depend only on k and the number of 1s in P(u) and P(v).

Using the triangle inequality of edit distance, we have

ed(u, v) ≤ ed(u, P(u)↑k) + ed(P(u)↑k, P(v)↑k) + ed(P(v)↑k, v),

where w↑k denotes the string obtained by replicating each bit of w k times. For the substitutions, insertions and deletions in the edit sequence of P(u) → P(v), we can repeat these operations for the corresponding replicated bits in P(u)↑k, which transforms P(u)↑k into P(v)↑k. Thus, we conclude that ed(P(u)↑k, P(v)↑k) ≤ k · ed(P(u), P(v)).

For ed(u, P(u)↑k), if a bit of P(u) is 0, its corresponding window in u and P(u)↑k must be all 0; if the bit is 1, the number of differing bits in the corresponding windows of u and P(u)↑k is upper-bounded by k − 1, which implies that ed(u, P(u)↑k) ≤ (k − 1) · n₁(P(u)), where n₁(·) denotes the number of 1s. The same argument applies to ed(P(v)↑k, v). Combining these bounds completes the proof. ∎

Theorem 2 shows that max-pooling preserves a bound on the edit distance of binary vectors. Combined with Theorem 1, it also shows that max-pooling preserves a bound on the true edit distance ed(x, y). Our randomly initialized network can be viewed as a stack of multiple max-pooling layers, which explains its good performance in Figure 3 and Figure 4. However, similar analysis is difficult for RNN, as an input character passes through the network over many time steps and its influence on edit distance is hard to capture.

Figure 5.

True edit distance (horizontal axis) vs. estimated edit distance (vertical axis) for CNN-ED

4. Experimental Results

We conduct extensive experiments to evaluate the performance of CNN-ED. Two existing edit distance embedding methods, CGK and GRU, are used as the main baselines. We first introduce the experiment settings, and evaluate the quality of the embeddings generated by CNN-ED. Then, we assess the efficiency of the embedding methods in terms of both computation and storage costs. To demonstrate the benefits of vector embedding, we also test the performance of CNN-ED when used for similarity join and threshold search. Finally, we test the influence of the hyper-parameters (e.g., output dimension, network structure, loss function) on performance. For conciseness, we use CNN to denote CNN-ED in this section. All code for the experiments can be found via an anonymous link (the code will also be open-sourced).

DataSet UniRef DBLP Trec Gen50ks Enron
# Items 400,000 1,385,451 347,949 50,001 245,567
Avg. Length 446 106 845 5,000 885
Max. Length 35,214 1,627 3,948 5,153 59,420
Alphabet Size 24 37 37 4 37
Table 1. Dataset statistics

4.1. Experiment Settings

We conduct the experiments with the five datasets in Table 1, which have diverse cardinalities and string lengths. As GRU cannot handle very long strings, we truncate the strings longer than 5,000 in UniRef and Enron to a length of 5,000, following the GRU paper (Zhang et al., 2020). Moreover, as the memory consumption of GRU is too high for datasets with large cardinality, we sample 50,000 items from each dataset for comparisons that involve GRU. In experiments that do not involve GRU, the entire dataset is used. By default, CNN-ED uses 10 one-dimensional convolutional layers with a kernel size of 3 and one linear layer, and the dimension of the output embedding is 128.

Figure 6. Recall-item curve comparison among CGK, GRU and CNN for top-k search on the Enron dataset
Figure 7. Recall-item curve comparison among CGK, GRU and CNN for top-k search on more datasets

All experiments are conducted on a machine equipped with a GeForce RTX 2080 Ti GPU, a 2.10GHz Intel(R) Xeon(R) E5-2620 CPU (16 physical cores), and 48GB RAM. The neural network training and inference experiments are conducted on the GPU, while the rest of the experiments are conducted on the CPU. By default, the CPU experiments use a single thread. For GRU and CNN-ED, we partition each dataset into three disjoint sets, i.e., a training set, a query set and a base set. Both the training set and the query set contain 1,000 items and the other items go to the base set. We use only the training set to tune the models, and the performance of the models is evaluated on the other two sets. GRU is trained for 500 epochs as suggested in its code, while CNN-ED is trained for 50 epochs.

4.2. Embedding Quality

We assess the quality of the embedding generated by CNN from two aspects, i.e., approximation error and the ability to preserve edit distance order.

DataSet UniRef DBLP Trec Gen50ks Enron
CGK 0.590 63.602 6.856 0.452 0.873
GRU 0.275 0.175 46.840 0.419 0.126
CNN 0.125 0.087 0.141 0.401 0.123
Table 2. Average edit distance estimation error

To provide an intuitive illustration of the approximation error of the CNN embeddings, we plot the true edit distance and the estimated edit distance of 1,000 randomly sampled query-item pairs in Figure 5. The estimated edit distance of a string pair (x, y) is computed using a linear function of the Euclidean distance ‖f(x) − f(y)‖. The linear function is introduced to account for possible translation and scaling between the two distances, and it is fitted on the training set without information from the base and query sets. The results show that the distance pairs lie closely around the black line, which suggests that CNN embeddings provide good edit distance approximation.
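The linear correction can be fitted by ordinary least squares on (Euclidean distance, edit distance) pairs from the training set; a minimal sketch (function names are ours):

```python
import numpy as np

def fit_linear_map(euclid_dists, edit_dists):
    """Fit ed ~ a * d + b by least squares on training pairs."""
    a, b = np.polyfit(np.asarray(euclid_dists, dtype=float),
                      np.asarray(edit_dists, dtype=float), 1)
    return a, b

def estimate_ed(d, a, b):
    """Estimated edit distance from an embedding-space distance d."""
    return a * d + b
```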

To quantitatively compare the approximation error of the embedding methods, we report the average edit distance estimation error in Table 2. The estimation error for a string pair is defined as |ed − êd| / ed, in which ed is the true edit distance and êd is the edit distance estimated from the embeddings. The distance in the embedding space is Hamming distance for CGK and Euclidean distance for GRU and CNN. The estimate êd is computed by a function g of the distance in the embedding space, and g is fitted on the training set. We set g as a linear function for GRU and CNN, and a quadratic function for CGK, as the theoretical guarantee of CGK in Equation (1) has a quadratic form. The reported estimation error is the average over all possible query-item pairs. The results show that CNN has the smallest estimation error on all five datasets, while overall CGK has the largest estimation error. This is because CGK is data-independent, while GRU and CNN use machine learning to fit the data. The performance of GRU is poor on the Trec dataset, and a similar phenomenon is also reported in its original paper (Zhang et al., 2020).

To evaluate the ability of the embeddings to preserve edit distance order, we plot the recall-item curves in Figure 6 and Figure 7. The recall-item curve is widely used to evaluate the performance of metric embedding. To plot the curve, we first find the top-k most similar strings for each query in the base set using linear scan. Then, for each query, items in the base set are ranked according to their distance to the query in the embedding space. If the top-m ranked items contain a fraction r of the true top-k neighbors, the recall at m is r. For each value of m, we report the average recall of the 1,000 queries. Intuitively, a good embedding should ensure that a neighbor with a high rank in edit distance (i.e., having smaller edit distance than most items) also has a high rank in embedding distance. In this case, the recall is high for a relatively small m. The results show that CNN consistently outperforms CGK and GRU on all five datasets and for different values of k. The recall-item performance also agrees with the estimation error in Table 2. CNN has the biggest advantage in estimation error on Trec and its recall-item performance is also significantly better than CGK and GRU on this dataset. On Gen50ks, GRU and CNN have similar estimation errors, and the recall-item performance of CNN is only slightly better than GRU.
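One point on the recall-item curve then reduces to a set intersection; a minimal sketch (names are ours):

```python
def recall_at_m(true_topk, rank, m):
    """Fraction of the true top-k neighbors found among the first m
    items when the base set is ranked by embedding distance."""
    return len(set(true_topk) & set(rank[:m])) / len(true_topk)
```

Averaging this quantity over all queries for each m yields the curves in Figure 6 and Figure 7.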

4.3. Embedding Efficiency

We compare the efficiency of the embedding algorithms from various aspects in Table 3. Train time is the time to train the model on the training set, and embed time is the average time to compute the embedding for a string (also called inference). Compute time is the average time to compute the distance between a pair of strings in the embedding space. Embed size is the memory consumption for storing the embeddings of a dataset, and Raw is the size of the original dataset. Note that the embed time of GRU and CNN is measured on GPU, while the embed time of CGK is measured on CPU as the CGK algorithm is difficult to parallelize on GPU. For GRU, the embedding size of the entire dataset is estimated using a sample of 50,000 strings.

The results show that CNN is more efficient than GRU in all aspects. CNN trains and computes string embedding at least 2.6x and 13.7x faster than GRU, respectively. Moreover, CNN takes more than 290x less memory to store the embedding, and computes distance in the embedding space over 400x faster. When compared with CGK, CNN also has very attractive efficiency. CNN computes approximate edit distance at least 14x faster than CGK and uses at least an order of magnitude less memory. We found that CNN is more efficient than GRU and CGK mainly because it has much smaller output dimension. For example, on the Gen50ks dataset, the output dimensions of CGK and GRU are 322x and 121x of CNN, respectively. Note that even with much smaller output dimension, CNN embedding still provides more accurate approximation for edit distance than CGK and GRU, as we have shown in Section 4.2. CGK embeds strings faster than both GRU and CNN as the two learning-based models need to conduct neural network inference while CGK follows a simple random procedure.

                  Method   UniRef   DBLP     Trec     Gen50ks   Enron
Train time (s)    GRU      31.8     13.2     26.3     31.3      34.9
                  CNN      4.31     4.96     5.19     1.61      5.63
Embed time        CGK      52.2     16.2     56.6     105.6     63.6
                  GRU      8332     2340     7654     12067     7650
                  CNN      378.8    134.8    361.8    172.2     548.4
Compute time      CGK      1.72     0.60     1.36     1.65      1.71
                  GRU      123.7    47.2     129.1    18.0      177.7
                  CNN      4.6      4.2      4.2      4.2       4.5
Embed size        Raw      170MB    140MB    280MB    238MB     207MB
                  CGK      5.59GB   6.45GB   4.86GB   0.70GB    3.43GB
                  GRU      372GB    621GB    378GB    7GB       338GB
Table 3. Embedding efficiency comparison (row groups follow the order in which the metrics are introduced in the text: train time, embed time, compute time, embed size; "Raw" is the original dataset size)
                  CNN      195MB    676MB    169MB    24MB      119MB

4.4. Similarity Search Performance

In this part, we test the performance of CNN when used for the two string similarity search problems discussed in Section 2, threshold search and similarity join.

For threshold search, model training and dataset embedding are conducted before query processing. When a query comes, we first compute its embedding, then use the distances in the embedding space to rank the items, and finally conduct exact edit distance computation in the ranked order. Following (Zhang and Zhang, 2017), the thresholds for UniRef, DBLP, Trec, Gen50ks and Enron are set as 100, 40, 40, 100 and 40, respectively. We compare with HSsearch, which supports threshold search with a hierarchical segment index. For CNN, we measure the average query processing time to reach a certain recall, where recall is defined as the number of returned similar string pairs over the total number of ground-truth similar pairs.

DataSet UniRef DBLP Trec Gen50ks Enron
HSsearch 4333 6907 222 393 76
CNN(R=0.6) 26 263 37 1.73 12
CNN(R=0.8) 66 478 44 1.74 13
CNN(R=0.9) 143 1574 58 1.74 15
CNN(R=0.95) 254 2296 80 1.75 15
CNN(R=0.99) 1068 3560 93 1.77 21
CNN(R=1) 3007 4321 116 1.79 22
Table 4. Average query time for threshold search (in ms)

The results in Table 4 show that when approximate search is acceptable, CNN can achieve very significant speedup over HSsearch. At a recall of 0.6, the speedup over HSsearch is at least 6x and can be as much as 227x. In principle, CNN is not designed for exact threshold search as there are errors in its edit distance approximation. However, it also outperforms HSsearch when the recall is 1, which means all ground truth similar pairs are returned, and the speedup is at least 1.44x for the five datasets.

Recall 0.6 0.8 0.9 0.95 0.99 1
CNN 5.3 6.7 8.6 14.3 60.2 91.0
GRU 2980.5 3012.1 3059.6 3059.6 3590.2 3590.2
Table 5. Average query time for GRU and CNN (in ms)

To demonstrate the advantage of the accurate and efficient embedding provided by CNN, we compare CNN and GRU for threshold search in Table 5. The dataset is a sample of 50,000 items from the DBLP dataset (different from the entire DBLP dataset used in Table 4). We conduct sampling because the GRU embedding of the whole dataset does not fit into memory. The results show that CNN can be up to 500x faster than GRU in attaining the same recall. Detailed profiling shows that distance computation is inefficient with the long GRU embeddings (as shown in Table 3) and that the CNN embedding better preserves the order of edit distance (as shown in Figure 7).

We compare CNN with EmbedJoin and PassJoin for similarity join in Figure 8. PassJoin partitions a string into a set of segments and creates inverted index for the segments, then generates similar string pairs using the inverted index. EmbedJoin uses the CGK embedding and locality sensitive hashing (LSH) to filter unnecessary edit distance computations, which is the state-of-the-art method for string similarity join. Note that PassJoin is an exact method while EmbedJoin is an approximate method. Different from threshold search, the common practice for similarity join is to report the end-to-end time, which includes both pre-processing time (e.g., index building in EmbedJoin) and edit distance computation time. Therefore, we include the time for training the model and embedding the dataset in the results of CNN. For Gen50ks and Trec, the thresholds for similar string pairs are set as 150 and 40, respectively, following the EmbedJoin paper. For EmbedJoin and CNN, we report the time taken to reach a recall of 0.99.

The results show that EmbedJoin outperforms CNN on the Gen50ks dataset but CNN performs better than EmbedJoin on the Trec dataset. To investigate the reason, we decompose the running time of CNN into training time, embedding time and search time in Table 6. On the smaller Gen50ks dataset (with 50,000 items), CNN takes 160.1s, 8.6s and 48.8s for training, embedding and search, respectively (217.5s in total), while EmbedJoin takes 52.8s in total. This suggests that CNN performs poorly on Gen50ks because the dataset is small and the long training time cannot be amortized. On the larger Trec dataset (with 347,949 items), the training time (and embedding time) is negligible (only 5% of the total time) and CNN is 1.76x faster than EmbedJoin due to its high-quality embedding. Therefore, we believe CNN will have a bigger advantage over EmbedJoin on larger datasets. We tried to run the algorithms on the DBLP dataset but both PassJoin and EmbedJoin fail.

This set of experiments shows that CNN embeddings provide promising results for string similarity search. We believe more sophisticated designs can further improve performance, e.g., using multiple sets of independent embeddings to introduce diversity, combining with Euclidean similarity search methods such as vector quantization and proximity graphs, and incorporating the various pruning rules used in existing string similarity search work.

Figure 8. Time comparison for similarity join: (a) Gen50ks, (b) Trec
Dataset   CNN-Train  CNN-Embed  CNN-Search  EmbedJoin
Gen50ks   160.1      8.6        48.8        52.8
Trec      510.7      190.2      8924.6      16944.0
Table 6. Time decomposition for similarity join (in seconds)
Figure 9. Influence of the hyper-parameters on item-recall performance for the Enron dataset (best viewed in color): (a) epoch count, (b) output dimension, (c) number of layers, (d) number of kernels, (e) loss function, (f) pooling function

4.5. Influence of Model Parameters

We evaluate the influence of the hyper-parameters on the performance of CNN embedding in Figure 9. The dataset is Enron and we use the recall-item curve as the performance measure; a higher recall at the same number of returned items indicates better performance.

Figure 9(a) shows that the quality of the embedding improves quickly in the initial stage of training and stabilizes after 50 epochs, which suggests that CNN is easy to train. In Figure 9(b), we test the performance of CNN with different output dimensions. The results show that performance improves considerably when the output dimension increases from 8 to 32 but changes little afterwards. This suggests that a small output dimension is sufficient for CNN, whereas CGK and GRU need a large output dimension, which slows down distance computation and consumes a large amount of memory.

Figure 9(c) shows the performance of CNN when the number of convolutional layers increases from 8 to 12. The improvement in performance is marginal with more layers, so there is no need to use a large number of layers. This is favorable because a large number of layers makes training and inference inefficient. We show the performance with different numbers of convolution kernels per layer in Figure 9(d). Performance improves as the number of kernels increases to 8 but drops afterwards.

We report the performance of CNN with different loss functions in Figure 9(e). Recall that we use a combination of the triplet loss and the approximation error to train CNN. In Figure 9(e), Triplet Loss means using only the triplet loss while Pairwise Loss means using only the approximation error. The results show that combining the two loss terms performs better than using either one alone. The performance of maximum pooling and average pooling is shown in Figure 9(f); average pooling performs better than maximum pooling. Therefore, it would be interesting to extend our analysis of maximum pooling in Section 3 to more pooling methods.
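The combined objective described above can be sketched as follows for a single (anchor, positive, negative) triplet. The weighting factor `alpha` and the `margin` are hypothetical hyper-parameters for illustration, and the exact form of the loss used in the paper may differ:

```python
import numpy as np

def combined_loss(ea, ep, en, d_ap, d_an, alpha=1.0, margin=1.0):
    # ea, ep, en: embeddings of the anchor, positive, and negative strings.
    # d_ap, d_an: true edit distances anchor-positive and anchor-negative.
    l_ap = np.linalg.norm(ea - ep)
    l_an = np.linalg.norm(ea - en)
    # Triplet term: rank the positive closer to the anchor than the negative.
    triplet = max(0.0, l_ap - l_an + margin)
    # Approximation term: make embedding distances match edit distances.
    approx = (l_ap - d_ap) ** 2 + (l_an - d_an) ** 2
    return triplet + alpha * approx
```

The triplet term preserves the ordering of edit distances (which matters for nearest neighbor search), while the approximation term keeps the embedding distance numerically close to the edit distance (which matters for threshold search); the experiments in Figure 9(e) suggest that neither term alone suffices.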

5. Conclusions

In this paper, we proposed CNN-ED, a model that uses a convolutional neural network (CNN) to embed edit distance into Euclidean distance. A complete pipeline (including input preparation, loss function and sampling method) is formulated to train the model end to end, and theoretical analysis is conducted to justify choosing CNN as the model structure. Extensive experimental results show that CNN-ED outperforms existing edit distance embedding methods in terms of both accuracy and efficiency. Moreover, CNN-ED shows promising performance for edit distance similarity search and is robust to different hyper-parameter configurations. We believe that incorporating CNN embeddings into efficient string similarity search frameworks is a promising future direction.


  • A. Andoni, P. Indyk, T. Laarhoven, I. P. Razenshteyn, and L. Schmidt (2015) Practical and optimal LSH for angular distance. See DBLP:conf/nips/2015, pp. 1225–1233. External Links: Link Cited by: §3.1.
  • A. Backurs and P. Indyk (2015) Edit distance cannot be computed in strongly subquadratic time (unless SETH is false). See DBLP:conf/stoc/2015, pp. 51–58. External Links: Link, Document Cited by: §1.
  • R. J. Bayardo, Y. Ma, and R. Srikant (2007) Scaling up all pairs similarity search. See DBLP:conf/www/2007, pp. 131–140. External Links: Link, Document Cited by: §1, §2.1.
  • D. Chakraborty, E. Goldenberg, and M. Koucký (2016) Streaming algorithms for embedding and computing edit distance in the low distance regime. See DBLP:conf/stoc/2016, pp. 712–725. External Links: Link, Document Cited by: §1, §2.1, §2.2, §2.
  • S. Chaudhuri, V. Ganti, and R. Kaushik (2006) A primitive operator for similarity joins in data cleaning. See DBLP:conf/icde/2006, pp. 5. External Links: Link, Document Cited by: §2.1.
  • N. Courty, R. Flamary, and M. Ducoffe (2018) Learning wasserstein embeddings. See DBLP:conf/iclr/2018, External Links: Link Cited by: §1, footnote 1.
  • M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni (2004) Locality-sensitive hashing scheme based on p-stable distributions. See DBLP:conf/compgeom/2004, pp. 253–262. External Links: Link, Document Cited by: §3.1.
  • D. Deng, G. Li, and J. Feng (2014a) A pivotal prefix based filtering algorithm for string similarity search. See DBLP:conf/sigmod/2014, pp. 673–684. External Links: Link, Document Cited by: §1, §2.1.
  • D. Deng, G. Li, S. Hao, J. Wang, and J. Feng (2014b) MassJoin: A mapreduce-based method for scalable string similarity joins. See DBLP:conf/icde/2014, pp. 340–351. External Links: Link, Document Cited by: §2.1.
  • C. Fu, C. Xiang, C. Wang, and D. Cai (2019) Fast approximate nearest neighbor search with the navigating spreading-out graph. PVLDB 12 (5), pp. 461–474. External Links: Link, Document Cited by: §3.1.
  • T. Ge, K. He, Q. Ke, and J. Sun (2013) Optimized product quantization for approximate nearest neighbor search. See DBLP:conf/cvpr/2013, pp. 2946–2953. External Links: Link, Document Cited by: §3.1.
  • A. Hermans, L. Beyer, and B. Leibe (2017) In defense of the triplet loss for person re-identification. CoRR abs/1703.07737. External Links: Link, 1703.07737 Cited by: §3.1.
  • H. Jégou, M. Douze, and C. Schmid (2011) Product quantization for nearest neighbor search. TPAMI 33 (1), pp. 117–128. External Links: Link, Document Cited by: §3.1.
  • Y. Jiang, G. Li, J. Feng, and W. Li (2014) String similarity joins: an experimental evaluation. PVLDB 7 (8), pp. 625–636. External Links: Link, Document Cited by: §1.
  • C. Li, B. Wang, and X. Yang (2007) VGRAM: improving performance of approximate queries on string collections using variable-length grams. See DBLP:conf/vldb/2007, pp. 303–314. External Links: Link Cited by: §2.1.
  • G. Li, D. Deng, J. Wang, and J. Feng (2011) PASS-JOIN: A partition-based method for similarity joins. PVLDB 5 (3), pp. 253–264. External Links: Link, Document Cited by: §1, §2.1.
  • Y. A. Malkov and D. A. Yashunin (2016) Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. CoRR abs/1603.09320. External Links: Link, 1603.09320 Cited by: §3.1.
  • W. J. Masek and M. Paterson (1980) A faster algorithm computing string edit distances. J. Comput. Syst. Sci. 20 (1), pp. 18–31. External Links: Link, Document Cited by: §1.
  • R. Ostrovsky and Y. Rabani (2007) Low distortion embeddings for edit distance. J. ACM 54 (5), pp. 23. External Links: Link, Document Cited by: §1.
  • J. Qin, W. Wang, Y. Lu, C. Xiao, and X. Lin (2011a) Efficient exact edit similarity query processing with the asymmetric signature scheme. See DBLP:conf/sigmod/2011, pp. 1033–1044. External Links: Link, Document Cited by: §2.1.
  • J. Qin, W. Wang, Y. Lu, C. Xiao, and X. Lin (2011b) Efficient exact edit similarity query processing with the asymmetric signature scheme. See DBLP:conf/sigmod/2011, pp. 1033–1044. External Links: Link, Document Cited by: §1, §2.1.
  • J. Sun, Z. Shang, G. Li, Z. Bao, and D. Deng (2019) Balance-aware distributed string similarity-based query processing system. PVLDB 12 (9), pp. 961–974. External Links: Link, Document Cited by: §2.1.
  • J. Wang, G. Li, and J. Feng (2010) Trie-join: efficient trie-based string similarity joins with edit-distance constraints. PVLDB 3 (1), pp. 1219–1230. External Links: Link, Document Cited by: §2.1.
  • J. Wang, G. Li, and J. Feng (2012) Can we beat the prefix filtering?: an adaptive framework for similarity join and search. See DBLP:conf/sigmod/2012, pp. 85–96. External Links: Link, Document Cited by: §1, §2.1.
  • J. Wang, G. Li, D. Deng, Y. Zhang, and J. Feng (2015) Two birds with one stone: an efficient hierarchical framework for top-k and threshold-based string similarity search. See DBLP:conf/icde/2015, pp. 519–530. External Links: Link, Document Cited by: §1, §2.1.
  • J. Wang, H. T. Shen, J. Song, and J. Ji (2014) Hashing for similarity search: A survey. CoRR abs/1408.2927. External Links: Link, 1408.2927 Cited by: §3.1.
  • C. Xiao, W. Wang, X. Lin, and J. X. Yu (2008a) Efficient similarity joins for near duplicate detection. See DBLP:conf/www/2008, pp. 131–140. External Links: Link, Document Cited by: §2.1.
  • C. Xiao, W. Wang, and X. Lin (2008b) Ed-join: an efficient algorithm for similarity joins with edit distance constraints. PVLDB 1 (1), pp. 933–944. External Links: Link, Document Cited by: §1, §2.1.
  • M. Yu, G. Li, D. Deng, and J. Feng (2016) String similarity search and join: a survey. Frontiers Comput. Sci. 10 (3), pp. 399–417. External Links: Link, Document Cited by: §1.
  • H. Zhang and Q. Zhang (2017) EmbedJoin: efficient edit similarity joins via embeddings. See DBLP:conf/kdd/2017, pp. 585–594. External Links: Link, Document Cited by: §1, §1, §2.1, §4.4.
  • X. Zhang, Y. Yuan, and P. Indyk (2020) Neural embeddings for nearest neighbor search under edit distance. External Links: Link Cited by: §1, §2.3, §2, §4.1, §4.2.
  • Z. Zhang, M. Hadjieleftheriou, B. C. Ooi, and D. Srivastava (2010) Bed-tree: an all-purpose index structure for string similarity search based on edit distance. See DBLP:conf/sigmod/2010, pp. 915–926. External Links: Link, Document Cited by: §2.1.