Neural String Edit Distance
We propose the neural string edit distance model for string-pair classification and sequence generation based on learned string edit distance. We modify the original expectation-maximization learned edit distance algorithm into a differentiable loss function, allowing us to integrate it into a neural network that provides a contextual representation of the input. We test the method on cognate detection, transliteration, and grapheme-to-phoneme conversion. We show that we can trade off between performance and interpretability within a single framework. Using contextual representations, which are difficult to interpret, we can match the performance of state-of-the-art string-pair classification models. Using static embeddings and a minor modification of the loss function, we can force interpretability, at the cost of a drop in accuracy.
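To illustrate the core idea of a differentiable learned edit distance, the sketch below shows a soft edit-distance dynamic program in PyTorch: insertion, deletion, and substitution log-scores come from learned symbol embeddings, and the classic DP recursion is made differentiable by accumulating candidates with logsumexp. This is a minimal, hypothetical illustration of the general technique, not the paper's exact expectation-maximization formulation or architecture; all class and parameter names are invented for this example.

```python
import torch
import torch.nn as nn


class SoftEditDistance(nn.Module):
    """Differentiable (soft) string edit distance over learned symbol embeddings.

    Hypothetical sketch: edit-operation log-scores are parameterized by
    embeddings, and the edit-distance DP uses logsumexp instead of a hard
    min/max, so the resulting score is usable as a trainable loss.
    """

    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.ins = nn.Linear(dim, 1)        # insertion log-score for a target symbol
        self.dele = nn.Linear(dim, 1)       # deletion log-score for a source symbol
        self.sub = nn.Linear(2 * dim, 1)    # substitution log-score for a symbol pair

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        # src: (n,) source symbol ids, tgt: (m,) target symbol ids
        s, t = self.emb(src), self.emb(tgt)
        n, m = src.size(0), tgt.size(0)
        # alpha[i][j] = log-score of aligning src[:i] with tgt[:j]
        alpha = [[torch.tensor(float("-inf"))] * (m + 1) for _ in range(n + 1)]
        alpha[0][0] = torch.tensor(0.0)
        for i in range(n + 1):
            for j in range(m + 1):
                cands = []
                if i > 0:       # delete src[i-1]
                    cands.append(alpha[i - 1][j] + self.dele(s[i - 1]).squeeze())
                if j > 0:       # insert tgt[j-1]
                    cands.append(alpha[i][j - 1] + self.ins(t[j - 1]).squeeze())
                if i > 0 and j > 0:  # substitute src[i-1] -> tgt[j-1]
                    pair = torch.cat([s[i - 1], t[j - 1]], dim=-1)
                    cands.append(alpha[i - 1][j - 1] + self.sub(pair).squeeze())
                if cands:       # soft accumulation keeps the score differentiable
                    alpha[i][j] = torch.logsumexp(torch.stack(cands), dim=0)
        # Negative log-score of the string pair; can be minimized for matching pairs.
        return -alpha[n][m]


# Example usage (toy symbol ids):
# model = SoftEditDistance(vocab_size=30)
# loss = model(torch.tensor([3, 5, 7]), torch.tensor([3, 6, 7, 2]))
# loss.backward()
```

In this sketch the scoring functions look only at individual symbols (static embeddings); replacing them with outputs of a contextual encoder corresponds to the interpretability-versus-performance trade-off described above.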