Text matching is an important research area underlying several natural language processing (NLP) applications, including, but not limited to, information retrieval, natural language inference, question answering and paraphrase identification. In these applications, a model estimates the similarity or relation between two input text sequences, and two problems arise in the process. The first, common to many NLP tasks, is how to efficiently model or represent texts. The second, specific to the text matching task, is how to bridge the information gap between two text sequences of non-comparable lengths.
Text matching approaches have successfully introduced many encoder methods, or constructed hybrids of them, to represent texts. Although these representation methods have significantly advanced natural language processing and its downstream tasks, including text matching applications, they have limitations in transferring information from the inputs to the output representations. Some of them lose important information when handling a fairly long sequence of words, while others that focus on learning local features are inadequate for representing complex long-form documents. For text matching tasks, it is crucial that the text representations retain as much useful information of the input data as possible. The other problem of text matching is how to bridge the information gap between two text sequences whose lengths are of different scales, as in short-short, long-long, and short-long text matching. In all these settings, the core information is hard to extract from the texts, not only because of the text representation problem above, but also because of their different text structures.
Recently, interest has shifted toward mutual information (MI) maximization of representations across multiple domains, including computer vision and NLP. To efficiently model or represent both sides of text pairs in text matching, a natural idea is to train a representation-learning network to maximize the MI between text inputs and representation outputs before matching. However, MI is difficult to estimate, especially in high-dimensional and continuous representation spaces. Fortunately, recent theoretical breakthroughs have made it possible to effectively compute MI between high-dimensional input/output pairs of deep neural networks [1, 4]. Early attempts have also been made to apply MI maximization to NLP tasks such as text generation, as well as to other tasks such as cross-modal retrieval.
In this paper, we introduce the deep mutual information estimation technique known as Deep InfoMax (DIM) into the text matching task. We design a deep MI estimation module to maximize the MI between input text pairs and their learned high-level representations. We start from the RE2 text matching neural network model and design a wrapping-mode training architecture. In our architecture, we take the whole text matching network as the encoder, while the MI between the inputs and the outputs is estimated and maximized so that the learned representations retain as much information of the input data as possible. Moreover, maximizing MI between the input data and the encoder output (global MI) is often insufficient for learning useful representations. A recent method instead maximizes the local MI between the representation and local regions of the input (e.g., patches rather than the complete text), where the same representation is encouraged to have high MI with all the patches.
Thus, to preserve the complex structural information and to handle the structural difficulty of text matching on varying-length texts, we split input texts into segments as local features, and then maximize the average MI between the high-level representation and the local patches of the input text. Our proposed method works effectively and efficiently according to the experimental results. The main contributions of this paper are summarized as follows:
We propose a deep neural network with deep mutual information estimation to solve problems of text matching. To the best of our knowledge, this work is the first attempt to apply mutual information neural estimation to improving both representation quality and the handling of diverse text structures in text matching tasks.
We integrate global and local mutual information maximization for texts to help preserve the input information in the output representations. Compared to large representation models, our model has fewer parameters and does not rely on pretraining on external data, which is meaningful across different text matching tasks.
Experimental results on four benchmark datasets across four different tasks are all on par with or above the state-of-the-art methods, which demonstrates the effectiveness of our method on text matching tasks.
2 Related Work
Embedding models learn embedding vectors as text representations at different text structure levels, such as words, sentences, paragraphs and documents. They have been introduced into several text matching models in which typical similarity metrics, such as the Word Mover's Distance (WMD), are employed to compute the matching scores of two text vectors. Besides, some latent variable models have been introduced into text matching tasks as well. They extract hidden topics from texts, and the texts can then be compared based on their hidden topic representations. Recently, deep neural networks have become the most popular models for learning better text representations in NLP tasks, including convolutional neural networks (CNNs), recurrent neural networks (RNNs) and Long Short-Term Memory architectures (LSTMs). Accordingly, many text matching applications take these models as text encoders in their matching processes: some rank short text pairs using a CNN, which preserves local information in the text representation; others treat texts as sequences of words and take an RNN as the text encoder for matching sentences and long-form texts; tree-structured LSTMs show superiority over a sequential LSTM for representing sentence meaning; and LSTMs have been introduced to construct better answer representations in question-answer matching. Nowadays, the state-of-the-art representation methods focus on contextual token representation, i.e., training an encoder to represent words in their specific context, such as BERT and XLNet.
In text matching tasks, a comparably long text may lose its local information after being encoded as a fixed-sized representation. Some previous studies exploit attention mechanisms to distill important words from sentences, but valuable information can still be diluted across the many sentences of long-form texts. On the other hand, the representation of a short text suffers from sparsity and may lose the global information of word co-occurrence. To address this, some previous studies employ alignment architectures to enrich the mutual information between the sequence pair being matched, and introduce augmented residual connections so that the encoder retains as much input information as possible in its outputs. Another line of work focuses on question/answer (QA) matching and adopts a generative adversarial network (GAN) to enhance mutual information by rewriting questions in QA tasks. MI can quantify the dependence of two random variables and measure non-linear statistical dependencies between them. MINE implements MI estimation in high-dimensional and continuous scenarios and effectively computes MI between high-dimensional input/output pairs of deep neural networks. Deep InfoMax (DIM) formalizes this further and makes it possible to prioritize global or local information and to tune the suitability of learned representations for classification or reconstruction-style tasks. Inspired by DIM, we introduce deep mutual information estimation and maximization into our deep neural model for more general text matching tasks.
3 Method

We adopt the neural architecture for text matching introduced in RE2 and apply an MI estimation and maximization method to the representation part of this base text matching architecture. We intend to maximize the mutual information of texts in the matching process; however, an encoder that passes information from only some parts of the input does not increase the MI with the other parts. Therefore, our model introduces DIM to leverage local regions of the input for better text representation: the same representation is encouraged to have high MI with all patches, and this mechanism exerts influence on all input data shared across patches. Besides, DIM has the representational capacity of deep neural networks, making it well suited to mutual information estimation for high-dimensional data, including text data.
For the text matching task, our model employs the local DIM framework to estimate and maximize MI. The overall framework of our proposed architecture is presented in Figure 1. In the DIM network on the left-hand side of Figure 1, multiple feature maps, treated as local features, are extracted from one input text by our feature extraction method (Section 3.1). The local features reflect structural aspects of the text data, e.g., spatial locality. For the global feature, as shown on the right-hand side of Figure 1, we take the whole text matching neural network as the DIM encoder (Section 3.2) of our model, and we take the high-level output representation from its pooling layer as the global feature vector for DIM. Here the DIM network shares the high-level representation with the text matching network output, because the base text matching network and the MI estimator optimize their losses for the same purpose and require similar computations. To apply the DIM model to the base text matching neural network, in the following subsections we first propose our feature extraction method for text data. Then we describe the base text matching neural network as our DIM encoder. Finally, we propose our DIM estimator and discriminator (Section 3.3) for MI maximization in text matching.
3.1 Feature Extraction for Varying-Length Text
First, we generate feature maps for an input text. In this step, we convert the text into multiple tensors of the same shape and generate fixed-sized feature maps for it using the DIM method. What we need to consider is how to maintain as much useful information of the source text as possible in these feature maps. Therefore, according to the different length situations of the text pairs in a dataset, we propose two generation modes of feature maps for short text data and long text data respectively, named word mode (TIM-W, Figure 2) and segment mode (TIM-S, Figure 3).
The TIM-W mode is mainly used to generate feature maps for short texts. We observe that in some universally-used short text datasets, including SNLI, SciTail, Quora and WikiQA, texts mostly fall in the range of tens of words. For these cases, we propose TIM-W to extract feature maps based on words and their embeddings, retaining more semantic relevance information of a short text. We convert the short text into a list of word vectors, in which each element is a high-dimensional (e.g., 300-dimensional) vector obtained from simple Word2Vec embeddings. The shape of the feature map (D × D) is fixed when the DIM network is initialized in advance. We then group the word vectors into feature maps, and pad the last feature map with zero vectors if the last group of vectors is not large enough to fill it. The TIM-W mode is shown in Figure 2.
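As a concrete illustration, the grouping-and-padding step of TIM-W can be sketched in plain Python. The function name, the per-map word count `d`, and the toy 4-dimensional embeddings are illustrative assumptions; the actual model uses 300-dimensional pretrained vectors and a fixed D × D map shape.

```python
def tim_w_feature_maps(word_vectors, d):
    """Group a short text's word vectors into fixed-shape local feature
    maps of d vectors each (a hypothetical TIM-W sketch).  The last map
    is padded with zero vectors when the word count is not a multiple of d."""
    emb_dim = len(word_vectors[0])
    zero_vec = [0.0] * emb_dim
    maps = []
    for start in range(0, len(word_vectors), d):
        chunk = list(word_vectors[start:start + d])
        chunk += [zero_vec] * (d - len(chunk))  # zero-pad the last map
        maps.append(chunk)
    return maps

# Toy example: 7 words with 4-dimensional embeddings, 3 words per map.
vectors = [[float(i)] * 4 for i in range(1, 8)]
maps = tim_w_feature_maps(vectors, d=3)
print(len(maps))   # 3 feature maps
print(maps[2][1])  # a zero-padding vector: [0.0, 0.0, 0.0, 0.0]
```

Each map has the same shape regardless of text length, which is what allows a single DIM discriminator to score all local features.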
For a long text dataset, using a higher-dimensional word embedding to encode a long text causes high space/time complexity, while its texts already carry much richer information than short texts. So we propose TIM-S to generate fixed-size feature maps for long texts in our text matching model. First, we represent each word of a long text by its index in a relevant vocabulary. Then we divide the index sequence into segments of the same fixed length according to the preset segment size (M), where each segment contains M word indexes and the last segment is padded with zeros at its end. Then we group the segments into feature maps whose shapes are fixed at D × D. The segment size and feature shape are set when the DIM network is initialized in advance. If the last group of segments is not enough to fill the last feature map, we pad it with zero-element segments. Finally, the long input text is represented as multiple fixed-size local feature maps. The process is shown in Figure 3.
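The two-level packing of TIM-S (word indexes into segments, segments into feature maps) can be sketched similarly. Here each feature map is assumed to hold `d` segments of `m` word indexes; the exact D × D packing used in the paper may differ, so treat this as an illustrative sketch.

```python
def tim_s_feature_maps(token_ids, m, d):
    """Pack a long text's vocabulary indexes into segments of m indexes,
    then group segments into feature maps of d segments each (TIM-S sketch).
    Both the last segment and the last map are zero-padded."""
    segments = []
    for start in range(0, len(token_ids), m):
        seg = token_ids[start:start + m]
        seg = seg + [0] * (m - len(seg))  # pad the last segment with zeros
        segments.append(seg)
    zero_seg = [0] * m
    maps = []
    for start in range(0, len(segments), d):
        block = segments[start:start + d]
        block += [zero_seg] * (d - len(block))  # pad the last map
        maps.append(block)
    return maps

# Toy example: a 25-token text, segment size m=4, d=3 segments per map.
maps = tim_s_feature_maps(list(range(1, 26)), m=4, d=3)
print(len(maps))   # 3 maps: 25 tokens -> 7 segments -> 3 maps
print(maps[2][0])  # the 7th segment, padded: [25, 0, 0, 0]
```

Working on integer indexes rather than embedding vectors is what keeps TIM-S cheap on long texts.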
3.2 Text Matching Neural Layers
For the global feature, we take the whole text matching neural network as the DIM encoder and use its output as the high-level representation. We adopt RE2 as the base text matching network, which achieved state-of-the-art results on four well-studied datasets across three different text matching tasks. RE2 leverages previously aligned features (residual vectors), point-wise features (embedding vectors) and contextual features (encoded vectors) to maintain useful information of the texts as it passes through the network. The detailed architecture of RE2 is illustrated in Figure 4. An embedding layer first embeds the discrete words. The three layers following the embedding layer, encoding (CNN), alignment and fusion, process the sequences consecutively and are treated as one block in RE2. Multiple blocks are connected by an augmented version of residual connections. In the end, a pooling layer aggregates the sequential representations into final vectors. More details can be found in the original literature.
As the high-level global feature output of the DIM encoder, the final vectors are passed into the DIM discriminator network for training. Simultaneously, the final vectors are also processed by a prediction layer to give the final text matching prediction. We keep RE2's original network architecture and add the DIM network on top of the base text matching network to help maximize the useful information in the output representations used in the final matching prediction, which improves performance on the text matching tasks and keeps the comparison experiments fair.
3.3 MI Maximization for Text Matching
In our model, we define an MI estimator and employ a discriminator to optimize the output representation $E(X)$ of the input text data $X$ by simultaneously estimating and maximizing the MI $\mathcal{I}(X; E(X))$ on both sides of the comparison.
DIM Estimator. To estimate MI, an appropriate lower bound on the KL-divergence is necessary. Before DIM, MINE proposed a lower bound to the MI based on the Donsker-Varadhan representation (DV, Donsker & Varadhan, 1983) of the KL-divergence, shown in the following form:

$$\mathcal{I}(X; E(X)) \;\ge\; \widehat{\mathcal{I}}^{(\mathrm{DV})}_{\theta}(X; E(X)) \;=\; \mathbb{E}_{\mathbb{P}_{X E(X)}}\big[T_{\theta}(x, e)\big] \;-\; \log \mathbb{E}_{\mathbb{P}_X \otimes \mathbb{P}_{E(X)}}\big[e^{T_{\theta}(x, e)}\big],$$

where $T_{\theta}$ is a discriminator function modeled by a neural network with parameters $\theta$. Based on the MINE estimator and the DIM local framework, we present our DIM estimator, which maximizes the average estimated MI over the local features of the text data by optimizing the following local objective:

$$(\hat{\omega}, \hat{\psi})_{L} \;=\; \operatorname*{arg\,max}_{\omega, \psi} \; \frac{1}{K} \sum_{i=1}^{K} \widehat{\mathcal{I}}_{\omega}\big(C^{(i)}(x);\, E_{\psi}(x)\big),$$

where $C^{(i)}(x)$ denotes the $i$-th local feature map converted from the input text by our feature extraction, $E_{\psi}(x)$ is the learned high-level representation output by the pooling layer of the base text matching neural network RE2 with parameters $\psi$, and $\omega$ denotes the parameters of the DIM discriminator function modeled by a neural network. The subscript $L$ denotes "local", for the DIM local framework. With this estimator in place, we next describe its DIM discriminator for MI maximization.
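Once discriminator scores are available, the DV lower bound reduces to a simple computation. The sketch below estimates the bound from scores on "joint" (matched local/global) pairs and "marginal" (shuffled) pairs; in the model these scores come from the neural discriminator, which is assumed and not shown here.

```python
import math

def dv_lower_bound(joint_scores, marginal_scores):
    """Donsker-Varadhan lower bound on MI:
       E_P[T] - log E_N[exp(T)],
    where joint_scores are discriminator outputs on matched
    (local feature, global representation) pairs and marginal_scores
    are outputs on mismatched (shuffled) pairs."""
    e_joint = sum(joint_scores) / len(joint_scores)
    e_marginal = math.log(
        sum(math.exp(s) for s in marginal_scores) / len(marginal_scores)
    )
    return e_joint - e_marginal

# A discriminator that scores matched pairs higher than shuffled ones
# yields a positive MI estimate; an uninformative one yields 0.
print(dv_lower_bound([2.0, 1.5, 1.8], [0.1, -0.2, 0.0]))  # positive
print(dv_lower_bound([0.0, 0.0], [0.0, 0.0]))             # 0.0
```

Maximizing this quantity with respect to the discriminator and encoder parameters is exactly what tightens the bound toward the true MI.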
DIM Discriminator. Given the high-level output $E_{\psi}(x)$ from the text matching network and the feature maps $C^{(i)}(x)$ extracted from the same input text, we concatenate this global feature vector with its lower-level feature maps at every location, with each feature map flattened in advance. The "real" input to our discriminator is thus the pair

$$\big(C^{(i)}(x),\, E_{\psi}(x)\big),$$

while "fake" inputs are generated by combining the global feature vector with local feature maps coming from a different text $x'$:

$$\big(C^{(i)}(x'),\, E_{\psi}(x)\big).$$
With the "real" and the "fake" feature maps, we adopt the local DIM concat-and-convolve network architecture, a convnet with two 512-unit hidden layers, as the DIM discriminator for our text matching model. The process is shown in Figure 5. The "real" and "fake" feature maps pass through the discriminator to obtain their scores. The MI loss for the input source text $a$ and target text $b$ of a text matching task is calculated as $\mathcal{L}_{MI} = \mathcal{L}_{MI}^{(a)} + \mathcal{L}_{MI}^{(b)}$. The overall loss function is then defined as $\mathcal{L} = \mathcal{L}_{match} + \mathcal{L}_{MI}$, where $\mathcal{L}_{match}$ is the loss calculated by the base text matching neural network.
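Constructing the "real" and "fake" discriminator inputs amounts to pairing each global vector with local features from the same versus a different text. A minimal batch-level sketch follows; the one-position roll used for negative sampling is an assumption (any derangement of the batch would serve).

```python
def make_discriminator_pairs(local_feats, global_feats):
    """Build 'real' pairs (local_i, global_i) and 'fake' pairs that
    combine each global vector with the local features of the next
    text in the batch."""
    n = len(global_feats)
    real = [(local_feats[i], global_feats[i]) for i in range(n)]
    fake = [(local_feats[(i + 1) % n], global_feats[i]) for i in range(n)]
    return real, fake

# Toy batch of three texts.
real, fake = make_discriminator_pairs(["loc_a", "loc_b", "loc_c"],
                                      ["glob_a", "glob_b", "glob_c"])
print(real[0])  # ('loc_a', 'glob_a') -- locals and global from the same text
print(fake[0])  # ('loc_b', 'glob_a') -- locals from a different text
```

The discriminator scores on `real` feed the first DV term and the scores on `fake` feed the log-sum-exp term.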
4 Experiments

4.1 Experimental Setup
4.1.1 Benchmarks and Metrics
We evaluated our proposed TIM-W and TIM-S models on four well-studied NLP tasks and a news dataset, as follows:
Natural Language Inference. Stanford Natural Language Inference111https://nlp.stanford.edu/projects/snli
(SNLI) is a benchmark dataset for natural language inference. In this task, the two input sentences are asymmetrical, one as “premise” and the other as “hypothesis”. We follow the setup of SNLI’s original introduction in training and testing. Accuracy is used as the evaluation metric for this dataset.
Science Entailment. SciTail222http://data.allenai.org/scitail is an entailment classification dataset constructed from science questions and answers. This dataset contains only two types of labels, entailment and neutral. We use the original dataset partition. It contains 27k examples in total: 10k examples have entailment labels and the remaining 17k are labeled as neutral. Accuracy is used as the evaluation metric for this dataset.
Paraphrase Identification. This task is to decide whether one question is a paraphrase of the other in a pair of texts. We use the Quora dataset with 400k question pairs collected from the Quora website. The partition of the dataset is the same as in previous work, and accuracy is used as the evaluation metric.
Question Answering. For this task, we employ the WikiQA dataset333https://www.microsoft.com/en-us/research/publication/wikiqa-a-challenge-dataset-for-open-domain-question-answering, a retrieval-based question answering dataset based on Wikipedia. It contains questions and their candidate answers, with binary labels indicating whether a candidate sentence is a correct answer to the question it belongs to. Mean average precision (MAP) and mean reciprocal rank (MRR) are used as the evaluation metrics for this task.
News Articles Title Content Match. We employ the Harvard news dataset, News Articles444https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/GMFCTR, for this matching task. It contains news articles; we separate the title of each article from its content and perform data augmentation by randomly pairing titles with article contents. Most news article contents have 1000 to 5000 words. We report matching accuracy.
4.1.2 Baselines and Implementations
We implement our model based on the open-source RE2 implementation and train it on Nvidia 1080Ti GPUs. Sentences in the datasets are all tokenized and converted to lower case. We also filter out meaningless symbols and emojis before embedding. The maximum sequence length is not limited. Word embeddings are initialized with 840B-300d GloVe word vectors and fixed during the training process.
4.2 Experimental Results
The experimental results are described below:
Natural Language Inference. Results on SNLI are shown in the first column of Table 1. The performance of previous methods is quite close, and we slightly outperform the state-of-the-art. Our method performs well on the language inference task without any task-specific modifications.
Science Entailment. Results on the SciTail dataset are shown in the second column of Table 1. Our method improves the baseline model by 0.8% and achieves a result 0.1% above the state-of-the-art, which indicates that our method is highly effective on this task.
Paraphrase Identification. Results on Quora are shown in the third column of Table 1. Our method also lifts the accuracy of the baseline model by 0.4% and achieves higher results than all previous methods.
Question Answering. Results on WikiQA are shown in the last column of Table 1. Our method makes small improvements on this IR task, which indicates that it also fits IR tasks well.
News Article Title Content Match. Results on the Harvard news dataset are shown in Table 2.
| Model | SNLI (Acc.) | SciTail (Acc.) | Quora (Acc.) | WikiQA (MAP) | WikiQA (MRR) |
| TIM-W (ours) | 88.9 | 86.8 | 89.6 | 0.7516 | 0.7685 |
| TIM-S (ours) | 88.3 | 86.2 | 87.8 | 0.7181 | 0.7387 |
| Model (News Articles) | Accuracy |
| TIM-S (ours): D=12, M=10 | 96.59 |
| TIM-S (ours): D=20, M=10 | 95.83 |
| TIM-S (ours): D=20, M=20 | 95.45 |
| TIM-S (ours): D=6, M=10 | 95.11 |
| TIM-S (ours): D=6, M=5 | 94.70 |
In all, our proposed method achieves performance on par with or better than the state-of-the-art on four well-studied datasets across three different tasks.
Analysis of Results. TIM-W achieves better accuracy on SNLI, Quora, SciTail and WikiQA because these datasets consist of short texts, for which feature extraction at the word level is appropriate. For feature extraction at the segment level on longer texts, TIM-S suits better, according to the experiments on the News Articles dataset. Moreover, without introducing high-dimensional pretrained word embeddings, TIM-S is significantly faster on long texts than TIM-W.
Influence of D and M. D is the shape size of the local feature maps in both TIM-W and TIM-S, and M is the segment size, which only needs to be set in TIM-S. First, for TIM-W as used on SNLI, Quora, SciTail and WikiQA, we tune D from 1 to 3. Texts in these four datasets are relatively short, and D should not exceed the word count of a short text; otherwise, the short text is converted into just one feature map, which causes a loss of structural information. Second, in the experiments under TIM-S mode for the content field of the News dataset, we tune the segment size (M words) and the shape (D) of the fixed-size feature maps, so that each local feature map groups segments of M word indexes into a D × D block. When we enlarge both D and M, each feature map block has more zeros padded, so it becomes more difficult to maximize the useful local information from the sparse feature maps. But when both D and M are set small, TIM-S effectively degenerates into TIM-W, which is not suitable for long texts: as the shape of the feature map in TIM-S becomes smaller, more local structural information is lost in the MI maximization process. The influence of D and M is illustrated in Table 2.
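The padding effect described above can be quantified. Under an illustrative packing (maps of `d` segments, segments of `m` indexes, both zero-padded; the paper's exact D × D layout may differ), the fraction of padded zeros grows with larger D and M:

```python
import math

def padding_fraction(n_words, m, d):
    """Fraction of zero entries introduced by padding when a text of
    n_words indexes is packed into segments of m indexes and maps of
    d segments (illustrative TIM-S packing)."""
    n_segments = math.ceil(n_words / m)
    n_maps = math.ceil(n_segments / d)
    capacity = n_maps * d * m  # total index slots across all maps
    return (capacity - n_words) / capacity

# For a 1000-word article, larger settings pad more zeros:
print(round(padding_fraction(1000, m=10, d=12), 3))  # 0.074
print(round(padding_fraction(1000, m=20, d=20), 3))  # 0.167
```

This matches the observed trend in Table 2: the D=20, M=20 setting both pads more zeros and scores lower than D=12, M=10.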
Case Study. Aligning tokens between two texts is a key stage of the baseline model and achieves remarkable improvements on text matching. However, incorrect concentration on text positions during the finite number of alignment operations (three) may cause prediction failures. For example, in a pair from WikiQA, "who is basketball star antoine walker" and "Antoine Devon Walker (born August 12, 1976) is an American former professional basketball player", a middle name appears inside the player's name. In another pair, "what day is st. patricks day" and "Saint Patrick's Day or the Feast of Saint Patrick (the Day of the Festival of Patrick) is a cultural and religious holiday celebrated on 17 March", the person's name appears at multiple positions in one text. Compared to the baseline, MI maximization with powerful neural networks helps to model local semantics and improves text matching predictions more efficiently; our model obtains better predictions on these cases. Since richer features can bring better MI estimation results, we will investigate better feature extraction methods with MI neural estimation for NLP tasks in future work.
5 Conclusion

In this paper, we propose a new neural architecture with deep mutual information estimation to learn more effective and higher-quality text representations for text matching tasks. By maximizing the mutual information between each input and output pair, our method retains more useful information in the learned high-level representations. Moreover, we split texts into segments and treat these segments as local features, which helps preserve the complex structural information and handles the structural difficulty of text matching on varying-length texts. We then leverage the local mutual information maximization method to solve the information loss problem caused by complex text structures in text matching frameworks. The experimental results on various text matching tasks demonstrate the effectiveness of our model.
References

- (2018) Mutual information neural estimation. In ICML, J. Dy and A. Krause (Eds.), Vol. 80, pp. 531–540. Cited by: §1, §2.
- (2017) Enhanced LSTM for natural language inference. In ACL, pp. 1657–1668. Cited by: Table 1.
- (2018) Document similarity for texts of varying lengths via hidden topics. In ACL, pp. 2341–2351. Cited by: §2.
- (2019) Learning deep representations by mutual information estimation and maximization. In ICLR. Cited by: §1, §2.
- (2018) SciTaiL: a textual entailment dataset from science question answering. In AAAI. Cited by: Table 1.
- (2015) From word embeddings to document distances. In ICML, pp. 957–966. Cited by: §2.
- (2014) Distributed representations of sentences and documents. In ICML, pp. 1188–1196. Cited by: §2.
- (2018) Stochastic answer networks for natural language inference. ArXiv abs/1804.07888. Cited by: Table 1.
- (2018) Improved text matching by enhancing mutual information. In AAAI. Cited by: §2.
- (2013) Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111–3119. Cited by: §2.
- (2016) Key-value memory networks for directly reading documents. In EMNLP, pp. 1400–1409. Cited by: Table 1.
- (2016) Siamese recurrent architectures for learning sentence similarity. In AAAI, pp. 2786–2792. Cited by: §2.
- A decomposable attention model for natural language inference. In EMNLP, pp. 2249–2255. Cited by: Table 1.
- (2014) GloVe: global vectors for word representation. In EMNLP, pp. 1532–1543. Cited by: §4.1.2.
- Enhancing variational autoencoders with mutual information neural estimation for text generation. In EMNLP-IJCNLP, pp. 4045–4055. Cited by: §1.
- (2015) Learning to rank short text pairs with convolutional deep neural networks. In SIGIR, pp. 373–382. Cited by: §2.
- (2017) Inter-weighted alignment network for sentence pair modeling. In EMNLP, pp. 1179–1189. Cited by: Table 1.
- (2015) Improved semantic representations from tree-structured long short-term memory networks. In ACL, pp. 1556–1566. Cited by: §2.
- (2018) Multiway attention networks for modeling sentence pairs. In IJCAI, pp. 4411–4417. Cited by: Table 1.
- (2016) Improved representation learning for question answer matching. In ACL, pp. 464–473. Cited by: §2.
- (2018) Co-stack residual affinity networks with multi-level attention refinement for matching text sequences. In EMNLP. Cited by: Table 1.
- (2018) Compare, compress and propagate: enhancing neural architectures with alignment factorization for natural language inference. In EMNLP, pp. 1565–1575. Cited by: Table 1.
- (2018) Hermitian co-attention networks for text matching in asymmetrical domains. In IJCAI, pp. 4425–4431. Cited by: Table 1.
- (2017) Neural paraphrase identification of questions with noisy pretraining. In SCLeM, pp. 142–147. Cited by: Table 1.
- (2017) A compare-aggregate model for matching text sequences. In ICLR. Cited by: Table 1.
- (2017) Bilateral multi-perspective matching for natural language sentences. In IJCAI, pp. 4144–4150. Cited by: §4.1.1, Table 1.
- (2019) Learning disentangled representation for cross-modal retrieval with deep mutual information estimation. In MM, pp. 1712–1720. Cited by: §1.
- (2019) Simple and effective text matching with richer alignment features. In ACL. Cited by: §1, §2, §3, §4.1.2, Table 1, Table 2.
- (2016) ABCNN: attention-based convolutional neural network for modeling sentence pairs. TACL 4, pp. 259–272. Cited by: Table 1.