Answer retrieval aims to find the most aligned answer from a large set of candidates given a question Ahmad et al. (2019); Abbasiyantaeb and Momtazi (2020). It has received increasing attention from the NLP and information retrieval communities Yoon et al. (2019); Chang et al. (2020). Sentence-level answer retrieval approaches rely on learning vector representations (i.e., embeddings) of questions and answers from pairs of question-answer texts. Both the question-answer alignment and the question/answer semantics are expected to be preserved in the representations. In other words, the question/answer embeddings must reflect not only the semantics of their own texts but also the fact that the texts are aligned as pairs.
|Question (1): What three stadiums did the NFL decide between for the game?|
|Question (2): What three cities did the NFL consider for the game of Super Bowl 50?|
|… Question (17): How many sites did the NFL narrow down Super Bowl 50’s location to?|
|Answer: The league eventually narrowed the bids to three sites: New Orleans Mercedes-Benz Superdome, Miami Sun Life Stadium, and the San Francisco Bay Area’s Levi’s Stadium.|
One popular scheme, "Dual-Encoders" (also known as the "Siamese network" Triantafillou et al. (2017); Das et al. (2016)), has two separate encoders to generate question and answer embeddings and a predictor to match the two embedding vectors Cer et al. (2018); Yang et al. (2019). Unfortunately, it has been shown to be difficult to train deep encoders with the weak signal of matching prediction Bowman et al. (2015). There has thus been growing interest in developing deep generative models such as variational autoencoders (VAEs) and generative adversarial networks (GANs) for learning text embeddings Xu et al. (2017); Xie and Ma (2019). As shown in Figure 1(b), the "Dual-VAEs" scheme has two VAEs, one for the question and the other for the answer Shen et al. (2018). It uses the tasks of generating plausible question and answer texts from the latent spaces to preserve semantics in the latent representations.
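To make the dual-encoder scheme concrete, here is a minimal sketch (all names are hypothetical; the toy "encoder" is a mean-pool over random word vectors standing in for a learned network, and the predictor is cosine similarity):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy vocabulary of word vectors; a stand-in for learned embeddings.
vocab = {w: rng.standard_normal(8) for w in
         "what three stadiums did the nfl decide between for game".split()}

def encode(tokens):
    """Toy sentence encoder: mean-pool word vectors. In the dual-encoder
    scheme the question and answer sides each have their own encoder."""
    return np.mean([vocab[t] for t in tokens], axis=0)

def match_score(q_emb, a_emb):
    """Predictor: cosine similarity between the two embedding vectors."""
    return float(q_emb @ a_emb /
                 (np.linalg.norm(q_emb) * np.linalg.norm(a_emb)))

def retrieve(question, candidates):
    """Return the index of the best-scoring candidate answer."""
    q = encode(question)
    return int(np.argmax([match_score(q, encode(a)) for a in candidates]))
```

Retrieval then reduces to scoring a question embedding against every candidate answer embedding and taking the argmax.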
Although Dual-VAEs is trained jointly on question-to-question and answer-to-answer reconstruction, the question and answer embeddings can only preserve the isolated semantics of each side. In the model, the Q-A alignment and the Q/A semantics are learned too separately to capture the aligned semantics (as mentioned at the end of the first paragraph) between question and answer. Learning the alignment from the weak Q-A matching signal, though now based on generatable embeddings, can lead to confusing results when (1) different questions have similar answers and (2) similar questions have different answers. Table 1 shows an example in SQuAD: 17 different questions share the same sentence-level answer.
Our idea is that if aligned semantics were preserved, the embeddings of a question would be able to generate its answer, and the embeddings of an answer would be able to generate the corresponding question. In this work, we propose to cross variational auto-encoders, shown in Figure 1(c), by reconstructing answers from question embeddings and reconstructing questions from answer embeddings. Note that compared with Dual-VAEs, the encoders do not change but decoders work across the question and answer semantics.
Experiments show that our method improves MRR and R@1 over the state-of-the-art method by 1.06% and 2.44% on SQuAD, respectively. On a subset of the data where any answer has at least 10 different aligned questions, our method improves MRR and R@1 by 1.46% and 3.65%, respectively.
2 Related Work
Answer retrieval (AR) is the task of obtaining the answer to a given question by finding the most relevant answer among multiple candidate answers Abbasiyantaeb and Momtazi (2020). Another popular task on the SQuAD dataset is machine reading comprehension (MRC), which asks the machine to answer questions based on one given context Liu et al. (2019). In this section, we review existing work related to answer retrieval and variational autoencoders.
Answer Retrieval. Answer retrieval has been widely studied with information retrieval techniques and has received increasing attention in recent years with deep neural network approaches. Recent works have proposed different deep neural models for text-based QA that compare two segments of text and produce a similarity score. Document-level retrieval Chen et al. (2017); Wu et al. (2018); Seo et al. (2018, 2019) has been studied on many public datasets, including SQuAD Rajpurkar et al. (2016), MS MARCO Nguyen et al. (2016), and NQ Kwiatkowski et al. (2019). ReQA proposed to investigate sentence-level retrieval and provided strong baselines over a reproducible construction of a retrieval evaluation set from the SQuAD data Ahmad et al. (2019). We also focus on sentence-level answer retrieval.
Variational Autoencoders. A VAE consists of encoder and generator networks, which encode a data example into a latent representation and generate samples from the latent space, respectively Kingma and Welling (2013). Recent advances in neural variational inference have made deep latent-variable models effective for natural language processing tasks Bowman et al. (2016); Kingma et al. (2016); Hu et al. (2017a, b); Miao et al. (2016). The general idea is to map a sentence into a continuous latent variable, or code, via an inference network (encoder), and then use the generative network (decoder) to reconstruct the input sentence conditioned on samples from the latent code (via its posterior distribution). Recent work in cross-modal generation adopted cross-aligned VAEs to jointly learn representative features from multiple modalities Liu et al. (2017); Shen et al. (2017); Schonfeld et al. (2019). DeConv-LVM Shen et al. (2018) and VAR-Siamese Deudon (2018) are most relevant to our work; both adopt Dual-VAEs models (see Figure 1(b)) for two-text-sequence matching tasks. In our work, we propose Cross-VAEs for question-answer alignment to enhance QA matching performance.
3 Proposed Method
Problem Definition. Suppose we have a question set $\mathcal{Q}$ and an answer set $\mathcal{A}$. Each question and answer has only one sentence. Each question-answer pair can be represented as a triplet $(q, a, y)$, where $y \in \{0, 1\}$ is a binary variable indicating whether $q$ and $a$ are aligned. Therefore, the sentence-level retrieval task can be cast as a matching problem: given a question $q$ and a list of answer candidates $\{a_1, \dots, a_m\}$, our goal is to predict $y$ for the input question paired with each answer candidate $a_i$.
3.1 Crossing Variational Autoencoder
Learning cross-domain reconstructions under the generative assumption is essentially learning the conditional distributions $p_{\theta}(a \mid z_q)$ and $p_{\theta}(q \mid z_a)$, where the two continuous latent variables $z_q$ and $z_a$ are independently sampled from $q_{\phi}(z_q \mid q)$ and $q_{\phi}(z_a \mid a)$:

$p(a \mid q) = \mathbb{E}_{q_{\phi}(z_q \mid q)}\big[\,p_{\theta}(a \mid z_q)\,\big], \qquad p(q \mid a) = \mathbb{E}_{q_{\phi}(z_a \mid a)}\big[\,p_{\theta}(q \mid z_a)\,\big].$

The question-answer pair matching can be represented as the conditional distribution $p(y \mid z_q, z_a)$ over the latent variables $z_q$ and $z_a$.
Objectives. We denote $E_q(\cdot)$ and $E_a(\cdot)$ as the question and answer encoders that infer the latent variables $z_q$ and $z_a$ from a given question-answer pair $(q, a)$, and $D_q(\cdot)$ and $D_a(\cdot)$ as two different decoders that generate the corresponding question $\hat{q}$ and answer $\hat{a}$ from the latent variables $z_a$ and $z_q$, respectively. Then we have the cross reconstruction loss:

$\mathcal{L}_{rec} = -\,\mathbb{E}_{q_{\phi}(z_q \mid q)}\big[\log p_{\theta}(a \mid z_q)\big] - \mathbb{E}_{q_{\phi}(z_a \mid a)}\big[\log p_{\theta}(q \mid z_a)\big].$
The variational autoencoder Kingma and Welling (2013) imposes a KL-divergence regularizer to align both posteriors $q_{\phi}(z_q \mid q)$ and $q_{\phi}(z_a \mid a)$ with the prior $p(z)$:

$\mathcal{L}_{KL} = \mathrm{KL}\big(q_{\phi}(z_q \mid q) \,\|\, p(z)\big) + \mathrm{KL}\big(q_{\phi}(z_a \mid a) \,\|\, p(z)\big),$

where $\theta$ and $\phi$ are all parameters to be optimized. Besides, we have the question-answer matching loss from $p(y \mid z_q, z_a)$ as:

$\mathcal{L}_{match} = -\log p\big(y \mid f(z_q, z_a)\big),$
where $f(\cdot, \cdot)$ is a matching function with its own parameters to be optimized. Finally, we obtain the overall objective function to be minimized:

$\mathcal{L} = \alpha\,\mathcal{L}_{rec} + \beta\,\mathcal{L}_{KL} + \gamma\,\mathcal{L}_{match},$

where $\alpha$, $\beta$, and $\gamma$ are introduced as hyper-parameters to control the importance of each task.
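The overall objective above can be sketched in a few lines (a minimal illustration, not the paper's implementation; it assumes a Gaussian posterior and a standard normal prior, for which the KL term has a closed form):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ) in closed form, summed over dims."""
    return float(0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar))

def total_loss(cross_recon, kl_q, kl_a, match, alpha=1.0, beta=1.0, gamma=1.0):
    """Weighted sum of the cross-reconstruction, KL, and matching terms;
    alpha, beta, gamma mirror the task-weight hyper-parameters."""
    return alpha * cross_recon + beta * (kl_q + kl_a) + gamma * match
```

With a standard normal posterior (zero mean, unit variance) the KL term vanishes, which is a quick sanity check on the closed form.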
3.2 Model Implementation
We use a Gated Recurrent Unit (GRU) Cho et al. (2014) as the encoder to learn contextual word embeddings. Question and answer embeddings are reduced by a weighted sum through multi-hop self-attention Lin et al. (2017) over the GRU hidden states, and then fed into two linear transitions to obtain the mean and standard deviation as $\mu$ and $\sigma$.
Dual Decoders. We adopt another Gated Recurrent Unit (GRU) for generating token sequences conditioned on the latent variables $z_q$ and $z_a$.
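The crossed wiring and the reparameterization step can be sketched as follows (a toy numpy version with hypothetical linear maps in place of the GRU encoder/decoders; dimensions are shrunk from the paper's 768/512):

```python
import numpy as np

rng = np.random.default_rng(1)
D_HID, D_LAT = 16, 8  # small stand-ins for 768 hidden / 512 latent dims

# Hypothetical linear transitions from the pooled encoder state to the
# mean and log-variance of the latent Gaussian.
W_mu = rng.standard_normal((D_HID, D_LAT))
W_logvar = rng.standard_normal((D_HID, D_LAT)) * 0.1

def reparameterize(h, eps=None):
    """z = mu + sigma * eps: the VAE reparameterization trick."""
    mu = h @ W_mu
    logvar = h @ W_logvar
    if eps is None:
        eps = rng.standard_normal(D_LAT)
    return mu + np.exp(0.5 * logvar) * eps, mu

# Crossed wiring: the question latent feeds the *answer* decoder and the
# answer latent feeds the *question* decoder (placeholder linear decoders).
W_dec_a = rng.standard_normal((D_LAT, D_HID))
W_dec_q = rng.standard_normal((D_LAT, D_HID))

h_q = rng.standard_normal(D_HID)  # pooled question encoder state
h_a = rng.standard_normal(D_HID)  # pooled answer encoder state
z_q, mu_q = reparameterize(h_q)
z_a, mu_a = reparameterize(h_a)
a_hat = z_q @ W_dec_a  # reconstruct the answer from the question latent
q_hat = z_a @ W_dec_q  # reconstruct the question from the answer latent
```

Note that relative to Dual-VAEs only the decoder inputs are swapped; the encoders are unchanged.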
4 Experiments
4.1 Dataset
Our experiments were conducted on SQuAD 1.1 Rajpurkar et al. (2016), which has over 100,000 questions composed to be answerable by text from Wikipedia documents. Each question has one corresponding answer sentence extracted from a Wikipedia document. Since the test set is not publicly available, we partition the dataset into 79,554 (training) / 7,801 (dev) / 10,539 (test) examples.
4.2 Baselines
InferSent Conneau et al. (2017). It is not explicitly designed for answer retrieval, but it produces results on semantic tasks without requiring additional fine-tuning.
USE-QA Yang et al. (2019). It is based on the Universal Sentence Encoder Cer et al. (2018), but trained with multilingual QA retrieval and two other tasks: translation ranking and natural language inference. The training corpus contains over a billion question-answer pairs from popular online forums and QA websites (e.g., Reddit).
QA-Lite. Like USE-QA, this model is also trained over online forum data based on transformer. The main differences are reduction in width and depth of model layers, and sub-word vocabulary size.
BERT Devlin et al. (2019). BERT first concatenates the question and answer into one text sequence, then passes it through a 12-layer BERT and takes the [CLS] vector as input to a binary classifier.
SenBERT Reimers and Gurevych (2019). It consists of twin structured BERT-like encoders to represent the question and answer sentences, and then applies a similarity measure at the top layer.
4.3 Experimental Settings
We initialize each word with a 768-dimensional BERT token embedding vector. If a word is not in the vocabulary, we use the average of its sub-word embedding vectors in the vocabulary. The number of hidden units in the GRU encoders is set to 768. All decoders are multi-layer perceptrons (MLPs) with one hidden layer of 768 units. The latent embedding size is 512. The model is trained for 100 epochs by SGD using the Adam optimizer Kingma and Ba (2014). For the KL-divergence, we use a KL cost annealing scheme Bowman et al. (2016), which lets the VAE learn useful representations before they are smoothed out. We increase the weight of the KL-divergence at a fixed rate per epoch until it reaches 1. We set the learning rate to 1e-5. The model is implemented in PyTorch.
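The KL cost annealing schedule can be sketched as a simple linear ramp (the per-epoch rate is not specified in the text, so the value below is purely illustrative):

```python
def kl_weight(epoch, rate):
    """Linear KL cost annealing: grow the KL weight by `rate` per epoch,
    capped at 1.0. The actual annealing rate is a hyper-parameter."""
    return min(1.0, epoch * rate)

# e.g. with a hypothetical rate of 0.1 the weight reaches 1.0 at epoch 10
schedule = [kl_weight(e, 0.1) for e in range(12)]
```

Starting the KL weight near zero lets the reconstruction terms shape the latent space before the prior regularization takes full effect.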
We compare our proposed cross variational autoencoder (Cross-VAEs) with the dual-encoder model and the dual variational autoencoder (Dual-VAEs). For fair comparison, all models use GRUs as encoders and decoders, and all other hyperparameters are kept the same.
Evaluation Metrics. The models are evaluated on retrieving and ranking answers to questions using two metrics: mean reciprocal rank (MRR) and recall at K (R@K). R@K is the percentage of correct answers found in the top K out of all the relevant answers. MRR is the average of the reciprocal ranks of the results for a set of queries.
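Both metrics are straightforward to compute from the 1-based rank of the correct answer per query, as in this short sketch:

```python
def mrr(ranks):
    """Mean reciprocal rank; `ranks` holds the 1-based rank of the
    correct answer for each query."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def recall_at_k(ranks, k):
    """Fraction of queries whose correct answer appears in the top K."""
    return sum(1 for r in ranks if r <= k) / len(ranks)
```

For example, if three queries rank their correct answers at positions 1, 2, and 4, the MRR is (1 + 1/2 + 1/4) / 3 and R@2 is 2/3.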
Comparing performance with baselines. As shown in Table 2, the two BERT-based models do not perform well, which indicates that fine-tuning BERT may not be a good choice for the answer retrieval task due to its unrelated pre-training tasks (e.g., masked language modeling). In contrast, using BERT token embeddings performs better in our retrieval task. Our proposed method outperforms all baseline methods: compared with USE-QA, it improves MRR and R@1 by +1.06% and +2.44% on SQuAD, respectively. In addition, the dual variational autoencoder (Dual-VAEs) does not bring much improvement on the answer retrieval task because its embeddings only preserve the isolated semantics of questions and answers. Our proposed crossing variational autoencoder (Cross-VAEs) outperforms the dual-encoder model and the dual variational autoencoder, improving MRR by +1.23%/+0.81% and R@1 by +0.90%/+0.59%, respectively.
Analyzing performance on a sub-dataset. We extract a subset of SQuAD in which every answer has at least eight different questions. As shown in Table 3, our proposed cross variational autoencoder (Cross-VAEs) outperforms the baseline methods on this subset, improving MRR and R@1 by +1.46% and +3.65% over USE-QA. Cross-VAEs significantly improves performance when an answer has multiple aligned questions. Additionally, the SSE of our method is smaller than that of USE-QA; therefore, the questions of the same answer are closer in the latent space.
4.4 Case Study
Figures 2(a) and 2(b) visualize the embeddings of 14 questions that share the same answer. We observe that the crossing variational autoencoder (Cross-VAEs) better captures the aligned semantics between questions and answers, making the latent representations of questions and answers more distinguishable. Figure 2(c) shows two example questions and the corresponding answers produced by USE-QA and Cross-VAEs. We observe that Cross-VAEs can better distinguish similar answers even though they all share several words with the question.
5 Conclusion
Given a candidate question, answer retrieval aims to find the most similar answer text among candidate answer texts. In this paper, we proposed to cross variational autoencoders by generating questions from aligned answers and generating answers from aligned questions. Experiments show that our method improves MRR and R@1 over the best baseline by 1.06% and 2.44% on SQuAD, respectively.
Acknowledgments
We thank Drs. Nicholas Fuller, Sinem Guven, and Ruchi Mahindru for their constructive comments and suggestions. This project was partially supported by National Science Foundation (NSF) IIS-1849816 and a Notre Dame Global Gateway Faculty Research Award.
References
- Text-based question answering from information retrieval and deep neural network perspectives: a survey. arXiv preprint arXiv:2002.06612.
- ReQA: an evaluation for end-to-end answer retrieval models. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering.
- A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.
- Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning.
- Universal sentence encoder. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations.
- Pre-training tasks for embedding-based large-scale retrieval. In International Conference on Learning Representations (ICLR).
- Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
- Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).
- Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.
- Together we stand: siamese networks for similar question retrieval. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
- Learning semantic similarity in a continuous space. In Advances in Neural Information Processing Systems (NeurIPS), pp. 986–997.
- BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).
- Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning (ICML).
- On unifying deep generative models. In International Conference on Learning Representations (ICLR).
- Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR).
- Auto-encoding variational Bayes. In International Conference on Learning Representations (ICLR).
- Improved variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems (NeurIPS).
- Natural Questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics.
- A structured self-attentive sentence embedding. In International Conference on Learning Representations (ICLR).
- Unsupervised image-to-image translation networks. In Advances in Neural Information Processing Systems (NeurIPS).
- Neural machine reading comprehension: methods and trends. Applied Sciences.
- Neural variational inference for text processing. In International Conference on Machine Learning (ICML).
- MS MARCO: a human-generated machine reading comprehension dataset.
- SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing.
- Sentence-BERT: sentence embeddings using siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
- Generalized zero- and few-shot learning via aligned variational autoencoders.
- Phrase-indexed question answering: a new challenge for scalable document comprehension. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
- Real-time open-domain question answering with dense-sparse phrase index. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
- Deconvolutional latent-variable model for text sequence matching. In Thirty-Second AAAI Conference on Artificial Intelligence.
- Style transfer from non-parallel text by cross-alignment. In Advances in Neural Information Processing Systems (NeurIPS).
- Few-shot learning through an information retrieval lens. In Advances in Neural Information Processing Systems (NeurIPS).
- Word mover's embedding: from word2vec to document embedding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP).
- Dual-view variational autoencoders for semi-supervised text matching. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI).
- Variational autoencoder for semi-supervised text classification. In Thirty-First AAAI Conference on Artificial Intelligence.
- Multilingual universal sentence encoder for semantic retrieval. arXiv preprint arXiv:1907.04307.
- A compare-aggregate model with latent clustering for answer selection. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management.