Sponsored search is an interplay of three entities. The advertisers provide business advertisements and bid on keywords to target their ads. The search engine provides the platform where the advertisers’ ads can be shown to the user along with organic results. The user submits queries to the search engine and interacts with ads.
Historically, search engines provided only an exact match type between queries and keywords. In this scenario, an ad can be shown only when a user's query exactly matches one of the keywords the advertiser bids on. This puts a great burden on advertisers, since they have to carefully select hundreds of thousands of relevant keywords for their business.
Modern sponsored search platforms usually supply an advanced match type to relieve advertisers of this heavy work of choosing keywords. In this scenario, keywords are no longer required to be identical to queries, only semantically relevant to them. Owing to its simplicity and flexibility, the advanced match type has become more and more popular among advertisers, and it now accounts for a large part of search engine revenue.
Under the advanced match type, query-keyword matching is implemented as a standard information retrieval process: keyword candidates are retrieved from an inverted index, and then a ⟨query, keyword⟩ relevance model (Hillard et al., 2010) is used to discard keywords of low relevance. One of the fundamental problems in information retrieval is the semantic gap between queries and documents. In the sponsored search scenario, the role of the document is played by the keyword. Most ad keywords are short texts, which increases ambiguity and makes the gap even more serious.
Most sponsored search systems use query rewriting techniques (Malekian et al., 2008) to alleviate this problem, in which several query rewrites are used as alternative queries to retrieve keywords. As illustrated in Figure 1, the keyword retrieval process usually comprises three stages:
1. A query is rewritten into several sub-queries.
2. Each sub-query is submitted to a Boolean retrieval engine to get its corresponding candidate list; all of the candidate lists are merged into one candidate set.
3. The candidate set is filtered by a relevance judgment model to produce the final keyword set.
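The three-stage pipeline above can be sketched as follows. All function names, the toy inverted index, and the overlap-based relevance score are hypothetical stand-ins, not the production system's API:

```python
# Illustrative sketch of the three-stage retrieval pipeline.
# rewrite / boolean_retrieve / relevance_score are hypothetical stand-ins.

def rewrite(query):
    # Stage 1: produce sub-queries (here a trivial identity rewriter).
    return [query, query.lower()]

def boolean_retrieve(sub_query, index):
    # Stage 2: look up candidate keywords from an inverted index.
    candidates = set()
    for token in sub_query.split():
        candidates |= index.get(token, set())
    return candidates

def relevance_score(query, keyword):
    # Stage 3: toy relevance model -- token overlap (Jaccard) ratio.
    q, k = set(query.split()), set(keyword.split())
    return len(q & k) / max(len(q | k), 1)

def retrieve_keywords(query, index, threshold=0.3):
    merged = set()
    for sq in rewrite(query):
        merged |= boolean_retrieve(sq, index)   # merge all candidate lists
    return {kw for kw in merged if relevance_score(query, kw) >= threshold}

index = {"running": {"running shoes", "running watch"},
         "shoes": {"running shoes", "cheap shoes"}}
print(retrieve_keywords("running shoes", index))
```

Each stage can independently lose good candidates or admit bad ones, which is exactly the error-accumulation issue discussed next.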
A big disadvantage of this framework is the accumulation of errors. Each sub-module makes its own trade-off between precision and recall, and between quality and latency. Along the retrieval path these errors accumulate gradually, finally resulting in low precision and recall.
Monolingual statistical machine translation has been used as a typical method for generating query rewrites (Riezler and Liu, 2010; Gao et al., 2012; Jones et al., 2006). With the fast development of deep neural networks, end-to-end neural machine translation (NMT) (Bahdanau et al., 2015) has achieved translation performance comparable to state-of-the-art phrase-based systems (Koehn et al., 2003). Recently, He et al. (2016) applied a sequence-to-sequence LSTM architecture to the rewriting model.
Compared with statistical machine translation (SMT), one great advantage of NMT is that the whole system can be easily and completely constructed by learning from data without human involvement. Another major advantage is that gating mechanisms (such as the LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Cho et al., 2014)) and attention techniques (Bahdanau et al., 2015) have proved effective at modeling the long-distance dependencies and complicated alignment relations in the translation process, which pose a serious challenge for SMT (Wang et al., 2017).
Inspired by the aforementioned works, we propose a new retrieval method named the End-to-end Generative Retrieval Method (EGRM) to narrow the query-keyword semantic gap; it uses NMT to generate keywords directly from queries. As illustrated in Figure 2, the EGRM is implemented as a supplementary branch to the existing retrieval system. To address the error accumulation problem, the query rewriting and relevance judging phases are skipped.
Figure 3 is a schematic diagram of the EGRM model structure. A standard encoder-decoder neural machine translation architecture is deployed: a query is encoded by a multi-layer residual recurrent neural network (RNN) encoder into a list of hidden states, and then a multi-layer residual RNN decoder decodes the target keyword token by token based on these hidden states and the previously generated tokens. During inference, a beam search strategy is used to approximately generate the top-scoring translations.
To carry out this idea in a real industrial environment is a challenging task.
The biggest challenge is the efficiency of NMT decoding. Standard beam search can translate only about ten words per second (Hu et al., 2015), which can hardly meet the requirements of commercial systems: the average response time for a commercial sponsored search system is about 200 ms.
Secondly, general machine translation is an open-target-domain problem, with no restrictions on the generated sentence. Decoding in the sponsored search scenario, however, is a constrained closed-target-domain problem, where only keywords committed by advertisers are permitted during generation.
Thirdly, general machine translation focuses on generating one best translation for a source input. In our scenario, we want the translation model to generate as many unretrieved keywords as possible, where unretrieved keywords are those that cannot be retrieved by the current keyword retrieval system. Retrieval in sponsored search resembles link prediction in a bipartite graph: queries and keywords are the two kinds of nodes, and retrieval relationships form the edges. The more new edges we establish, the more ad supply we can provide for the downstream auction queue. As a supplement to the current retrieval system, the EGRM framework is encouraged to trigger more unretrieved keywords.
Our key contributions in this work are the following:
An end-to-end generative retrieval method is introduced into a sponsored search engine, skipping the query rewriting and relevance judgment models. This framework has been successfully deployed in Baidu's commercial search engine, where it has contributed a revenue improvement of more than 10%. To our knowledge, this is the first published work applying NMT as a generative retrieval model in a sponsored search engine. We hope it sheds light on the further design of sponsored retrieval systems and on NMT applications in industry.
A Trie-based pruning technique is introduced into the beam search, which solves the constrained target domain problem.
Self-normalization combined with Trie-based dynamic pruning dramatically reduces the decoding time, yielding a speedup of more than 20 times.
We carefully select organic click log results to encourage the NMT to generate more unretrieved keywords.
2. Related Work
Machine translation is a popular way to alleviate the semantic gap in the NLP domain. With a parallel corpus, machine translation can learn the underlying word alignment between target words and source words; with monolingual parallel data, semantic synonymy can be detected. Basically, there are two kinds of applications of machine translation in information retrieval. The first uses machine translation as a discriminative model to evaluate ⟨query, doc⟩ relevance: given a query $Q$ and a document $D$, the translation probability $P(Q|D)$ or $P(D|Q)$ is used as a feature to boost the calculation of query-document relevance (Yin et al., 2016; Gao et al., 2010). Hillard et al. (2010) applied this idea to calculating commercial query-ad relevance. The second uses machine translation as a generative model to directly generate relevant candidates. This idea has been widely used in query rewriting: Riezler and Liu (2010), Gao et al. (2012), and Jones et al. (2006) treated query rewriting as a statistical machine translation problem with monolingual training data. Recently, He et al. (2016) proposed a sequence-to-sequence deep learning framework to study query rewriting.
The most related work to ours is the paper recently published by Lee et al. (2018), which used conditional GAN to generate keywords from queries. There are several critical points that make our work different from theirs:
The target domain in their translation setting is not closed, so the generated sentence might not be a valid keyword.
Unlike their approach, we do not include the commercial click log in our training data. This allows the NMT to generate more keywords not covered by the existing system.
Our work concentrates on addressing the latency impact of deploying the generative model in a real commercial system, whereas they showed no experimental results in an industrial environment.
Although NMT gives us a nice and simple end-to-end way to deploy a state-of-the-art machine translation system, its decoding efficiency is still challenging. The standard beam search algorithm implemented by Bahdanau et al. (2015) reduces the search space from exponential size to polynomial size, and can translate about ten words per second. However, this speed is still far from meeting the requirements of commercial online retrieval systems. Hu et al. (2015) built a priority queue to further reduce the search space, and also introduced a constrained softmax operation that uses a phrase-based translation system to generate constrained word candidates. Since many unnecessary hypotheses are removed, computational efficiency is greatly improved.
In the following formulas, we use bold lower case to denote vectors (e.g., $\mathbf{h}$), capital letters to denote sequences (e.g., $X$), calligraphic letters to denote sets (e.g., $\mathcal{K}$), and lower case to denote individual tokens in a sequence (e.g., $x_i$). $y_0^{t}$ denotes the token sequence $(y_0, y_1, \ldots, y_t)$, where $y_0$ is a special beginning-of-sentence symbol prepended to every target keyword.
Let $(X, Y)$ be a ⟨query, keyword⟩ pair, where $X = (x_1, \ldots, x_m)$ is the token sequence of the source query and $Y = (y_1, \ldots, y_n)$ is the token sequence of the target keyword. From the probabilistic perspective, machine translation is equivalent to maximizing the log-likelihood of the conditional probability of sequence $Y$ given a source query $X$, i.e., $\log P(Y|X)$, which can be decomposed into factors:
$$\log P(Y|X) = \sum_{t=1}^{n} \log P(y_t \mid y_0^{t-1}, X). \qquad (1)$$
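The chain-rule factorization above can be checked numerically: the sequence log-likelihood is the sum of per-token conditional log-probabilities. The probabilities below are made-up illustration values, not model outputs:

```python
import math

# Toy check of the chain-rule factorization of log P(Y|X).
step_probs = [0.6, 0.5, 0.9]   # P(y_t | y_0^{t-1}, X) for t = 1..3

log_likelihood = sum(math.log(p) for p in step_probs)
sequence_prob = math.exp(log_likelihood)

# Summing log-probabilities equals multiplying probabilities.
assert abs(sequence_prob - 0.6 * 0.5 * 0.9) < 1e-12
```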
Our model follows the common sequence-to-sequence encoder-decoder framework (Sutskever et al., 2014) with attention (Bahdanau et al., 2015). Under this framework, an encoder reads the input query $X = (x_1, \ldots, x_m)$ and encodes its meaning into a list of hidden vectors:
$$H = (h_1, h_2, \ldots, h_m),$$
where $h_t$ is a hidden state at time $t$. In our experiment, the encoder is mainly implemented by an RNN:
$$h_t = f_{\mathrm{enc}}(x_t, h_{t-1}).$$
The decoder is trained to predict the probability $P(y_t \mid y_0^{t-1}, X)$ of the next token $y_t$ given the hidden states $H$ and all the previously predicted tokens $y_0^{t-1}$.
During inference, target tokens are decoded one by one based on this distribution, until a special end-of-sentence symbol (⟨e⟩) is generated.
In order to focus on different parts of the source query during decoding, an attention mechanism (Bahdanau et al., 2015) is introduced to connect the decoder hidden states and the encoder hidden states. Let $s_{t-1}$ be the decoder output from the last decoding time step $t-1$, and let $c_t$ be the attention context for the current time step $t$, calculated according to the following formulas:
$$e_{t,i} = a(s_{t-1}, h_i), \qquad \alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_{j=1}^{m} \exp(e_{t,j})}, \qquad c_t = \sum_{i=1}^{m} \alpha_{t,i} h_i,$$
where $a$ can be implemented as a dot product or a feed-forward network, and $h_i$ is the encoder hidden state vector at time step $i$.
The RNN decoding phase is computed as follows:
$$s_t = f_{\mathrm{dec}}(s_{t-1}, y_{t-1}, c_t), \qquad z_{t,w} = \phi_w(s_t), \qquad P(y_t = w \mid y_0^{t-1}, X) = \frac{\exp(z_{t,w})}{\sum_{w' \in V} \exp(z_{t,w'})}, \qquad (6)$$
where $z_{t,w}$ is the unnormalized energy score of choosing $y_t$ to be word $w$, and $V$ is the output vocabulary.
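One attention-plus-softmax decoding step can be traced numerically in a minimal sketch (dot-product score function). The encoder states, decoder state, output weights, and the tiny three-word vocabulary are all made-up values, not trained parameters:

```python
import math

# Minimal numeric sketch of one attention + softmax decoding step.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

encoder_states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # h_1 .. h_3
s_prev = [0.5, 1.5]                                     # decoder state s_{t-1}

# e_{t,i} = a(s_{t-1}, h_i);  alpha = softmax(e);  c_t = sum_i alpha_i * h_i
energies = [dot(s_prev, h) for h in encoder_states]
alphas = softmax(energies)
context = [sum(a * h[d] for a, h in zip(alphas, encoder_states))
           for d in range(2)]

# Output layer: unnormalized energies z_{t,w} over a toy 3-word vocabulary,
# normalized by a softmax to give P(y_t = w | y_0^{t-1}, X).
W = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # one row per vocabulary word
logits = [dot(row, context) for row in W]
probs = softmax(logits)
```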
4. Selecting training data: Difference Oriented
As a complementary branch to the main retrieval system, linking the underlying unretrieved but relevant query-keyword pairs is our major concern. We hope that most keywords generated by the EGRM are unretrieved ones, especially considering that the decoding phase takes a lot of time.
Click logs are used as the parallel corpus for training the NMT. Typically, there are two kinds of click logs in a commercial search engine: the organic click log and the sponsored ads click log. The sponsored ads log provides commercial ⟨query, keyword⟩ click pairs, which are also the current retrieval system's feedback. Using this feedback-looped log as training data would not generate much difference, since maximum likelihood estimation would make the top keywords the same as those in the training data. The organic click log provides us with natural ⟨query, title⟩ click pairs. The vast difference between organic and paid search results makes it possible for the NMT to generate more keywords different from the currently retrieved ones.
5. Decoding efficiently into a closed set
One challenge in applying machine translation to the keyword retrieval task is that our target space is a restricted, fixed set of submitted keywords, whereas in general translation the target space is unconstrained, meaning any possible sentence might be generated.
There are several possible ways to mitigate this problem. First, we might generate as many candidates as possible and then pick out the real keywords. However, this is not applicable in a low-latency industrial environment, since decoding many candidates would cost a lot of time.
Second, we might use ⟨query, keyword⟩ data from the commercial click log as our training data. Translation is essentially a conditional language model, and a language model trained with ⟨query, keyword⟩ data should guide the decoder toward generating real keywords. However, as pointed out in the previous section, this would not induce much difference from our current retrieval system.
In this paper, we devise a novel pruning technique in beam search called Trie-based pruning to fix this problem.
A prefix tree for the constrained keyword set is built before the decoding phase. First of all, each keyword in the set is tokenized; then we use these token lists to build a prefix tree keyed by tokens, as illustrated in Figure 4.
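Building such a token-keyed prefix tree can be sketched as follows. The keyword strings, the whitespace tokenizer, and the `END` marker are illustrative assumptions, not the production implementation (which is in C++):

```python
# Sketch of building a token-keyed prefix tree over a tokenized keyword set.

END = "</s>"  # marks that the path so far is a complete keyword

def build_trie(keywords):
    root = {}
    for kw in keywords:
        node = root
        for token in kw.split():     # tokenization: whitespace, for the sketch
            node = node.setdefault(token, {})
        node[END] = {}               # a complete keyword ends here
    return root

trie = build_trie(["cheap flight tickets", "cheap flights", "flight status"])
print(sorted(trie.keys()))           # tokens available at the first layer
```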
5.1. Trie-based pruning within a beam search
Suppose we are doing a beam search of size $B$. At the $t$-th decoding step, $t-1$ tokens have been generated and $B$ hypotheses are conserved. For each hypothesis $y_0^{t-1}$, the model infers a conditional token probability $P(y_t \mid y_0^{t-1}, X)$. With the prefix tree, we can get the set of valid suffix tokens directly following the trie path $y_0^{t-1}$; only the valid suffixes are kept, and all other tokens are pruned away. Figure 5 shows the whole pruning process. With the Trie-based pruning technique, all generated sentences are valid keywords, which greatly improves efficiency.
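The pruning step can be sketched as follows: given the tokens generated so far, only suffix tokens that continue a valid keyword path survive, and everything else is pruned before scoring. The trie layout and toy keywords are illustrative, not the production 295-million-keyword tree:

```python
# Sketch of Trie-based pruning at one decoding step.

END = "</s>"

def build_trie(keywords):
    root = {}
    for kw in keywords:
        node = root
        for token in kw.split():
            node = node.setdefault(token, {})
        node[END] = {}
    return root

def valid_suffixes(trie, prefix_tokens):
    # Walk the trie along the hypothesis; the children of the reached node
    # are exactly the tokens allowed at the next decoding step.
    node = trie
    for tok in prefix_tokens:
        if tok not in node:
            return set()             # prefix leaves the trie: dead hypothesis
        node = node[tok]
    return set(node.keys())

trie = build_trie(["cheap flight tickets", "cheap flights"])
# Hypothesis so far: ["cheap"] -- only "flight" / "flights" may follow.
print(valid_suffixes(trie, ["cheap"]))
```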
Another important feature of Trie-based pruning is that only a small, limited number of tokens in the large vocabulary need to be evaluated. Table 1 shows the average number of suffix tokens at each layer of the prefix tree, which is built for 295 million keywords. Going from top to bottom, the suffix token number decreases quickly, which makes it possible to gain a great speedup with the Trie-based pruning technique.
It is well known that one serious performance bottleneck at the inference stage is the computation of the softmax denominator, i.e., $\sum_{w' \in V} \exp(z_{t,w'})$ in Equation 6, as it involves a summation over the entire output vocabulary. Various approaches have been proposed to address this problem (Ruder, 2016). Inspired by balanced binary trees, Morin and Bengio (2005) proposed replacing the flat softmax layer with a hierarchical tree. Recently, Grave et al. (2017) came up with adaptive softmax for efficient computation on GPUs, which handles frequent and infrequent words separately with different hidden state sizes. Another kind of approach is sampling-based: Bengio et al. (2003) proposed importance sampling to reduce the computation, and Mikolov et al. (2013) used negative sampling to address the problem. More sophisticated methods like noise contrastive estimation (Mnih and Teh, 2012) are also available.
Following Devlin et al. (2014), we use the self-normalization trick to speed up decoding. Specifically, during training an explicit regularization term $\alpha \sum_t (\log Z_t)^2$, where $Z_t$ is the softmax normalizer, is added to the original likelihood loss in Equation 1 to encourage the normalizer to stay as close to 1 as possible.
When decoding with a self-normalized model, the costly step of calculating the denominator is avoided; we only have to compute the numerator $\exp(z_{t,w})$.
Furthermore, combined with the prefix tree, only a small number of numerators need to be calculated: we can predict just the valid suffix words conditioned on the current output path, which saves much more time.
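Combining self-normalization with the trie restriction can be sketched as follows: since training drives the normalizer toward 1, the unnormalized energy is used directly as a log-probability, and only trie-valid suffix words are scored at all. The energy values and the word set are made up for illustration:

```python
# Sketch of self-normalized scoring restricted to trie-valid suffixes.
# With log Z trained toward 0, z_{t,w} serves directly as log P(y_t = w | ...),
# so no softmax denominator is computed.

def score_hypothesis(prefix_logprob, energies, valid_words):
    # energies: word -> unnormalized energy; evaluated only for valid suffixes.
    return {w: prefix_logprob + energies[w] for w in valid_words}

energies = {"flight": -0.2, "flights": -1.1}   # made-up energies
scores = score_hypothesis(prefix_logprob=-0.5, energies=energies,
                          valid_words={"flight", "flights"})
best = max(scores, key=scores.get)
print(best, scores[best])
```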
5.3. Drop inferior hypotheses on the fly
Another useful trick in our implementation is to remove inferior hypotheses on the fly. Generally, a likelihood threshold is set up to filter the final generated keywords at the end of decoding; this threshold can also be used inside the decoding process. As we decode a new token onto the current hypothesis, its likelihood is multiplied by another probability factor, so the full likelihood decreases monotonically as decoding proceeds. Based on this observation, if a hypothesis's likelihood is already lower than the given threshold, we do not expand it further. This trick yields more qualified keywords (with likelihood above the threshold) in the final hypothesis set. Combined with Trie-based pruning, the total decoding time is also decreased.
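The on-the-fly drop can be sketched as follows. Because each step can only lower the log-likelihood, a hypothesis that has already fallen below the final threshold can never recover, so it is pruned immediately. The threshold and hypothesis likelihoods are made-up illustration values:

```python
import math

# Sketch of dropping inferior hypotheses on the fly during beam search.

LOG_THRESHOLD = math.log(0.01)   # final keyword-acceptance threshold

hypotheses = [
    (["cheap", "flights"], math.log(0.20)),    # still above the threshold
    (["cheap", "hotels"], math.log(0.005)),    # already below: cannot recover
]

# Any hypothesis below the threshold is not expanded at the next step.
survivors = [(toks, lp) for toks, lp in hypotheses if lp >= LOG_THRESHOLD]
print(len(survivors))
```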
5.4. Full Algorithm
The full algorithm is shown as in Algorithm 1.
5.5. An online-offline mixing architecture
Large commercial search engines report ever-growing query volumes, leading to a tremendous computation load on sponsored ads. It is well known that search queries are highly skewed and exhibit a power-law distribution (Spink et al., 2001; Petersen et al., 2016): in a fixed time period, the most popular queries compose the head and torso of the curve; in other words, approximately 20% of queries occupy 80% of the query volume. This property has motivated cache designs over search results. Inspired by this idea, we designed the online-offline mixing architecture shown in Figure 6.
Under this framework, the query volume is divided into two parts: frequent queries and infrequent queries. For frequent queries, the generated keywords are computed completely offline, where enormous computing resources can be used; in our experiment, we deployed a complex model with a 4-layer LSTM encoder and a 4-layer LSTM decoder. For infrequent queries, keywords are generated online, where latency is strictly restricted; in this scenario, we implemented a simple model with a single-layer GRU encoder and a single-layer GRU decoder. This mixed framework helps us save more than 70% of CPU resources.
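The routing logic of the mixed architecture can be sketched as follows; the offline table contents and the stand-in online model are hypothetical:

```python
# Sketch of online-offline mixing: head queries hit a precomputed offline
# table; the long tail falls back to the lightweight online model.

offline_table = {
    "cheap flights": ["cheap flight tickets", "discount flights"],
}

def online_model(query):
    # Stand-in for the one-layer GRU online decoder.
    return [query]

def generate_keywords(query):
    if query in offline_table:          # head query: offline, heavy model
        return offline_table[query]
    return online_model(query)          # tail query: online, light model

print(generate_keywords("cheap flights"))
print(generate_keywords("rare long tail query"))
```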
6. Experiments
In this section, we conduct experiments to show the performance of our proposed EGRM framework.
6.1. Training Data Set
As mentioned in Section 4, in order to encourage the NMT to generate more unretrieved keywords, we include the organic user click log instead of the commercial user click log in our training data, the latter being the feedback of the current commercial retrieval system. 749 million query-title pairs are sampled from one month of Baidu's user click log. Titles are simply preprocessed to trim the trailing domain-name part. Queries and titles are tokenized, and the most frequent tokens are kept to form the vocabulary; all other words are mapped to the same UNK token. Our vocabulary size is 42,000.
Table 2 shows some basic statistics of the data: on average there are 3.5 tokens per query, 6.5 tokens per title, and 4.5 tokens per keyword. The prefix tree is built for 295 million advanced match type keywords.
6.2. Implementation Details
We use Adam (Kingma and Ba, 2015) with Xavier weight initialization (Glorot and Bengio, 2010) as the optimizer for stochastic gradient training. The mini-batch size is 128, and the dimension of the hidden state vectors is 512.
The offline model is implemented with a four-layer LSTM encoder and a four-layer LSTM decoder with attention, and the online model with a one-layer GRU encoder and a one-layer GRU decoder with attention. Self-normalization and Trie-based dynamic pruning are applied in both the online and offline settings. We use PaddlePaddle (http://www.paddlepaddle.org/) as the DNN training and inference tool. The Trie-based pruning strategy is implemented in C++.
6.3. Decoding Efficiency
We describe our experimental settings as follows. The baseline is a typical sequence-to-sequence GRU RNN with a one-layer encoder and a one-layer decoder, decoded with standard beam search. 'SN' means training with self-normalization, 'TP' means decoding with Trie-based pruning, and 'DropOTF' refers to the strategy of dropping inferior hypotheses on the fly.
All experiments are conducted on our EGRM server. 10,000 queries are randomly sampled from Baidu’s commercial engine log and used as input to the EGRM system.
Table 3 shows the decoding time with different strategies and beam sizes. As seen from the table, the 'SN + TP' strategy greatly reduces the decoding time, reaching a speedup of nearly 10 times. Combined with 'DropOTF', the decoding time is further cut by nearly one half.
6.4. Validity of Generated Hypotheses without Trie Pruning
The following experiment shows the necessity of Trie-based pruning when decoding into a closed set. As mentioned in Section 5, Trie-based pruning guarantees that all generated sentences are valid keywords. Figure 7 shows that when Trie-based pruning is removed, only a small fraction of the generated sentences are actual keywords, and as the beam size increases, the number of valid keywords grows quite slowly. Specifically, even with the beam size set as large as 300, only 8% of the results are valid keywords.
6.5. Relevance Assessment
Query-keyword pairs are sampled from the online A/B test experiment, with 800 pairs sampled on each side. These pairs are sent to professional human judges for three-grade labeling: good, fair, and bad. Table 4 shows the A/B judgments for the generated keywords and the baseline results. For commercial privacy reasons, we show only relative improvements over the current system. As shown in Table 4, the bad-case proportion has dropped by 20.7% compared with the existing system's, and the good-case proportion has increased by 6.6%. This demonstrates that, despite the great speedup, our EGRM system can still generate high-quality keywords.
6.6. Online Evaluation
We also conduct online experiment for our EGRM system with real traffic. We use two metrics to evaluate the performance of our retrieval framework.
CTR denotes the average click ratio received by the search engine, which can be formalized as $\mathrm{CTR} = \#\{\text{ad clicks}\} / \#\{\text{searches}\}$ (one search means one submission of a query).
CPM denotes the revenue received by the search engine per 1000 searches, which can be formalized as $\mathrm{CPM} = 1000 \times \text{revenue} / \#\{\text{searches}\}$.
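The two metrics can be computed in a worked example with made-up numbers, assuming the standard definitions of CTR and CPM:

```python
# Worked example of the two online metrics with made-up traffic numbers:
# CTR = ad clicks per search; CPM = revenue per 1000 searches.

searches = 50_000
ad_clicks = 1_500
revenue = 900.0          # in some currency unit

ctr = ad_clicks / searches
cpm = revenue / searches * 1000

print(f"CTR = {ctr:.3f}, CPM = {cpm:.2f}")
```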
As shown in Table 5, the EGRM system has contributed a CPM growth of 13.8% and a CTR growth of 15.4%. These metrics demonstrate that our EGRM branch has a better semantic understanding and does create a significant amount of new links for the underlying relevant query-keyword pairs.
7. Conclusion
In this paper, we have proposed a novel generative retrieval method for sponsored search. The method is fully end-to-end, using neither query rewriting nor a relevance model. To limit the decoded sentences to a closed domain, a Trie-based pruning mechanism has been introduced. To meet the real-time demands of the sponsored ad system, we have combined a self-normalization training technique with dynamic Trie-based pruning; experiments demonstrate that our model can reduce the generation time to one twentieth without degrading relevance quality. In addition, training data has been carefully selected to encourage the NMT to generate unretrieved keywords. Finally, taking advantage of the power-law distribution of queries, a mixed online-offline architecture has been constructed to save CPU resources.
8. Future Works
We believe that decoding into a constrained domain is not a problem specific to keyword retrieval. For example, task-oriented dialogue systems (Gao et al., 2018) might be required to generate or retrieve answers within a closed set, e.g., music names or lyrics. For the purpose of safe search, we might also want to limit the generation of certain phrases, and the prefix tree trick can help filter them on the fly.
To further improve decoding efficiency, we could build several prefix trees. When a query is submitted, a trade classifier would predict its trade, a keyword prefix tree of the same trade would be chosen, and decoding would be restricted to that prefix tree. Since trades provide a natural boundary between queries and keywords, queries and keywords should not be linked across trades.
References
- Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations (ICLR) (2015).
- Bengio et al. (2003) Yoshua Bengio, Jean-Sébastien Senécal, et al. 2003. Quick Training of Probabilistic Neural Nets by Importance Sampling.. In AISTATS. 1–9.
- Cho et al. (2014) Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014).
- Devlin et al. (2014) Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M. Schwartz, and John Makhoul. 2014. Fast and Robust Neural Network Joint Models for Statistical Machine Translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014).
- Gao et al. (2018) Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational ai. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval. ACM, 1371–1374.
- Gao et al. (2010) Jianfeng Gao, Xiaodong He, and Jian-Yun Nie. 2010. Clickthrough-Based Translation Models for Web Search: from Word Models to Phrase Models.
- Gao et al. (2012) Jianfeng Gao, Xiaodong He, Shasha Xie, and Alnur Ali. 2012. Learning Lexicon Models from Search Logs for Query Expansion. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL '12). Association for Computational Linguistics, Stroudsburg, PA, USA, 666–676.
- Glorot and Bengio (2010) Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. 249–256.
- Grave et al. (2017) Edouard Grave, Armand Joulin, Moustapha Cissé, Hervé Jégou, et al. 2017. Efficient softmax approximation for GPUs. In Proceedings of the 34th International Conference on Machine Learning, Volume 70. JMLR.org, 1302–1310.
- He et al. (2016) Yunlong He, Jiliang Tang, Hua Ouyang, Changsung Kang, Dawei Yin, and Yi Chang. 2016. Learning to Rewrite Queries. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management (CIKM ’16). ACM, New York, NY, USA, 1443–1452.
- Hillard et al. (2010) Dustin Hillard, Stefan Schroedl, Eren Manavoglu, Hema Raghavan, and Chirs Leggetter. 2010. Improving Ad Relevance in Sponsored Search. In Proceedings of the Third ACM International Conference on Web Search and Data Mining (WSDM ’10). ACM, New York, NY, USA, 361–370.
- Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9, 8 (1997), 1735–1780.
- Hu et al. (2015) Xiaoguang Hu, Wei Li, Xiang Lan, Hua Wu, and Haifeng Wang. 2015. Improved beam search with constrained softmax for NMT. Proceedings of MT Summit XV (2015), 297.
- Jones et al. (2006) Rosie Jones, Benjamin Rey, Omid Madani, and Wiley Greiner. 2006. Generating Query Substitutions. In Proceedings of the 15th International Conference on World Wide Web (WWW ’06). ACM, New York, NY, USA, 387–396.
- Kingma and Ba (2015) Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR) (2015).
- Koehn et al. (2003) Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1. Association for Computational Linguistics, 48–54.
- Lee et al. (2018) Mu-Chu Lee, Bin Gao, and Ruofei Zhang. 2018. Rare Query Expansion Through Generative Adversarial Networks in Search Advertising. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ’18). ACM, New York, NY, USA, 500–508.
- Malekian et al. (2008) Azarakhsh Malekian, Chi-Chao Chang, Ravi Kumar, and Grant Wang. 2008. Optimizing Query Rewrites for Keyword-based Advertising. In Proceedings of the 9th ACM Conference on Electronic Commerce (EC ’08). ACM, New York, NY, USA, 10–19.
- Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. 3111–3119.
- Mnih and Teh (2012) Andriy Mnih and Yee Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. arXiv preprint arXiv:1206.6426 (2012).
- Morin and Bengio (2005) Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model.. In Aistats, Vol. 5. Citeseer, 246–252.
- Petersen et al. (2016) Casper Petersen, Jakob Grue Simonsen, and Christina Lioma. 2016. Power Law Distributions in Information Retrieval. ACM Trans. Inf. Syst. 34, 2, Article 8 (Feb. 2016), 37 pages.
- Riezler and Liu (2010) Stefan Riezler and Yi Liu. 2010. Query Rewriting Using Monolingual Statistical Machine Translation. Comput. Linguist. 36, 3 (Sept. 2010), 569–582.
- Ruder (2016) Sebastian Ruder. 2016. On word embeddings - Part 2: Approximating the Softmax. http://ruder.io/word-embeddings-softmax.
- Spink et al. (2001) Amanda Spink, Dietmar Wolfram, Major B. J. Jansen, and Tefko Saracevic. 2001. Searching the web: The public and their queries. Journal of the American Society for Information Science and Technology 52, 3 (2001), 226–234.
- Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (Eds.). Curran Associates, Inc., 3104–3112.
- Wang et al. (2017) Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, and Min Zhang. 2017. Neural Machine Translation Advised by Statistical Machine Translation.. In AAAI. 3330–3336.
- Yin et al. (2016) Dawei Yin, Yuening Hu, Jiliang Tang, Tim Daly, Mianwei Zhou, Hua Ouyang, Jianhui Chen, Changsung Kang, Hongbo Deng, Chikashi Nobata, Jean-Marc Langlois, and Yi Chang. 2016. Ranking Relevance in Yahoo Search. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16). 323–332.