Corpus-Level End-to-End Exploration for Interactive Systems

11/23/2019 · Zhiwen Tang, et al. · Georgetown University

A core interest in building Artificial Intelligence (AI) agents is to let them interact with and assist humans. One example is Dynamic Search (DS), which models the process in which a human works with a search engine agent to accomplish a complex, goal-oriented task. Early DS agents using Reinforcement Learning (RL) have achieved only limited success because of (1) their lack of direct control over which documents to return and (2) the difficulty of recovering from wrong search trajectories. In this paper, we present a novel corpus-level end-to-end exploration (CE3) method to address these issues. In our method, an entire text corpus is compressed into a global low-dimensional representation, which enables the agent to gain access to the full state and action spaces, including the under-explored areas. We also propose a new form of retrieval function, whose linear approximation allows end-to-end manipulation of documents. Experiments on the Text REtrieval Conference (TREC) Dynamic Domain (DD) Track show that CE3 outperforms the state-of-the-art DS systems.


1 Introduction

Retrieval-based interactive systems, including multi-turn Question Answering (QA) [25], dialogue systems [31], and dynamic search systems [39], study the interaction between a human user and an intelligent agent when they work together to accomplish a goal-oriented task. Reinforcement Learning (RL) is a natural solution for these interactive systems [39, 16, 11] for its emphasis on adaptation and exploration. Prior work on this topic has investigated bandits-based [16], value-based [19], and policy-based [11] RL methods. In these approaches, a repository of documents (or knowledge), together with inputs from a human user, is typically treated as the learning environment for the AI agent, and the agent's actions usually take two steps: first, reformulating queries (or questions) based on user responses; second, retrieving relevant information to fulfill those queries via some off-the-shelf retrieval tool. Such a pipeline is a convenient use of existing Information Retrieval (IR) techniques; however, it comes with a few drawbacks.

First, most existing retrieval functions are optimized for precision at top ranks, a consequence of the limited cognitive load a human user can afford when examining results. This bias is ingrained in all ready-to-use retrieval tools. The consequence is that results that are good but not top-ranked have little chance to show up. This may be acceptable when there is only a single run of retrieval, such as in single-turn QA or ad-hoc document retrieval. In multi-turn interactions, however, early triage of lower-ranked documents leads to long-term loss that cannot be easily recovered. Classic works on exploratory search [37] and information seeking [21] named this phenomenon "berry picking": a tortuous search trajectory where only very limited useful information can be obtained at each single step because the search space is so restricted by the top results. This makes the RL agent's learning very challenging because the agent is not able to "explicitly consider the whole problem of a goal-oriented" process [34].

Second, widely used retrieval functions, including TF-IDF [29] and BM25 [27], are non-differentiable. Common functions for reformulating queries [12] are non-differentiable, too. These non-differentiable functions prevent gradients from flowing through in a gradient-based RL method; thus a user's feedback has no real control over which documents to return. Consequently, the retrieval results can look random. Nonetheless, these functions remain quite popular due to their ready availability.

In this paper, using dynamic search as an illustrating example, we propose a different solution for retrieval-based interactive agents. In our corpus-level end-to-end exploration algorithm (CE3), the entire text corpus is compressed into a single global representation, which at each time step supports the agent's full exploration of the state and action spaces. In addition, we propose a novel differentiable retrieval function that allows an RL agent to directly manipulate documents. Experiments on the Text REtrieval Conference (TREC) Dynamic Domain 2017 Track demonstrate that our method significantly outperforms previous DS approaches. They also show that our method is able to quickly adjust the search trajectories and recover from losses in early interactions. Given the fundamental issues it addresses, we believe CE3's success can be extended to other interactive AI systems that access information using retrieval functions.

2 Related Work

The work closest to ours is perhaps KB-InfoBot [8]. It is a dialogue system that finds movies in a large movie knowledge base (KB). Similar to us, KB-InfoBot used a global representation over all movies in its database to represent the states. To do so, it estimated a global distribution over the entire set of movie entities, conditioned on user utterances. The distribution was fed into a deep neural network to learn the agent's action. Also similar to us, KB-InfoBot used a differentiable lookup function to support end-to-end manipulation of data entities. The two works differ in that their dialogue agent ran on a structured database and completed a task by iteratively filling missing slots, while ours operates on unstructured free text and accomplishes a task by iteratively retrieving documents that are relevant to the search task.

Maintaining a global model that oversees the entire text collection has been shown to be beneficial to retrieval in conventional IR research. For instance, Liu and Croft proposed to build corpus-level clusters with K-means and then used them to smooth multinomial language models. Wei and Croft used Latent Dirichlet Allocation (LDA) to obtain global topic hierarchies to improve retrieval performance. In this paper, we encode the corpus and the user's search history into a global state representation.

Another related area to our work is dimension reduction. Many breakthroughs in neural models for Natural Language Processing (NLP) are built upon word2vec [22] and its derivative doc2vec [15]. Doc2vec transforms a high-dimensional, discrete text representation into low-dimensional continuous vectors. Unfortunately, doc2vec does not address a problem known as crowding [5]: the situation where multiple high-dimensional data points collapse into one after dimension reduction, so that data points belonging to different classes become inseparable. In our case, each data point represents either a relevant or an irrelevant document. We choose to use the t-Distributed Stochastic Neighbor Embedding (t-SNE) method [20], which has been used to support data visualization for high-dimensional images [20], network parameters [23], and word vectors [17]. By assuming a t-distribution for the post-reduction data, t-SNE provides more space to scatter data points that would otherwise be collapsed, and the dimensionality can be reduced from thousands to as low as 2 or 3. Our experiments (Section 5) show that t-SNE outperforms doc2vec for our task.

3 Dynamic Search Background

Dynamic search (DS) systems are multi-turn interactive agents that assist human users in goal-oriented information seeking [39]. DS shares similar traits with its sister AI applications such as task-oriented dialogue systems and multi-turn QA systems. These traits include (1) a goal-oriented task and (2) interaction with a human user. They exhibit different forms of interaction, though. In DS, the form of interaction is querying and retrieving documents. In dialogue systems, it is generating natural language responses. In multi-turn QA, it is questioning and finding answers.

DS is backed by a long line of prior research in information science, library science, and information retrieval (IR). It originated from Information Seeking (IS) [3]. Most IS research has focused on studying user behaviors [7]. Luo et al. [19] simplified IS into DS by separating the modeling of the system side from that of the user side and emphasizing the former. In DS, a human user either becomes part of the environment [18] or another agent who also interacts with the environment.

RL offers natural solutions to DS. The RL agent, a.k.a. the search engine, observes state $s_t$ at time $t$ from the environment (the user and the document collection) and takes action $a_t$ to retrieve documents to show to the user. An immediate reward $r_t$ is encapsulated in the user's feedback, which expresses how much the retrieved documents satisfy the user's information need. The retrieved documents may also change the state, triggering a transition from the old state to a new one: $s_t \rightarrow s_{t+1}$. This process may continue for many iterations until the search task is accomplished or the user decides to abandon it. Model-based RL approaches, such as Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs), and policy-based RL approaches have been explored in the past [39].

The Dynamic Domain (DD) Tracks held at the Text REtrieval Conference (TREC) from 2015 to 2017 [40] are campaigns that evaluate DS systems. In the DD Tracks, human users are replaced with a simulator that provides feedback to the DS agents. The simulator's feedback includes ratings of documents returned by the agent and which passages in those documents are relevant. The simulator was created from ground-truth assessments made by third-party human annotators.

Dozens of teams participated in the DD Tracks. Their methods ranged from results diversification [24], relevance feedback [4], and imitation learning [38], to the focus of this paper, reinforcement learning [35, 1].

4 The Approach

Most RL-based DS approaches can be summarized by a general formulation. It takes two steps. First, a new, temporary query $q_t$ is generated by the RL agent's policy $\pi_\theta$:

$q_t = f_q(a_t, h), \quad a_t = \pi_\theta(s_t)$    (1)

where $a_t$ is the action generated by policy $\pi$ for state $s_t$ at time $t$ and $\pi$ is parameterized by $\theta$. We call $f_q$ the query reformulation function, which constructs the new query based on $a_t$ and heuristics $h$. Note that $f_q$ does not depend on $\theta$; $f_q$ is a non-differentiable function.

Second, the newly formulated query $q_t$ is sent to

$f_r(q_t, d; \phi)$    (2)

to obtain documents that are, hopefully, relevant to $q_t$. We call $f_r$ the retrieval function. It is usually an existing retrieval method, such as BM25 or Language Modelling, which is non-differentiable. $f_r$ returns a score that quantifies the relevance between $q_t$ and a document $d$. $f_r$ is parameterized by $\phi$, which is a variable of the retrieval method and is independent of $\theta$.

Overall, at each search iteration $t$, the RL agent assigns a ranking score to the document $d$:

$\mathrm{score}_t(d) = f_r\big(f_q(\pi_\theta(s_t), h), d; \phi\big)$    (3)

and then all documents are ranked and retrieved based on this scoring.

The first issue with this formulation is that the retrieval function $f_r$ only finds the top relevant documents for query $q_t$. At any time, the agent is not aware of the global picture of the state space. This differs from how an RL agent is typically treated in AI, where the agent is always aware of the global status of the environment, e.g., AlphaGo knows the entire game board. This inadequate knowledge of the global picture hampers the RL agent's ability to make longer-term plans for better decision-making.

The second issue with this formulation is that neither $f_q$ nor $f_r$ is differentiable. This prevents a gradient-based RL method from correctly updating its gradients. The RL agent is thus unable to effectively adjust the retrieval results based on user feedback. In addition, folding together multiple non-differentiable functions makes it very difficult to diagnose a failed action, which could result from a bad policy $\pi_\theta$, a sloppy $f_q$, or an ineffective $f_r$.
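To make the issue concrete, below is a minimal sketch of the conventional two-step pipeline in Eqs. 1-3, assuming the third-party rank_bm25 package as the off-the-shelf retrieval function $f_r$ and a toy term-appending heuristic as $f_q$; the package, the heuristic, and the toy corpus are illustrative stand-ins rather than any particular system, and the point is only that no gradient with respect to $\theta$ can flow through either step.

# Sketch of the conventional two-step pipeline (Eqs. 1-3), assuming rank_bm25.
# Both the reformulation heuristic and the BM25 scoring call are non-differentiable.
from rank_bm25 import BM25Okapi

corpus = [doc.lower().split() for doc in [
    "the leaning tower of pisa was closed to tourists in 1990",
    "repair plans aim to reduce the tilt of the tower",
    "tourism in pisa dropped after the closing",
]]
bm25 = BM25Okapi(corpus)                      # f_r: off-the-shelf, a black box to the policy

def reformulate(query_terms, feedback_terms):
    # f_q: heuristic query reformulation (e.g., append feedback terms); no theta involved
    return query_terms + feedback_terms

query = reformulate(["pisa", "tower"], ["closing"])
scores = bm25.get_scores(query)               # one score per document (Eq. 3)
ranking = sorted(range(len(corpus)), key=lambda i: -scores[i])
print(ranking)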

In this paper, we propose to convert and compress a collection of documents into a global representation. We also introduce a novel differentiable ranking function that can be easily incorporated into a gradient-based RL method [33] to perform dynamic search.

Algorithm Overview

Figure 1: System architecture.

Our framework is based on a state-of-the-art Monte Carlo policy-gradient method, Proximal Policy Optimization (PPO) [33]. The RL agent consists of two networks, a value network and a policy network. Given the current state $s_t$, the policy network outputs action $a_t$ and the value network outputs the state value $V(s_t)$. Both networks are composed of a few layers of Convolutional Neural Networks (CNNs) and a Multi-Layer Perceptron (MLP). They share the same network structure but use different parameter settings. Figure 1 illustrates the system architecture.

Initialize $\theta$;
for iteration = 1, 2, … do
    for t = 1, 2, …, T do
        read in the global state representation $s_t$;
        sample action $a_t \sim \pi_\theta(\cdot \mid s_t)$;
        for i = 1, 2, …, C do
            estimate a relevance score for document $d_i$: $\mathrm{score}_t(d_i)$ (Eq. 13)
        end for
        rank the documents by $\mathrm{score}_t$ and return the top-ranked documents $D_t$;
        compute reward $r_t$ based on Eq. 14;
        mark returned documents as visited, generate next state $s_{t+1}$;
        compute advantage $\hat{A}_t$;
    end for
    optimize $L(\theta)$ (Eq. 4) w.r.t. $\theta$;
end for
Algorithm 1 CE3.

Algorithm 1 describes the proposed CE3 method. It starts with the RL agent sampling actions. The documents are then ranked by their estimated relevance scores, calculated from the action vectors. The top-ranked ones are shown to the user. The user then examines the documents and submits feedback, which is used to derive the immediate reward $r_t$. As a Monte Carlo algorithm, CE3 generates search trajectories by sampling actions based on the current policy. Given enough trajectory samples, the following objective function is optimized w.r.t. the network parameters $\theta$:

$L_t(\theta) = \hat{\mathbb{E}}_t\left[ L_t^{CLIP}(\theta) - c_1 L_t^{VF}(\theta) + c_2\, S[\pi_\theta](s_t) \right]$    (4)

where $c_1$ is the weight for the value network and $c_2$ is the coefficient for the policy's entropy $S[\pi_\theta]$, which encourages the agent to explore all areas of the action space.

Learning of the value network is done by minimizing $L_t^{VF}(\theta)$, the mean squared error between the estimated state value $V_\theta(s_t)$ and the target state value $V_t^{\mathrm{targ}}$ observed in the sampled trajectories at time $t$:

$L_t^{VF}(\theta) = \big(V_\theta(s_t) - V_t^{\mathrm{targ}}\big)^2$    (5)

Learning of the policy network is done by maximizing $L_t^{CLIP}(\theta)$, a pessimistic bound on the effectiveness of the policy network at time $t$:

$L_t^{CLIP}(\theta) = \hat{\mathbb{E}}_t\left[\min\big(\rho_t(\theta)\hat{A}_t,\ \mathrm{clip}(\rho_t(\theta),\, 1-\epsilon,\, 1+\epsilon)\,\hat{A}_t\big)\right]$    (6)

where $\hat{A}_t$ is the advantage function [34].

Certain actions may increase the return in extreme situations but may not work in general. To avoid such situations, the algorithm adopts the surrogate clipping above and discards the extra incentive of actions whose rates-to-change fall outside $[1-\epsilon, 1+\epsilon]$:

$\rho_t(\theta) = \dfrac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}$    (7)

where $\rho_t(\theta)$ is the change rate of actions.

The algorithm employs stochastic gradient ascent (SGA) to optimize both the policy network and the value network. The process continues until no better policy is found.
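For readers who prefer code, the following is a minimal sketch of the objective in Eqs. 4-7 written with PyTorch tensors; the tensor names, the clipping range $\epsilon$, and the default values of $c_1$ and $c_2$ are illustrative assumptions following the PPO formulation of [33], not the exact training code of CE3.

import torch

def ppo_objective(new_logp, old_logp, advantage, value, value_target,
                  entropy, eps=0.2, c1=0.5, c2=0.01):
    # rho_t(theta): change rate between the new and old policies (Eq. 7)
    ratio = torch.exp(new_logp - old_logp)
    # clipped pessimistic surrogate for the policy network (Eq. 6)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    l_clip = torch.min(unclipped, clipped).mean()
    # mean squared error for the value network (Eq. 5)
    l_vf = ((value - value_target) ** 2).mean()
    # combined objective (Eq. 4), to be maximized by stochastic gradient ascent
    return l_clip - c1 * l_vf + c2 * entropy.mean()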

This PPO-based method alone can be used in other applications. However, for the reasons motivating this paper, we think it should be used in combination with what we will present next – corpus-level document representation and differentiable ranking function.

Build a Global Representation

In this paper, we propose to compress an entire text corpus into a global low-dimensional representation and keep it at all times. Our goal is to enable a DS agent to always have access to the full state space. We believe this is essential for a DS agent because not being able to reach documents in under-explored areas means not being able to recover from early bad decisions.

We summarize the procedure of creating the global representation in three steps. First, each document is split into topic-coherent segments. The latest advances in Neural Information Retrieval (NeuIR) have demonstrated the effectiveness of using topical structures for NeuIR [36, 9]. In this work, we follow [36] in segmenting and standardizing documents. Each document is split into a fixed number of segments $L$ ($L$ is empirically set to 20). Within each segment, the content is expected to be topic-coherent since the segmentation is done based on TileBars. TileBars [10] is a classic text visualization technique and has proven very effective in helping identify relevant documents by visualizing term matches.

Second, Bag-of-Words (BoW) is used as the feature vector for a segment, with a size equal to the vocabulary size $V$. This dimension is usually quite high and can easily reach millions in natural language tasks. Therefore, we compress each segment into a much more manageable lower dimension $E$ ($E \ll V$). One challenge is that, after the compression, the relevant and irrelevant documents can become crowded together and difficult to separate. To address this issue, we employ t-SNE [20] for dimension reduction; the implementation is based on the Barnes-Hut approximation [2]. Assume the high-dimensional inputs $x_1, \ldots, x_n \in \mathbb{R}^V$ follow a Gaussian distribution. The probability that two random data points $x_i$ and $x_j$ are neighbors of each other is

$p_{ij} = \dfrac{\exp(-\lVert x_i - x_j \rVert^2 / 2\sigma^2)}{\sum_{k \neq l} \exp(-\lVert x_k - x_l \rVert^2 / 2\sigma^2)}$    (8)

The algorithm then maps these data points in the high-dimensional space $\mathbb{R}^V$ to points in a much lower-dimensional space $\mathbb{R}^E$. Suppose $x_i$ and $x_j$ project into the lower dimension as $y_i$ and $y_j$. The probability that $y_i$ and $y_j$ are still neighbors of each other is

$q_{ij} = \dfrac{(1 + \lVert y_i - y_j \rVert^2)^{-1}}{\sum_{k \neq l} (1 + \lVert y_k - y_l \rVert^2)^{-1}}$    (9)

To establish the mapping between $\{x_i\}$ and $\{y_i\}$, the KL divergence between the two distributions

$KL(P \,\|\, Q) = \sum_{i \neq j} p_{ij} \log \dfrac{p_{ij}}{q_{ij}}$    (10)

is minimized. The solution for the new projection is obtained step by step via gradient descent.
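As an illustration of this compression step, the sketch below uses scikit-learn's t-SNE (whose Barnes-Hut mode supports up to 3 output dimensions) on toy segments; the vectorizer, the toy text, and all parameters other than the output dimension $E = 3$ are illustrative assumptions.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.manifold import TSNE

segments = ["tower closed to tourists", "repair plans for the tilt",
            "tourism impact of the closing", "history of the cathedral square"]

bow = CountVectorizer().fit_transform(segments).toarray()   # |V|-dimensional BoW vectors
tsne = TSNE(n_components=3, method="barnes_hut", init="random", perplexity=2)
compressed = tsne.fit_transform(bow.astype(np.float64))     # one E=3 vector per segment
print(compressed.shape)                                     # (4, 3)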

Third, segments from all documents are stacked together to form a global representation. The global representation is denoted by $G$ and its dimensions are $N \times L \times E$, where $N$ is the number of documents, $L$ is the number of segments per document, and $E$ is the reduced feature dimension. In our work, $E$ is empirically set to 3. In this global representation $G$, each row represents a document and each column represents a segment at a certain position in the documents. Each row unfolds the segments horizontally, with their original order in the document preserved. For generality, we make no assumption about the stacking order of documents. The RL agent is expected to complete the search task even when dealing with randomly ordered documents. Figure 2 illustrates the global representation of a toy corpus.

Figure 2: Global representation of a toy corpus (of 5 documents): Documents are segmented and standardized following [36]. Similar colors suggest similar contents. Document 2 is darkened after being visited. Document 4 is currently selected by the RL agent and highlighted with white.

This global representation constructs the states. Our state at time $t$, $s_t$, has two parts: $G$ and the retrieval history of documents from time $1$ to $t-1$:

$s_t = \big(G, \{D_1, \ldots, D_{t-1}\}\big)$    (11)

where $D_i$ is the set of documents retrieved at time $i$.

In Algorithm 1, already-retrieved documents are marked as visited. In the global representation, this is done by assigning a reserved value to those documents' feature vectors. When a document $d$ is visited, the feature vectors of all its segments, i.e., $x_{d,1}, \ldots, x_{d,L}$, are changed to the reserved value. This change explicitly encodes past search history and exploration status at the corpus level.
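A small sketch of this bookkeeping, assuming the global representation is stored as an array of shape (N, L, E); the choice of -1.0 as the reserved value is illustrative.

import numpy as np

N, L, E = 5, 20, 3            # documents, segments per document, reduced dimension
VISITED = -1.0                # reserved value marking already-retrieved documents

G = np.random.rand(N, L, E)   # stacked segment vectors (one row of segments per document)

def mark_visited(G, doc_ids):
    # overwrite every segment vector of the returned documents with the reserved value
    G[doc_ids, :, :] = VISITED
    return G

G = mark_visited(G, doc_ids=[1])   # e.g., document 2 (index 1) was just returned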

Retrieve using a Differentiable Ranking Function

It is crucial for an RL agent to employ a differentiable ranking function as its action so that it can perform end-to-end retrieval. Unfortunately, most existing DS approaches still use ranking functions that are non-differentiable.

The existing approaches' formulation is shown in Eq. 3, which is clearly a non-differentiable function. This prevents the RL agent from directly manipulating documents based on user feedback. We propose to omit query reformulation completely, including its heuristics $h$. Since our RL agent does not use any conventional retrieval model, the retrieval parameter $\phi$ is gone, too. The ranking function then becomes:

$\mathrm{score}_t(d) = f_r\big(\pi_\theta(s_t), d\big)$    (12)

We then focus on making $f_r$ differentiable. This is achieved by using a linear formulation for $f_r$. In our formulation, $f_r$ approximates a document $d$'s relevance as a linear function over the segments belonging to $d$:

$\mathrm{score}_t(d) = f_r(a_t, d) = \sum_{j=1}^{L} a_t \cdot x_{d,j}$    (13)

where $a_t$ is the action generated by policy $\pi_\theta$ and $x_{d,j}$ is the feature vector of the $j$-th segment of $d$ after compression. The action vector $a_t \in \mathbb{R}^E$ can be sampled by the RL agent at each time step. When it is updated, the new action allows the agent to retrieve and explore a different set of documents.
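The following sketch illustrates why Eq. 13 supports end-to-end manipulation: with the action as an E-dimensional vector and the compressed segments in a tensor of shape (N, L, E), document scores are sums of dot products over segments, and the gradient of the scores reaches the action vector, and hence the policy network. Shapes and data are illustrative.

import torch

N, L, E = 5, 20, 3
X = torch.rand(N, L, E)                        # compressed segment vectors x_{d,j}
a = torch.rand(E, requires_grad=True)          # action vector sampled from the policy

scores = torch.einsum("nle,e->n", X, a)        # Eq. 13: score(d) = sum_j a . x_{d,j}
top_docs = torch.topk(scores, k=2).indices     # documents shown to the user

scores.sum().backward()                        # gradients flow back to the action vector
print(a.grad.shape)                            # torch.Size([3])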

Get Reward based on User Feedback

In TREC DD [40], the relevance ratings are provided by the simulated user. We define the immediate reward as the accumulated relevance ratings (without discounting) of the retrieved documents, with duplicated results excluded. The immediate reward is:

$r_t = \sum_{d \in D_t \setminus (D_1 \cup \cdots \cup D_{t-1})} \mathrm{rating}(d)$    (14)

where $\mathrm{rating}(d)$ is the rating given by the simulator. The rating is a positive number for a relevant document and a non-positive number for an irrelevant one. The larger the rating, the better the retrieved document.
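A minimal sketch of Eq. 14, with an illustrative rating lookup; documents already seen in earlier iterations contribute nothing to the reward.

def immediate_reward(returned_docs, ratings, seen):
    # Eq. 14: sum of relevance ratings over newly retrieved documents only
    reward = 0
    for doc_id in returned_docs:
        if doc_id in seen:                 # duplicated results are excluded
            continue
        seen.add(doc_id)
        reward += ratings.get(doc_id, 0)   # 0 for irrelevant / unjudged documents
    return reward

seen = set()
ratings = {"0298897": 2, "0984009": 4}
print(immediate_reward(["0290537", "0298897"], ratings, seen))   # 2
print(immediate_reward(["0298897", "0984009"], ratings, seen))   # 4 (duplicate skipped)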

5 Experiments

Experimental Settings

The Text REtrieval Conference (TREC) Dynamic Domain (DD) Tracks 2015-2017 [40] provide a standard testbed for DS. A simulated user (https://github.com/trec-dd/trec-dd-jig) issues a starting query and then provides feedback for all subsequent runs of retrieval. The feedback includes graded document-level and passage-level relevance judgments on a scale of -1 to 4.

We experiment on the TREC DD 2017 Track for the completeness of its judgments. TREC DD 2017 used the LDC New York Times collection [30] as its corpus. The collection includes more than 1.8 million news articles archived over the past 20 years. The Track released 60 search tasks created by human assessors. Each task consists of multiple hierarchically organized subtopics. The subtopics were not made available to the participating DS systems. Instead of post-submission pooling, the Track spent a great deal of effort obtaining a complete set of relevant passages before the evaluation started. These answers were used by the simulator to generate feedback. In total, 194 subtopics and 3,816 relevant documents were curated.

Table 1 shows an example DD search topic, DD17-10. In this example, the search task is to find relevant information on the “closing of Leaning Tower in Pisa”. Table 2 shows an example interaction history.

Metrics

The evaluation in DS focuses on gaining relevant information throughout the whole search process. We adopt multiple metrics to evaluate the approaches from various perspectives. Aspect recall [14] measures subtopic coverage: $\text{aspect recall} = \frac{\#\text{subtopics covered by the retrieved documents}}{\#\text{subtopics in the task}}$. Precision and Recall measure the ratios of correctly retrieved documents over the retrieved set and over the entire relevant set, respectively: $\text{Precision} = \frac{|\text{retrieved} \,\cap\, \text{relevant}|}{|\text{retrieved}|}$ and $\text{Recall} = \frac{|\text{retrieved} \,\cap\, \text{relevant}|}{|\text{relevant}|}$. Normalized Session Discounted Cumulative Gain (nsDCG) evaluates the graded relevance of a ranked document list across search iterations, putting heavier weights on early-retrieved documents and normalizing by the ideal session ranking [13].
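The set-based metrics above translate directly into code; the sketch below computes precision, recall, and aspect recall on illustrative data structures, while nsDCG, which additionally needs the session discounting of [13], is omitted here.

def precision_recall(retrieved, relevant):
    hit = len(set(retrieved) & set(relevant))
    return hit / len(retrieved), hit / len(relevant)

def aspect_recall(retrieved, doc_subtopics, all_subtopics):
    # fraction of subtopics covered by at least one retrieved document
    covered = set()
    for doc in retrieved:
        covered |= doc_subtopics.get(doc, set())
    return len(covered & all_subtopics) / len(all_subtopics)

retrieved = ["d1", "d2", "d3"]
relevant = {"d1", "d3", "d4"}
doc_subtopics = {"d1": {"318"}, "d3": {"320", "321"}}
print(precision_recall(retrieved, relevant))                                  # (0.67, 0.67)
print(aspect_recall(retrieved, doc_subtopics, {"318", "319", "320", "321"}))  # 0.75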

Topic (DD17-10) Leaning Towers of Pisa Repairs
Subtopic 1 (id: 321) Tourism impact of repairs/closing
Subtopic 2 (id: 319) Repairs and plans
Subtopic 3 (id: 320) Goals for future of the tower
Subtopic 4 (id: 318) Closing of tower
Table 1: Example Search Topic.
Search DD17-10
User: Leaning Towers of Pisa Repairs
System: Return document 0290537
User: Non-relevant document.
System: Return document 0298897
User: Relevant on subtopic 320 with a rating of 2, “No one doubts that it will collapse one day unless preventive measures are taken.”
System: Return document 0984009
User: Relevant on subtopic 318 with a rating of 4, “The 12th-century tower was closed to tourists in 1990 for fear it might topple.”
Table 2: Example Interaction History.

Systems

We compare CE3 to the most recent DS systems, all drawn from the TREC DD 2017 submissions. We pick the top submitted run from each team to best represent its approach. The runs are:

Galago [6]: This approach does not use any user feedback. Documents are repeatedly retrieved with the same query at each iteration by Galago. Documents that appeared in previous iterations are removed from the current iteration.

Deep Q-Network (DQN) [35]: A DQN-based algorithm that selects query reformulation actions, such as adding and removing terms, and uses Galago to retrieve the documents.

Relevance Feedback (RF) [28]: The query is used to first retrieve an initial set of documents using Indri (https://www.lemurproject.org/indri/). The documents are then re-ranked by their similarity to the user feedback from all previous iterations. It is a variant of the relevance feedback (RF) model [26].

Results Diversification (DIV) [41]: This approach expands queries based on previous user feedback. The documents retrieved with Solr (http://lucene.apache.org/solr/) are then re-ranked with the xQuAD result diversification algorithm [32].

CE3: The proposed method in this paper. For comparison, we also implement a variant, CE3 (doc2vec), which uses doc2vec [15] to compress the feature vector for each segment. The embeddings are trained on more than 1.8 million documents. Other settings are identical between CE3 and CE3 (doc2vec).

Parameters

We construct a collection for each search topic by mixing relevant and irrelevant documents at a ratio of 1:1 to simulate a common re-ranking scenario. The corpus size ranges from tens to thousands of documents. Among all the parameter combinations we tried, the following configuration yields the best performance. The dimension of t-SNE's output $E$ is set to 3. The number of segments per document $L$ is set to 20. Coefficients $c_1$ and $c_2$ in Eq. 4 are tuned empirically. Both the policy and value networks have 2 layers of CNNs and 1 MLP. The first CNN layer consists of 8 kernels and the second of 16. The hidden layer of the MLP consists of 32 units and is the same for both networks. The output layer of the MLP has 3 units for the policy network and 1 for the value network.
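To make the configuration concrete, the sketch below instantiates the described two-CNN-plus-MLP structure in PyTorch; the kernel sizes, padding, pooling of the convolutional output, and the treatment of the (N, L, E) representation as an E-channel image are illustrative assumptions not specified above.

import torch
import torch.nn as nn

class CE3Net(nn.Module):
    # shared structure for the policy (out_dim=3) and value (out_dim=1) networks
    def __init__(self, in_channels=3, out_dim=3):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=3, padding=1),   # 8 kernels (assumed 3x3)
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),            # 16 kernels
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                               # assumed pooling
        )
        self.mlp = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, out_dim))

    def forward(self, state):          # state: (batch, E, N, L)
        h = self.convs(state).flatten(1)
        return self.mlp(h)

policy_net = CE3Net(out_dim=3)   # outputs an E=3 action vector (or its distribution parameters)
value_net = CE3Net(out_dim=1)    # outputs a scalar state value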

Results

Figure 3: Experiment results in the first 10 search iterations.
Time step t=1 t=2 t=3 t= 4 t=5 t=6 t=7 t=8 t=9 t=10
CE3 (doc2vec) 0.0% 11.7% 18.0% 30.0% 35.0% 34.3% 30.0% 25.0% 25.7% 25.0%
CE3 0.0% 3.0% 1.7% 1.0% 3.0% 0.0% 1.0% 0.3% 3.0% 0.0%
Table 3: Percentage of duplicate documents.

From Figure 3, we observe that CE3 outperforms all other systems in recall (Fig. 3c) and aspect recall (Fig. 3d) at all times. This suggests that our RL agent is able to explore more areas of the state and action spaces than the rest. While the other algorithms also manage to achieve a reasonably high aspect recall, they do not perform as well on recall. This shows that, although traditional diversification methods can find a few relevant documents for each aspect, it is hard for them to keep investigating an already-visited aspect, which indicates less effective exploration. In contrast, CE3's ranking function enables end-to-end optimization, which allows the agent to explore in all directions of the state and action spaces. It thus works very well on recall-oriented measures.

CE3 performs impressively on precision (Fig. 3b), too. As the search episode develops, all other approaches show declining precision; CE3, however, stays strong at all iterations. We think this is because the other methods cannot easily recover from early mistakes, while CE3's global representation allows it to explore elsewhere for new opportunities when a bad decision happens.

Moreover, even though it is not specifically designed for rank-sensitive metrics, CE3 also performs very well on nsDCG. The results (Fig. 3a) reveal that at the beginning CE3 does not score as high as the other methods; by the end of the episode, however, CE3 largely outperforms the rest. We believe the initial successes of the other methods stem from their being well tuned for ranking sensitivity, which is what existing retrieval functions address. However, they seem unable to adapt well as the number of interactions increases.

In addition, it comes to our attention that CE3 (doc2vec) falls far behind CE3. The two variants differ only in their choice of dimension-reduction method. In a follow-up investigation, we discover that CE3 retrieves far fewer duplicate documents than CE3 (doc2vec) does. Table 3 reports the percentage of duplicate documents retrieved by the two CE3 variants. We believe the difference is due to how they compress the feature vectors of a segment. Doc2vec makes no assumption about the data distribution after compression; vectors trained by doc2vec are probably crowded together, which yields more duplicated results. On the contrary, t-SNE helps CE3 separate relevant documents from irrelevant ones, which contributes to CE3's success.

Visualize the Exploration

Figure 4: Visualization of Exploration (Topic DD17-3). White bars mark documents selected by the agent. Successful retrieval means more white in the top half.

We are interested in observing the dynamics of a DS process. Figure 4 illustrates the first 8 steps of a search task with 3 subtopics. Based on the ground truth, we arrange the relevant documents at the top and the irrelevant documents at the bottom. Among the relevant documents, those belonging to the same subtopic are grouped together and placed in the order of subtopics 1 to 3. The turquoise dotted lines are added to highlight where each subtopic's documents are. The white color does not indicate relevance but shows visitation: it highlights which documents the agent returns at time $t$. A thicker white bar indicates more selected documents in the same subtopic. In case a selected document is relevant to multiple subtopics, it is highlighted in multiple places. The effectiveness of the DS agent is jointly indicated by the white highlights and their positions: successful retrieval means more white in the top half of the figure.

We observe that in the early iterations the DS agent explores subtopics 1 and 2, and later shifts to subtopics 1 and 3. At one intermediate iteration, the agent seems to enter a wrong path, since the visualization shows that its current selection falls in the lower, irrelevant portion. However, the agent quickly corrects its actions and improves in the subsequent iterations. This confirms that CE3 is able to recover from bad actions.

6 Conclusion

Using Dynamic Search (DS) as an illustrating example, this paper presents a new deep reinforcement learning framework for retrieval-based interactive AI systems. To allow an agent to explore a space fully and freely, we propose to maintain a global representation of the entire corpus at all times. We achieve corpus-level compression via t-SNE dimension reduction. We also propose a novel differentiable ranking function to ensure that user feedback can truly control which documents to return. The experimental results demonstrate that our method's performance is superior to state-of-the-art DS systems. Given the fundamental issues addressed in this paper, we believe CE3's success can be extended to other interactive AI systems.

Acknowledgements

This research was supported by U.S. National Science Foundation grant IIS-145374. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect those of the sponsor.

References

  • [1] W. Aissa, L. Soulier, and L. Denoyer (2018) A reinforcement learning-driven translation model for search-oriented conversational systems. In The 2nd International Workshop on Search-Oriented Conversational AI, SCAI@EMNLP ’18, Cited by: §3.
  • [2] J. Barnes and P. Hut (1986) A hierarchical o (n log n) force-calculation algorithm. Nature 324 (6096), pp. 446. Cited by: §4.
  • [3] N. J. Belkin et al. (1993) Interaction with texts: information retrieval as information seeking behavior. Information Retrieval 93, pp. 55–66. Cited by: §3.
  • [4] E. D. Buccio and M. Melucci (2016) Evaluation of a feedback algorithm inspired by quantum detection for dynamic search tasks. In TREC ’16, Cited by: §3.
  • [5] J. Cook, I. Sutskever, A. Mnih, and G. E. Hinton (2007) Visualizing similarity data with a mixture of maps. In AISTATS ’07, Cited by: §2.
  • [6] W. B. Croft, D. Metzler, and T. Strohman (2009) Search engines - information retrieval in practice. Pearson Education. External Links: ISBN 978-0-13-136489-9 Cited by: §5.
  • [7] S. Daronnat, L. Azzopardi, M. Halvey, and M. Dubiel (2019) Human-agent collaborations: trust in negotiating control. In CHI ’19, Cited by: §3.
  • [8] B. Dhingra, L. Li, X. Li, J. Gao, Y. Chen, F. Ahmed, and L. Deng (2017) Towards end-to-end reinforcement learning of dialogue agents for information access. In ACL ’17, Cited by: §2.
  • [9] Y. Fan, J. Guo, Y. Lan, J. Xu, C. Zhai, and X. Cheng (2018) Modeling diverse relevance patterns in ad-hoc retrieval. In SIGIR ’18, Cited by: §4.
  • [10] M. A. Hearst (1995) TileBars: visualization of term distribution information in full text information access. In CHI ’95, Cited by: §4.
  • [11] Y. Hu, Q. Da, A. Zeng, Y. Yu, and Y. Xu (2018) Reinforcement learning to rank in e-commerce search engine: formalization, analysis, and application. In SIGKDD ’18, Cited by: §1.
  • [12] J. Huang and E. N. Efthimiadis (2009) Analyzing and evaluating query reformulation strategies in web search logs. In CIKM ’09, Cited by: §1.
  • [13] K. Järvelin, S. L. Price, L. M. L. Delcambre, and M. L. Nielsen (2008) Discounted cumulated gain based evaluation of multiple-query IR sessions. In ECIR ’08, Cited by: §5.
  • [14] E. Lagergren and P. Over (1998) Comparing interactive information retrieval systems across sites: the TREC-6 interactive track matrix experiment. In SIGIR ’98, Cited by: §5.
  • [15] Q. V. Le and T. Mikolov (2014) Distributed representations of sentences and documents. In ICML ’14, Cited by: §2, §5.
  • [16] C. Li, P. Resnick, and Q. Mei (2016) Multiple queries as bandit arms. In CIKM ’16, Cited by: §1.
  • [17] J. Li, X. Chen, E. H. Hovy, and D. Jurafsky (2016) Visualizing and understanding neural models in NLP. In NAACL ’16, Cited by: §2.
  • [18] J. Luo, X. Dong, and H. Yang (2015) Session search by direct policy learning. In ICTIR ’15, Cited by: §3.
  • [19] J. Luo, S. Zhang, and H. Yang (2014) Win-win search: dual-agent stochastic game in session search. In SIGIR ’14, Cited by: §1.
  • [20] L. v. d. Maaten and G. Hinton (2008) Visualizing data using t-sne. JMLR 9 (Nov), pp. 2579–2605. Cited by: §2, §4.
  • [21] G. Marchionini (2006) Exploratory search: from finding to understanding. ACM Communications 49 (4), pp. 41–46. Cited by: §1.
  • [22] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In NIPS ’13, Cited by: §2.
  • [23] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. (2015) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529. Cited by: §2.
  • [24] F. Moraes, R. L. T. Santos, and N. Ziviani (2016) UFMG at the TREC 2016 dynamic domain track. In TREC ’16, Cited by: §3.
  • [25] S. Reddy, D. Chen, and C. D. Manning (2019) CoQA: A conversational question answering challenge. TACL 7, pp. 249–266. Cited by: §1.
  • [26] S. E. Robertson and K. S. Jones (1976) Relevance weighting of search terms. JASIS 27 (3), pp. 129–146. Cited by: §5.
  • [27] S. Robertson, H. Zaragoza, et al. (2009) The probabilistic relevance framework: bm25 and beyond. Foundations and Trends® in Information Retrieval 3 (4), pp. 333–389. Cited by: §1.
  • [28] K. Rogers and D. W. Oard (2017) UMD_clip: using relevance feedback to find diverse documents for TREC dynamic domain 2017. In TREC ’17, Cited by: §5.
  • [29] G. Salton and C. Buckley (1988) Term-weighting approaches in automatic text retrieval. Inf. Process. Manage. 24 (5), pp. 513–523. External Links: Link, Document Cited by: §1.
  • [30] E. Sandhaus (2008) The new york times annotated corpus. Linguistic Data Consortium, Philadelphia 6 (12), pp. e26752. Cited by: §5.
  • [31] C. Sankar, S. Subramanian, C. Pal, S. Chandar, and Y. Bengio (2019) Do neural dialog systems use the conversation history effectively? an empirical study. In ACL ’19, Cited by: §1.
  • [32] R. L. T. Santos, J. Peng, C. Macdonald, and I. Ounis (2010) Explicit search result diversification through sub-queries. In ECIR ’10, Cited by: §5.
  • [33] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov (2017) Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347. Cited by: §4, §4.
  • [34] R. S. Sutton and A. G. Barto (2018) Reinforcement learning: an introduction. Second edition, The MIT Press. Cited by: §1, §4.
  • [35] Z. Tang and G. H. Yang (2017) A reinforcement learning approach for dynamic search. In TREC ’17, Cited by: §3, §5.
  • [36] Z. Tang and G. H. Yang (2019) DeepTileBars: visualizing term distribution for neural information retrieval. In AAAI ’19, Cited by: Figure 2, §4.
  • [37] R. W. White and R. A. Roth (2009) Exploratory search: beyond the query-response paradigm. Synthesis lectures on information concepts, retrieval, and services 1 (1), pp. 1–98. Cited by: §1.
  • [38] Y. Xue, G. Cui, X. Yu, Y. Liu, and X. Cheng (2014) ICTNET at session track TREC2014. In TREC ’14, Cited by: §3.
  • [39] G. H. Yang, M. Sloan, and J. Wang (2016) Dynamic information retrieval modeling. Synthesis Lectures on Information Concepts, Retrieval, and Services 8 (3), pp. 1–144. Cited by: §1, §3, §3.
  • [40] G. H. Yang, Z. Tang, and I. Soboroff (2017) TREC 2017 dynamic domain track overview. In TREC ’17, Cited by: §3, §4, §5.
  • [41] W. Zhang, Y. Hu, R. Jia, X. Wang, L. Zhang, Y. Feng, S. Yu, Y. Xue, X. Yu, Y. Liu, and X. Cheng (2017) ICTNET at TREC 2017 dynamic domain track. In TREC ’17, Cited by: §5.