Just Ask: An Interactive Learning Framework for Vision and Language Navigation

12/02/2019 ∙ by Ta-Chung Chi, et al. ∙ Amazon ∙ Carnegie Mellon University

In the vision and language navigation task, the agent may encounter ambiguous situations that are hard to interpret by relying only on visual information and natural language instructions. We propose an interactive learning framework to endow the agent with the ability to ask for users' help in such situations. As part of this framework, we investigate multiple learning approaches for the agent, with different levels of complexity. The simplest, model-confusion-based method lets the agent ask questions based on its confusion, relying on a predefined confidence threshold of a next-action prediction model. Building on this confusion-based method, the agent is expected to demonstrate more sophisticated reasoning such that it discovers the timing and locations at which to interact with a human. We achieve this goal using reinforcement learning (RL) with a proposed reward shaping term, which enables the agent to ask questions only when necessary. The success rate can be boosted by at least 15% with only one question asked on average during the navigation. Furthermore, we show that the RL agent is capable of adjusting dynamically to noisy human responses. Finally, we design a continual learning strategy, which can be viewed as a data augmentation method, for the agent to improve further by utilizing its interaction history with a human. We demonstrate that the proposed strategy is substantially more realistic and data-efficient compared to previously proposed pre-exploration techniques.






1 Introduction

Consider the situation in which you want a robot assistant to get your wallet from the bed, as in Figure 1, with two doors in the scene and an instruction that only tells it to walk through the doorway. In this situation, it is clearly difficult for the robot to know exactly through which door to enter. If, however, the robot is able to discuss the situation with the user, the situational ambiguity can be resolved. For example, the agent can ask the user, “I am confused, please tell me which door to take”, while displaying on the user’s smartphone a snapshot of what it sees through its camera. The agent can then decide its next action by also considering the user’s response.

This scenario suggests that interactive robots can get simple advice from their users to improve task completion, in contrast to their passive counterparts that have no way of getting feedback when problems occur. Indeed, we note that the recent success of virtual assistants can be attributed to their ability to interact with users, demonstrating several human-like behaviors, such as asking for more information, clarification, and confirmation, which are useful for resolving ambiguities that arise naturally in real-world tasks. Unfortunately, we do not observe such interactive behavior in physical robots. To the best of our knowledge, existing works [18, 3, 2, 1, 4] require the robot to complete tasks by itself after the preliminary goals and instructions are given; it has no way to resolve confusion or ambiguity while executing its task, which motivates this work’s proposed interactive framework. We use the terms robot and agent interchangeably hereafter, since our robot lives in a simulator and can be viewed as a virtual agent.

We propose to extend the Vision and Language Navigation (VLN) task [1], which evaluates how well an agent can learn to navigate in an indoor environment and reach a target location by following natural language instructions provided by a human user. To achieve this goal, the agent has to simultaneously understand the visual surroundings and natural language. The inherently ambiguous natural language instructions and the complexity of the environment may jointly cause confusion and impede the robot’s progress. In addition to the previous two-door example (shown in Figure 1), we observe many vague instructions in the VLN dataset, such as “walk a bit”, where the distance to walk is unclear. In both cases, an agent cannot determine the correct action to execute by relying only on visual cues and natural language instructions.

Recent approaches to address these difficulties mainly follow two directions: the first explores a better learning framework such that the agent rolls back to previous states when it is confused [29, 17, 11], and the second relies on semi-supervised data augmentation [8, 25]. In the first line of research, the length of traversed trajectories tends to be much longer than necessary, since the agent has to explore the unknown by itself. On the other hand, the data augmentation approach suffers from several problems: 1) previous methods sample a large number of trajectories in the test environment, which would be time- and resource-inefficient if pursued by a robot in real life, and 2) these approaches generate augmented data in a user-agnostic way, so the agent cannot bootstrap off of user-specific patterns. With these drawbacks in mind, we ask what a real human would do in such scenarios with insufficient information. The answer is simple: just ask for help.

To investigate the interactive behavior in a principled way, we propose three critical aspects for a learning framework:

  • Temporal Resolution - What is the ideal timing of interactions in the agent action sequence?

  • Question Category - What type of question should be asked? (i.e. request, disambiguation, confirmation, etc.)

  • Interaction Form - How to properly formulate agent questions and human responses for the communication to be efficient?

In this work, for Temporal Resolution, the timing is learned either naturally during the navigation or by leveraging human expert knowledge. For Question Category, we focus on the request question, namely “Which action should the robot take next, amongst the possible next actions?”. For Interaction Form, we always use the previous request question, and the human response is the correct next action in the agent’s action space. Other types of questions, such as “Shall I turn left?” (confirmation), “Shall I turn left or right?” (disambiguation), and how to effectively generate responses in natural language are left as future work.

Two agent models are proposed in this work. The simpler model, called Model Confusion or MC, mimics human user behavior under confusion. The more sophisticated model, called Action Space Augmentation or ASA, is an RL agent whose action space is designed specifically to include questions. Thanks to a proposed reward shaping mechanism, it automatically learns to ask only necessary questions at the right time during the navigation. To better simulate real-world noise, we design a realistic way to distort answers given by users, and show that only the RL agent adapts dynamically to different levels of noise.

While the second agent achieves a high success rate, it still struggles with repeatedly asking questions in similar situations. To address this concern, we gather the human-agent interaction data and use it to further fine-tune the agent so that it becomes familiar with the environment.

Overall, the main contributions of our work are four-fold:

  • We are among the first to introduce human-agent interaction in the instruction-based navigation task, focusing on successful task completion with a minimum number of questions to users.

  • We propose two interaction methodologies, MC and ASA, that allow the agent to benefit from human-in-the-loop learning.

  • We design a simulated user for responding to agent questions and propose alternative ways of creating realistic response data.

  • We use the proposed approach as a data augmentation method, which is useful in a continual learning scenario, such that the agent can continually improve its performance in customers’ homes.

2 Related Work

Instruction-Based Navigation

The instruction-based navigation tasks that use natural language and vision to perform robot navigation have been investigated extensively, including work done in synthetic environments [18, 3, 2] and agents trained in photo-realistic environments [1, 20, 4]. The VLN task [1] has received significant attention recently. Several works in this line designed more powerful agents by using panoramic views [8], better exploration strategies [17, 11, 16], or generating more diverse environments as training data [25], while [29] proposed a cross-modal intrinsic reward for better training. However, due to the lack of interaction ability, the best these works can do in the situation of Figure 1 is a random guess.

Human Robot Interaction

Human-robot interaction has long been investigated in the artificial intelligence field [9], specifically using dialogue as the interaction format to complete physical tasks [15, 22, 7]. Recently, an end-to-end pipeline was presented [28] for translating natural language commands into discrete robot actions, where clarification dialogues are used to improve language parsing and concept grounding. However, the dialogues only take place before the navigation process, ignoring the possibility that confusion may arise throughout the navigation. Another work [27] proposed to integrate human-agent interaction by introducing dialogue behavior into the VLN task. Its main contribution is a human-to-human dialogue dataset for the navigation task, where two crowd workers complete the navigation task by interacting with each other. We foresee that the interaction dynamic between two humans might differ from that between a human and an agent; that is, agent questions do not necessarily have to be based on human-human confusion, and instead should be based on the modeling approach used and the confusions of the models. Furthermore, the ultimate goal in our opinion is successful task completion during navigation. Hence our proposed framework mainly focuses on when and what to ask in order to optimize task success with a minimum additional load on the user (i.e., the agent is expected to learn to ask the minimum, strictly necessary number of questions).

Active Learning

The first line of prior work utilizes human feedback in an off-policy manner (i.e., no “ask” action in the action space). For example, [5] fits the reward function using human comparisons on collected trajectories. However, due to its offline training nature, it is hard to tell how human comparisons can benefit the agent directly at test time. In contrast, our agent learns the “ask” behavior in an on-policy manner (i.e., our “ask” action is in the action space), which means the agent can actively ask a human for help to complete the navigation. The second line of work relies on pre-defined metrics and recovery heuristics to ask for human help. For example, [23] encourages the agent to ask for demonstrations when the discrepancy of a state is high. [12, 26] compare the expected state of the world to the actual state; if the robot asks for help, a human repairs the failure condition. However, recovery heuristics require human effort for every added argument and do not generalize well. Moreover, these models do not learn the timing for help, and may therefore suffer from the same drawback as our MC agent.

Data Augmentation

Data augmentation has been shown to be an effective way to further boost the performance of the agent, as pointed out in [8, 16, 25]. The common approach, borrowing inspiration from unsupervised machine translation [21, 14], is to sample trajectories in the environment and use the speaker model [8] to generate fake natural language instructions. These methods benefit from their unsupervised nature, since no human effort is needed. However, the amount of augmented data is very large, which is only feasible during the training phase under the assumption that we have a simulator. Others [29, 25] extended this strategy to the unseen/test environment, but the agent still has to explore the environment for a large number of steps, which is too energy-inefficient and slow for the real world. Moreover, the agent cannot learn user-specific characteristics from the generated, fake instructions. In contrast, the human-agent interaction we propose can be used to generate augmented data naturally and efficiently: it only requires a human to answer a few questions. Since the instructions are given by humans, they are user-specific by nature. This is particularly useful when the goal is to let a human teach the agent with minimum effort.

3 Problem Formulation

Our task is the room-to-room navigation problem [1]. The agent is given a natural language instruction with L words, X = (x_1, ..., x_L), where each x_i is a word token. This instruction describes the navigation route R, represented by a sequence of viewpoints (v_1, ..., v_M). We use the terms location and viewpoint interchangeably hereafter, because the simulator used in this work does not support continuous movement between viewpoints, so the agent's location is always a viewpoint. The agent starts from the start location v_1 and heads to the target location v_M. Since determining when to stop is critical, we denote the final location at which the agent decides to stop as v_E.

We formulate this navigation problem as a Markov Decision Process (MDP). At time step t, the state of the agent is s_t and the possible action space is A_t. Formally, the state can be represented by the agent's 36-discretized panoramic view {o_{t,i}}, i = 1, ..., 36, and the corresponding horizontal heading θ_{t,i} and vertical elevation φ_{t,i}:

    s_{t,i} = [o_{t,i}; cos θ_{t,i}; sin θ_{t,i}; cos φ_{t,i}; sin φ_{t,i}]    (1)

where i ranges from 1 to 36 and o_{t,i} is a feature vector derived from a ResNet [10] pretrained on ImageNet [6].

For each of the 36 viewing angles, the environment provides the corresponding next navigable locations. The collection of these locations and a stop action to end the navigation constitute the action space A_t, where each location is also represented by Eq. (1). Note that |A_t| may differ across time steps t; this number is determined by the simulator, since the agent may be blocked by obstacles at some of the 36 viewing angles. Once the agent picks an action a_t ∈ A_t, it moves to a new location with state s_{t+1}. To solve the MDP, our baseline model follows the previous idea [25], which leverages imitation learning and RL techniques.
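Eq. (1) amounts to concatenating the per-view image feature with trigonometric encodings of the view angles. A minimal sketch, with dimensions and function names chosen for illustration (the actual model may tile the angle features):

```python
import numpy as np

def view_feature(resnet_feat, heading, elevation):
    """Build the per-view state slice of Eq. (1): the ResNet image
    feature concatenated with cos/sin encodings of the camera
    heading and elevation (a sketch; names are ours)."""
    angles = np.array([np.cos(heading), np.sin(heading),
                       np.cos(elevation), np.sin(elevation)])
    return np.concatenate([resnet_feat, angles])

# One of the 36 discretized views: a 2048-d ResNet vector
# plus 4 angle features -> a 2052-d state slice.
feat = view_feature(np.random.rand(2048), heading=0.5, elevation=-0.2)
assert feat.shape == (2052,)
```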

3.1 Interaction Ability

The agent may encounter ambiguities or get lost during navigation. Therefore, it is desirable to endow the agent with the ability to interact with a human. Whenever the agent is confused, it sends out a signal “I am lost, please help me!” to the simulated user and asks for help. We assume the simulated user is an oracle O: it knows where the agent is and returns the next shortest-path action toward v_M. However, this is only feasible in the simulated environment, because real users make mistakes when giving step-wise instructions for various reasons, including the complexity of the 3D environment. To simulate this test-time error, we assume that users make mistakes with probability p. In this case, we calculate the angular difference Δ_i between the shortest-path action and each remaining action a_i, and normalize the negated differences with a softmax function. We do not sample uniformly at random because, even when users make mistakes, we assume they are more likely to provide actions close to the shortest-path action. Formally, P(a_i) = softmax_i(-Δ_i). The distorted answer is sampled from this distribution.
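A sketch of this noisy oracle, assuming headings are given in radians and using the angular difference to the shortest-path action as the softmax logit (function and variable names are ours):

```python
import numpy as np

def oracle_answer(shortest_idx, headings, p, rng=np.random.default_rng(0)):
    """Simulated user of Sec. 3.1: with probability 1-p return the
    shortest-path action; with probability p sample a distorted
    answer, favoring actions whose heading is close to the
    shortest-path one (a sketch, not the exact implementation)."""
    if rng.random() >= p:
        return shortest_idx
    diff = np.abs(np.asarray(headings) - headings[shortest_idx])
    diff = np.minimum(diff, 2 * np.pi - diff)   # wrap-around angle
    logits = -diff                              # closer -> higher score
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                        # softmax over -diff
    return int(rng.choice(len(headings), p=probs))

ans = oracle_answer(0, [0.0, 0.3, 3.1], p=0.0)
assert ans == 0  # with p = 0 the oracle is always correct
```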

Figure 2: A natural language instruction is given at the beginning. As navigation proceeds, the agent decodes the action probability based on the actions taken and the current visual clue. The upper part shows the MC agent: if the difference of the top-2 action probabilities is less than a threshold, it asks a question and the oracle answers it. The lower part shows the ASA agent: there is an additional red bin indicating the ask action, and if the agent picks ask, it sends a signal to the oracle for help. After the agent is trained, we can run it in the unseen environment, collecting trajectories as augmented data.

4 Model

4.1 Architecture

Our base model architecture is inspired by previous work in VLN [8, 25]. Each word of the instruction is represented by its word vector:

    e_i = Embedding(x_i)    (2)

A bidirectional LSTM is used to encode the instruction:

    h_1, ..., h_L = BiLSTM(e_1, ..., e_L)    (3)

where h_i is the concatenation of the forward and backward outputs. The attentive panoramic view serves as the visual input:

    v_t = Σ_{i=1..36} α_{t,i} s_{t,i}    (4)

with the weight calculated as:

    α_{t,i} = softmax_i(s_{t,i}ᵀ W_v h̃_{t-1})    (5)

where h̃_{t-1} is the previous instruction-aware hidden state and W_v is a learnable matrix.

The decoder is an auto-regressive LSTM, where the input at every time step is the concatenation of the previous action and the attentive visual feature:

    h^d_t = LSTM([a_{t-1}; v_t], h^d_{t-1})    (6)

It is desirable that the agent focus on the right part of the instruction throughout the navigation process. Hence we calculate the attentive instruction:

    u_t = Σ_{i=1..L} β_{t,i} h_i    (7)

The weight is calculated over all words of the instruction:

    β_{t,i} = softmax_i(h_iᵀ W_u h^d_t)    (8)

where W_u is a learnable matrix. The instruction-aware hidden state is calculated as:

    h̃_t = tanh(W [u_t; h^d_t])    (9)

This instruction-aware hidden state is passed to the next time step and used for computing the action distribution at the current time step t:

    p_t(a) = softmax_a(s'_{t,a}ᵀ W_a h̃_t)    (10)

where s'_{t,a} is calculated in the same way as Eq. (1) but on the next navigable locations.
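The bilinear soft attention used in Eqs. (4)-(5) (and analogously in Eqs. (7)-(8)) can be sketched with toy dimensions; all names here are illustrative:

```python
import numpy as np

def soft_attention(query, keys, W):
    """Bilinear soft attention: score each key against the query
    through a learnable matrix W, softmax the scores, and return
    the attention-weighted sum of the keys."""
    scores = keys @ W @ query            # one score per key
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                 # softmax over keys
    return alpha @ keys, alpha           # attended feature, weights

rng = np.random.default_rng(0)
keys = rng.normal(size=(36, 8))          # 36 discretized view features
W = rng.normal(size=(8, 16))
query = rng.normal(size=16)              # previous hidden state
v, alpha = soft_attention(query, keys, W)
assert v.shape == (8,) and np.isclose(alpha.sum(), 1.0)
```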

4.2 Optimization

Whenever the agent reaches a viewpoint v_t at time step t, the environment computes the shortest path from v_t to the target location v_M. The action going from v_t to the next viewpoint on this path is returned as the teacher action a*_t for the supervision signal. For the supervised learning loss, we calculate the cross entropy between the teacher action a*_t and p_t(a). For RL, the agent samples its actual action a_t from the action distribution computed in Eq. (10). As in previous work, a +2 reward is given if the final location v_E is within 3 meters of v_M; otherwise, the reward given is -2. We use advantage actor-critic (A2C) [13] as our RL algorithm.

5 Methodologies

We introduce two agents of different complexity levels, shown in Figure 2, for human-agent interaction, along with the data augmentation strategy that makes use of the interaction data.

5.1 Training

Model Confusion (MC)

In our MC model, the idea is that if the agent is confident, the predicted action distribution should be sharp. To quantify this intuition, we first sort the action probabilities in decreasing order, and say the agent is confused if the difference between the top two action probabilities is less than a threshold ε:

    p_t^{(1)} - p_t^{(2)} < ε    (11)

where p_t^{(1)} and p_t^{(2)} denote the highest and second-highest action probabilities at time step t. Whenever the agent is confused, we provide it with the shortest-path action. The threshold ε controls the degree of confusion.
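The confusion test of Eq. (11) reduces to a top-2 probability gap check, as in this sketch:

```python
import numpy as np

def is_confused(action_probs, eps):
    """Model Confusion test of Eq. (11): the agent is confused when
    the gap between the top-2 action probabilities is below eps."""
    top2 = np.sort(action_probs)[-2:]    # two largest probabilities
    return (top2[1] - top2[0]) < eps

probs = np.array([0.40, 0.35, 0.15, 0.10])
assert is_confused(probs, eps=0.1)       # gap 0.05 < 0.1 -> ask
assert not is_confused(probs, eps=0.01)  # gap 0.05 >= 0.01 -> don't ask
```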

In this simple model, since the timing of asking questions is not trained, we use the original action space without the ask action. Note that this method can be applied directly to the pre-trained models described in Sec. 4 at test time.

Action Space Augmentation (ASA)

We introduce as many new actions as there are types of questions the agent can ask, in addition to the original action space. In this work, the action space is enlarged by 1 to represent the “What should I do next?” question, which is used to indicate whether to ask for help. Formally, the new action space is A'_t = A_t ∪ {a_ask}, where a_ask is the question indicator. If the agent chooses the a_ask action, it remains in the same state, and the oracle O gives it the action on the shortest-path route to v_M. Each selected a_ask is associated with a negative reward r_ask < 0, to ensure that only necessary questions are asked. The action probability becomes:

    p_t(a) = softmax_{a ∈ A'_t}(s'_{t,a}ᵀ W_a h̃_t)    (12)

i.e., the same softmax as in Eq. (10) but over the enlarged action space. We represent the action feature of a_ask by an all-ones vector of the same dimension as s'_{t,a}.
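Augmenting the action space amounts to appending the all-ones ask feature before scoring all actions, as in this sketch with toy dimensions (names are ours):

```python
import numpy as np

def action_logits_with_ask(nav_feats, W_a, h_tilde):
    """ASA sketch: append an all-ones feature row for the ask action
    to the navigable-location features, then score every action
    against the hidden state as in Eq. (12)."""
    ask_feat = np.ones((1, nav_feats.shape[1]))   # ask action feature
    feats = np.vstack([nav_feats, ask_feat])      # A' = A ∪ {ask}
    return feats @ W_a @ h_tilde                  # one logit per action

rng = np.random.default_rng(0)
logits = action_logits_with_ask(rng.normal(size=(5, 8)),
                                rng.normal(size=(8, 16)),
                                rng.normal(size=16))
assert logits.shape == (6,)  # 5 navigable actions + 1 ask action
```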

Enlarging the action space alone does not produce the desired question-asking behavior without the help of reward shaping [19], a useful technique for training RL algorithms. The difference in distance to the goal location between two consecutive steps can be used as the shaping reward [29]. However, this encourages the agent to bias toward the shortest path rather than follow the path indicated by the instruction. We propose an additional reward shaping term, the deviation shaping, to encourage the agent to follow the instruction. We call the original shaping reward the distance shaping:

    r^DIST_t = d(v_t, v_M) - d(v_{t+1}, v_M)    (13)

where d(v_t, v_M) is the distance between the current location and the target location. The proposed deviation shaping is:

    r^DEV_t = d(v_t, v*_t) - d(v_{t+1}, v*_{t+1})    (14)

where v*_t is the viewpoint with the shortest distance to v_t among the whole ground-truth trajectory. It is straightforward to see that if the agent follows the ground-truth trajectory more closely, the reward is higher. Moreover, this shaping reward helps reduce the number of questions asked while preserving the same success rate, because better alignment with the ground-truth trajectory leads to fewer ambiguities during the navigation; without DEV, the agent asks questions at every time step. The two shaping rewards are summed during RL training. Concretely, the critic in the A2C algorithm is optimized at every time step as:

    L_critic = (R_t - V(s_t))²    (15)

where R_t is the discounted cumulative reward estimated by the Monte Carlo method [24]. We note that the intrinsic reward in [29] shares a similar idea with our work. However, they train a sequence-to-sequence critic whose input is the traversed trajectory and whose output is the instruction decoding probability. This critic is used to calculate a cycle-reconstruction reward at every time step, which is much slower than our simple but effective deviation shaping.
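The two shaping terms combine progress toward the goal with progress toward the ground-truth path. A sketch, assuming the difference form for the deviation term:

```python
def shaping_rewards(d_goal, d_goal_next, d_dev, d_dev_next):
    """Distance shaping (Eq. 13) plus deviation shaping (Eq. 14):
    reward reductions in distance to the target and in distance to
    the nearest ground-truth viewpoint (the difference form of the
    deviation term is an assumption)."""
    r_dist = d_goal - d_goal_next        # closer to the target?
    r_dev = d_dev - d_dev_next           # closer to the gt path?
    return r_dist + r_dev

# Moving 1 m closer to the goal while also moving 0.5 m closer to
# the instruction path yields a shaping reward of 1.5.
assert shaping_rewards(5.0, 4.0, 1.0, 0.5) == 1.5
```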

Figure 3: The data augmentation process. The green dashed boxes indicate the training procedure in [25]. Agents are not allowed to ask questions at step 3. Finally, generalizability is tested with trajectories and instructions (D_2) different from the input (D_1) of step 3. D_1 and D_2 are described in Sec. 6.4.
Agent Types                           success rate         number of questions   move steps           ask percentage
                                      val_seen val_unseen  val_seen val_unseen   val_seen val_unseen  val_seen val_unseen
Base Model w/o Interaction            0.551    0.471       -        -            5.09     4.95        -        -
MC (ε = 0.1)                          0.614    0.562       0.58     0.56         5.01     4.88        0.103    0.103
MC (ε = 0.2)                          0.693    0.633       1.00     0.96         5.01     4.88        0.167    0.164
MC (ε = 0.3)                          0.759    0.695       1.34     1.33         5.03     4.88        0.210    0.214
MC (ε = 0.4)                          0.822    0.756       1.67     1.68         5.05     4.90        0.248    0.255
MC (ε = 0.5)                          0.854    0.807       1.95     1.99         5.05     4.92        0.278    0.287
ASA (r_ask = 0.1)                     0.790    0.732       1.25     1.92         5.39     5.49        0.188    0.259
ASA (r_ask = 0.2)                     0.690    0.710       0.84     1.59         5.58     5.72        0.131    0.218
ASA (r_ask = 0.3)                     0.704    0.676       0.74     1.41         5.36     5.42        0.120    0.206
ASA (r_ask = 0.4)                     0.632    0.598       0.38     0.85         5.29     5.25        0.067    0.139
ASA (r_ask = 0.5)                     0.590    0.494       0.06     0.15         5.04     4.85        0.012    0.030
Human-Guided Exploration (disjoint†)  0.555    0.554       -        -            5.19     4.91        -        -
Human-Guided Exploration (random)     0.565    0.649       -        -            5.18     4.93        -        -
Pre-Exploration†                      0.560    0.504       -        -            5.21     5.11        -        -
Table 1: Interactive agents, MC and ASA, both outperform the baseline without interaction. When the threshold ε is larger, MC tends to ask more questions, while ASA asks fewer questions when the penalty r_ask is larger. The move steps of MC remain the same across different ε, but those of ASA increase when r_ask decreases. This is probably because ASA learns the ask behavior during training, so it tries to explore more to maximize the reward. Finally, the proposed human-guided exploration outperforms the pre-exploration technique [25] by 5%. †We use augmented data of size 1500 for a fair comparison.

5.2 Data Augmentation

Our proposed interaction methods can be used to generate augmented data. Concretely, we execute the trained agent in the test environment, where the agent may ask questions and receive answers from the oracle O. The advantage of doing so is that, by answering a few simple questions, originally wrong trajectories can be corrected. These corrected trajectories serve as augmented data that prevent the agent from making the same mistakes. The complete procedure is outlined in Figure 3. As long as users keep using the agent, we can collect more interaction data to fine-tune the agent in a continual learning scenario. The differences between our human-guided exploration and the pre-exploration approach [29, 25] are highlighted in Table 2.

                 human-guided exploration     pre-exploration
instructions     real, user-specific          fake, user-agnostic
trajectories     not shortest, traversed      shortest, sampled
Table 2: The differences between human-guided exploration and pre-exploration.

6 Experiments and Discussions

We describe the R2R dataset used and the performance of our model. We propose an additional evaluation metric to measure the effectiveness of the interactive behavior in the VLN task. The impact of the imperfect oracle is also investigated. Finally, we compare our data augmentation method with previous work in terms of data efficiency.

6.1 R2R Data Statistics

In the R2R dataset [1], annotators are given image sequences of sampled shortest-path trajectories and write natural language instructions that best describe the paths. The dataset contains 21,567 navigation instructions with an average length of 29 words. The instruction vocabulary consists of around 3,100 words due to the nature of the navigation task. The train set includes 61 scenes, with instructions split into 14,025 train / 1,020 val_seen. 11 scenes and 2,349 instructions are reserved for validation in unseen environments (val_unseen).

       success rate        number of questions
p      ASA-0.1   MC-0.5    ASA-0.1   MC-0.5
0.0    0.732     0.807     1.92      1.99
0.1    0.731     0.743     2.25      1.99
0.2    0.731     0.688     2.69      1.99
0.3    0.732     0.619     3.55      1.99
0.4    0.722     0.562     4.44      1.94
Table 3: The impact of different noise levels p. The 0.1 after ASA indicates the ask penalty r_ask, and the 0.5 after MC indicates the threshold ε in Eq. (11).

6.2 Evaluation Metric

We evaluate our agent on the success rate and the number of steps taken, which are the standard reported metrics [1] for the VLN task. An episode is a success if the navigation error is less than 3 meters. In addition to the standard metrics, we propose a new evaluation metric to justify the effectiveness of the human-agent interaction: the ask percentage, the fraction of all actions taken that are ask actions,

    ask percentage = n_ask / (n_ask + n_move)

where n_ask is the number of a_ask actions and n_move is the number of moving actions other than a_ask.
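The metric is a simple ratio; a sketch (symbol names are ours):

```python
def ask_percentage(n_ask, n_move):
    """Fraction of all actions that are ask actions, as defined in
    Sec. 6.2: n_ask / (n_ask + n_move)."""
    return n_ask / (n_ask + n_move)

# 1 question over a trajectory with 9 move actions -> 10% ask actions.
assert ask_percentage(1, 9) == 0.1
```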

Figure 4: The performance of different agents. We can only control the threshold ε or the penalty r_ask, not the resulting ask percentage, so the points of the two curves are not plotted at the same x values.

6.3 Interaction Results

In this setting, the agent is allowed to ask questions during test time, with an oracle providing shortest-path actions. For the ASA agent, we vary the penalty r_ask associated with each ask action. For the MC agent, we adjust the confusion threshold ε in Eq. (11). We then test the agents on the unseen validation split. The results are in Table 1.

For the MC agent, the success rate and ask percentage become higher when we increase the threshold ε. The same observation applies to the ASA agent: with a lower penalty r_ask, the agent is encouraged to ask more questions, and both the success rate and the ask percentage increase. Comparing the two agents, the performance is roughly the same in terms of success rate at the same ratio of ask actions, as shown in Figure 4. It is interesting to note that the move steps of the ASA agent increase while those of the MC agent remain the same. We hypothesize this is because ASA learns the ask behavior during training, so it tries to explore more to maximize the reward. Since MC's threshold is applied only at test time, MC does not learn this exploration behavior. While MC seems simpler and more effective than ASA, the ASA agent adapts to errors in human-agent interactions more easily, as we will see.

Imperfect Oracle

We adjust the distortion probability p from Sec. 3.1 to see how our agents adapt to different levels of noise. The results are in Table 3. The ASA agent asks more questions while keeping the same success rate, whereas the MC agent asks the same number of questions but its success rate drops roughly linearly. The behavior of ASA is more desirable, since it adjusts dynamically to different levels of noise while maintaining the success rate, which is particularly useful if the agent is a real product.

6.4 Human-Guided Exploration

It is desirable that the agent can further improve its navigation ability after several rounds of interaction. We split the unseen validation data into an interaction set D_1 and an evaluation set D_2. This tests the real use case in which a user brings the agent to a new environment (D_1) and teaches it through interaction for a while. After that, the agent is evaluated in the same environment on different instructions and paths (D_2) to measure the effectiveness.

Disjoint Split

In this setting, the interaction set D_1 and the evaluation set D_2 do not share the same trajectories or instructions, but the house plans are shared. The reason for this splitting strategy is to compare fairly with the pre-exploration technique, since [25, 29] ensured the augmented paths differ from those in the test environment. We run the same experiment with the ASA and MC agents on D_1 and obtain the interaction history data, which is used to fine-tune the agent. Finally, we test the re-trained agent on D_2 without human-agent interaction. There are 2,349 instructions in the unseen environment; we use the first 1,500 as D_1 and the remaining 849 as D_2.

Random Split

In this setting, the interaction set D_1 and the evaluation set D_2 may share the same trajectories but with different instructions. The motivation is to mimic the real-world scenario in which customers buy the robot and put it in their houses: it is natural for a human to use different sentences to express the same goal. We randomly permute the unseen validation split and use the first 1,500 instructions as D_1 and the remaining 849 as D_2.

The results of the two settings are in Table 1. We use the best ASA agent (penalty 0.1) to generate the augmented data. Better performance is observed in the random split setting because the agent has already seen some trajectories in the interaction set. As for the disjoint setting, we can fairly compare the results of the ASA agent with [25]. With the same amount of augmented data (1,500 instructions), our method outperforms theirs by 5% (0.554 vs. 0.504).

Finally, we limit the fine-tuning stage (stage 3 in Figure 3) to supervised training only, instead of a mixture of supervised and RL training. This further reduces the time and energy consumed in the test environment, which in real life would be a new customer's home. The result is 0.514 vs. 0.483. While the performance of both methods drops due to the lack of exploration, ours still outperforms the baseline by 3%.

Data Efficiency

We vary the augmented data size of the pre-exploration approach to see its data efficiency. The results are in Figure 5. It is easy to see that human-guided exploration can reach the same performance by using much less data, demonstrating that our agent is more data-efficient. Moreover, this experiment highlights the importance of real instructions and trajectories.

Figure 5: The curves of different data augmentation methods. Human-Guided Exploration consistently outperforms Pre-Exploration by a large margin.

7 Conclusion

In this paper, we propose an interactive learning framework that makes the agent capable of resolving ambiguous situations by interacting with a human during learning or execution time. Two approaches are proposed: model-confusion-based (MC) and RL with reward shaping (ASA). Experimental results demonstrate that our agents can strike a balance between the task success rate and the number of questions asked. Moreover, the RL agent can adapt dynamically to noise. Finally, we propose a strategy to fine-tune the agent using augmented data collected from human-agent interactions, which is more data-efficient and realistic than previous methods.


  • [1] P. Anderson, Q. Wu, D. Teney, J. Bruce, M. Johnson, N. Sünderhauf, I. Reid, S. Gould, and A. van den Hengel (2018) Vision-and-language navigation: interpreting visually-grounded navigation instructions in real environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3674–3683. Cited by: Just Ask: An Interactive Learning Framework for Vision and Language Navigation, §1, §1, §2, §3, §6.1, §6.2.
  • [2] V. Blukis, D. Misra, R. A. Knepper, and Y. Artzi (2018) Mapping navigation instructions to continuous control actions with position-visitation prediction. arXiv preprint arXiv:1811.04179. Cited by: §1, §2.
  • [3] D. L. Chen and R. J. Mooney (2011) Learning to interpret natural language navigation instructions from observations. In Twenty-Fifth AAAI Conference on Artificial Intelligence, Cited by: §1, §2.
  • [4] H. Chen, A. Suhr, D. Misra, N. Snavely, and Y. Artzi (2019) Touchdown: natural language navigation and spatial reasoning in visual street environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12538–12547. Cited by: §1, §2.
  • [5] P. F. Christiano, J. Leike, T. Brown, M. Martic, S. Legg, and D. Amodei (2017) Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, pp. 4299–4307. Cited by: §2.
  • [6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei (2009) ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, Cited by: §3.
  • [7] T. Fong, C. Thorpe, and C. Baur (2003) Collaboration, dialogue, human-robot interaction. In Robotics Research, pp. 255–266. Cited by: §2.
  • [8] D. Fried, R. Hu, V. Cirik, A. Rohrbach, J. Andreas, L. Morency, T. Berg-Kirkpatrick, K. Saenko, D. Klein, and T. Darrell (2018) Speaker-follower models for vision-and-language navigation. In Advances in Neural Information Processing Systems, pp. 3314–3325. Cited by: §1, §2, §2, §4.1.
  • [9] M. A. Goodrich, A. C. Schultz, et al. (2008) Human–robot interaction: a survey. Foundations and Trends® in Human–Computer Interaction 1 (3), pp. 203–275. Cited by: §2.
  • [10] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §3.
  • [11] L. Ke, X. Li, Y. Bisk, A. Holtzman, Z. Gan, J. Liu, J. Gao, Y. Choi, and S. Srinivasa (2019) Tactical rewind: self-correction via backtracking in vision-and-language navigation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6741–6749. Cited by: §1, §2.
  • [12] R. A. Knepper, S. Tellex, A. Li, N. Roy, and D. Rus (2015) Recovering from failure by asking for help. Autonomous Robots 39 (3), pp. 347–362. Cited by: §2.
  • [13] V. R. Konda and J. N. Tsitsiklis (2000) Actor-critic algorithms. In Advances in neural information processing systems, pp. 1008–1014. Cited by: §4.2.
  • [14] G. Lample, A. Conneau, L. Denoyer, and M. Ranzato (2017) Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043. Cited by: §2.
  • [15] L. S. Lopes and A. Teixeira Human-robot interaction through spoken language dialogue. In Proceedings. 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000)(Cat. No. 00CH37113), Vol. 1, pp. 528–534. Cited by: §2.
  • [16] C. Ma, J. Lu, Z. Wu, G. AlRegib, Z. Kira, R. Socher, and C. Xiong (2019) Self-monitoring navigation agent via auxiliary progress estimation. External Links: Link Cited by: §2, §2.
  • [17] C. Ma, Z. Wu, G. AlRegib, C. Xiong, and Z. Kira (2019) The regretful agent: heuristic-aided navigation through progress estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6732–6740. Cited by: §1, §2.
  • [18] M. MacMahon, B. Stankiewicz, and B. Kuipers (2006) Walk the talk: connecting language, knowledge, and action in route instructions. Def 2 (6), pp. 4. Cited by: §1, §2.
  • [19] A. Y. Ng, D. Harada, and S. Russell (1999) Policy invariance under reward transformations: theory and application to reward shaping. Cited by: §5.1.
  • [20] M. Savva, A. Kadian, O. Maksymets, Y. Zhao, E. Wijmans, B. Jain, J. Straub, J. Liu, V. Koltun, J. Malik, et al. (2019) Habitat: a platform for embodied ai research. arXiv preprint arXiv:1904.01201. Cited by: §2.
  • [21] R. Sennrich, B. Haddow, and A. Birch (2015)

    Improving neural machine translation models with monolingual data

    arXiv preprint arXiv:1511.06709. Cited by: §2.
  • [22] D. Spiliotopoulos, I. Androutsopoulos, and C. D. Spyropoulos Human-robot interaction based on spoken natural language dialogue. Cited by: §2.
  • [23] K. Subramanian, C. L. Isbell Jr, and A. L. Thomaz (2016) Exploration from demonstration for interactive reinforcement learning. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, pp. 447–456. Cited by: §2.
  • [24] R. S. Sutton (1998) Introduction to reinforcement learning. Vol. 2. Cited by: §5.1.
  • [25] H. Tan, L. Yu, and M. Bansal (2019) Learning to navigate unseen environments: back translation with environmental dropout. arXiv preprint arXiv:1904.04195. Cited by: §1, §2, §2, §3, §4.1, Figure 3, §5.2, Table 1, §6.4, §6.4.
  • [26] S. Tellex, R. Knepper, A. Li, D. Rus, and N. Roy (2014) Asking for help using inverse semantics. Cited by: §2.
  • [27] J. Thomason, M. Murray, M. Cakmak, and L. Zettlemoyer (2019) Vision-and-dialog navigation. CoRR abs/1907.04957. External Links: Link, 1907.04957 Cited by: §2.
  • [28] J. Thomason, A. Padmakumar, J. Sinapov, N. Walker, Y. Jiang, H. Yedidsion, J. Hart, P. Stone, and R. J. Mooney (2019) Improving grounded natural language understanding through human-robot dialog. arXiv preprint arXiv:1903.00122. Cited by: §2.
  • [29] X. Wang, Q. Huang, A. Celikyilmaz, J. Gao, D. Shen, Y. Wang, W. Y. Wang, and L. Zhang (2019) Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6629–6638. Cited by: §1, §2, §2, §5.1, §5.2, §6.4.