Consider a situation in which you want a robot assistant to fetch your wallet from the bed, as in Figure 1, where the scene contains two doors and the instruction only tells the robot to walk through the doorway. In this situation, it is clearly difficult for the robot to know through which door to enter. If, however, the robot is able to discuss the situation with the user, the ambiguity can be resolved. For example, the agent can ask the user "I am confused, please tell me which door to take?" while displaying a snapshot of its camera view on the user's smartphone. The agent can then decide its next action by also considering the user's response.
This scenario suggests that interactive robots can obtain simple advice from their users to improve task completion, in contrast to passive counterparts that have no way of getting feedback when problems occur. Indeed, we note that the recent success of virtual assistants can be attributed to their ability to interact with users, demonstrating several human-like behaviors, such as asking for more information, clarification, and confirmation, which is useful for resolving ambiguities that arise naturally in real-world tasks. Unfortunately, we do not observe such interactive behavior in physical robots. To the best of our knowledge, existing works [18, 3, 2, 1, 4] require the robot to complete tasks by itself after receiving the initial goals and instructions. Such a robot has no way to resolve confusion or ambiguity while executing its task, which motivates this work's proposed interactive framework. We use the terms robot and agent interchangeably hereafter, since our robot lives in a simulator and can thus be viewed as a virtual agent.
We propose to extend the Vision-and-Language Navigation (VLN) task, which evaluates how well an agent can learn to navigate an indoor environment and reach a target location by following natural language instructions provided by a human user. To achieve this goal, the agent has to simultaneously understand its visual surroundings and natural language. Inherently ambiguous natural language instructions and the complexity of the environment may jointly cause confusion and impede the robot's progress. In addition to the two-doors example above (shown in Figure 1), we observe many vague instructions in the VLN dataset, such as "walk a bit", where the distance to walk is unclear. In both cases, an agent cannot determine the correct action to execute by relying only on visual cues and natural language instructions.
Recent approaches to address these difficulties can be characterized into two main directions: the first explores a better learning framework such that the agent rolls back to previous states when it is confused [29, 17, 11], and the second relies on semi-supervised data augmentation [8, 25]. In the first line of research, the length of traversed trajectories tends to be much longer than necessary, since the agent has to explore the unknown by itself. The data augmentation approach, on the other hand, suffers from several problems: 1) previous methods sample many trajectories in the test environment, which would be time- and resource-inefficient if pursued by a robot in real life, and 2) these approaches generate augmented data in a user-agnostic way, so the agent cannot bootstrap off user-specific patterns. With these drawbacks in mind, we ask what a real human would do in such scenarios with insufficient information. The answer is simple – just ask for help.
To investigate the interactive behavior in a principled way, we propose three critical aspects for a learning framework:
Temporal Resolution - What is the ideal timing of interactions in the agent action sequence?
Question Category - What type of question should be asked? (i.e. request, disambiguation, confirmation, etc.)
Interaction Form - How to properly formulate agent questions and human responses for the communication to be efficient?
In this work, for Temporal Resolution, the timing is learned either naturally during the navigation or by leveraging human expert knowledge. For Question Category, we focus on the request question, namely “Which action should the robot take next, amongst the possible next actions?”. For Interaction Form, we always use the previous request question, and the human response is the correct next action in the agent’s action space. Other types of questions, such as “Shall I turn left?” (confirmation), “Shall I turn left or right?” (disambiguation), and how to effectively generate responses in natural language are left as future work.
Two agent models are proposed in this work. The simpler model, called Model Confusion or MC, mimics human user behavior under confusion. The more sophisticated model, called Action Space Augmentation or ASA, is an RL agent whose action space is designed specifically to include questions. Thanks to a proposed reward shaping mechanism, it automatically learns to ask only necessary questions at the right time during navigation. To better simulate real-world noise, we design a realistic way to distort the answers given by users, and show that only the high-level RL agent adapts dynamically to different levels of noise.
While the second agent achieves a high success rate, it still struggles with repeatedly asking questions in similar situations. To address this concern, we gather the human-agent interaction data and use it to further fine-tune the agent so that it becomes familiar with the environment.
Overall, the main contributions of our work are four-fold:
We are among the first to introduce human-agent interaction in the instruction-based navigation task, focusing on successful task completion with a minimal number of questions to users.
We propose two interaction methodologies, MC and ASA, that allow the agent to benefit from human-in-the-loop learning.
We design a simulated user for responding to agent questions and propose alternative ways of creating realistic response data.
We use the proposed approach as a data augmentation method, which is useful in a continual learning scenario, such that the agent can continually improve its performance in a customer's home.
2 Related Work
The instruction-based navigation task, which uses natural language and vision to perform robot navigation, has been investigated extensively, including works in synthetic environments [18, 3, 2] and agents trained in photo-realistic environments [1, 20, 4]. The VLN task has received significant attention recently. Several works in this line designed more powerful agents by using panoramic views, better exploration strategies [17, 11, 16], or more diverse generated environments as training data. Another line of work proposed a cross-modal intrinsic reward for better training. However, due to the lack of interaction ability, the best strategy available to these works for the situation in Figure 1 would be a random guess.
Human Robot Interaction
Human-robot interaction has long been investigated in the artificial intelligence field, specifically using dialogue as the interaction format to complete physical tasks [15, 22, 7]. Recently, an end-to-end pipeline was presented for translating natural language commands to discrete robot actions, where clarification dialogues are used to improve language parsing and concept grounding. However, the dialogues only take place before the navigation process, ignoring the possibility that confusion may arise throughout the navigation. Another work proposed to integrate human-agent interaction by introducing dialogue behavior into the VLN task. Its main contribution is a human-to-human dialogue dataset for the navigation task, where two crowd workers are asked to complete the navigation task by interacting with each other. We foresee that the interaction dynamic between two humans might differ from that between a human and an agent; that is, agent questions do not necessarily have to be based on human-human confusions, and should instead be based on the modeling approach used and the confusions of the models. Furthermore, the ultimate goal in our opinion is successful task completion during navigation. Hence our proposed framework focuses mainly on when and what to ask to optimize task success with a minimal additional load on the user (i.e., the agent is expected to learn to ask the minimum, strictly necessary number of questions).
The first line of prior works utilizes human feedback in an off-policy manner (i.e., no "ask" action in the action space). For example, one approach tries to fit a reward function using human comparisons of collected trajectories. However, due to its offline training nature, it is hard to tell how human comparisons can benefit the agent directly at test time. In contrast, our agent learns the "ask" behavior in an on-policy manner (i.e., our "ask" action is in the action space), which means the agent can still actively ask a human for help to complete the navigation. The second line of works relies on pre-defined metrics and recovery heuristics to ask for human help. For example, one method encourages the agent to ask for demonstrations when the discrepancy of a state is high. [12, 26] compare the expected state of the world to the actual state, and if the robot asks for help, a human repairs the failure condition. However, recovery heuristics require human effort for every added argument and do not generalize well. Moreover, these models do not learn the timing for help, and may therefore suffer from the same drawback as our MC agent.
Data augmentation has been shown to be an effective way to further boost the performance of the agent, as pointed out in [8, 16, 25]. The common approach, borrowing inspiration from unsupervised machine translation [21, 14], is to sample trajectories in the environment and use a speaker model to generate artificial natural language instructions. These methods benefit from their unsupervised nature, since no human effort is needed. However, the amount of augmented data is very large, which is only feasible during the training phase, assuming we have a simulator. Others [29, 25] extended this strategy to the unseen/test environment, but the agent still has to explore the environment for a large number of turns, which is too energy-inefficient and slow for the real world. Moreover, the agent cannot learn user-specific characteristics from the generated, artificial instructions. In contrast, the human-agent interaction we propose can be used to generate augmented data naturally and efficiently, requiring a human only to answer a few questions. Since the instructions are generated by humans, they are user-specific by nature. This is particularly useful when the goal is to let a human teach the agent with minimal effort.
3 Problem Formulation
Our task is the room-to-room navigation problem. The agent is given a natural language instruction with $L$ words, $X = \{x_1, x_2, \ldots, x_L\}$, where each $x_i$ is a word token. This instruction describes the navigation route $R$, which is represented by a sequence of viewpoints, $R = \{v_1, v_2, \ldots, v_M\}$. We use the terms location and viewpoint interchangeably hereafter, because the simulator used in this work does not support continuous movement between viewpoints, so the agent's location is always a viewpoint. The agent starts from the start location $v_1$ and aims for the target location $v_M$. Since determining when to stop is critical, we denote the final location at which the agent decides to stop as $v_{\text{stop}}$.
We formulate this navigation problem as a Markov Decision Process (MDP). At time step $t$, the state of the agent is $s_t$ and the possible action space is $A_t$. Formally, the state can be represented by the agent's 36-discretized panoramic view, together with the corresponding horizontal headings $\theta_{t,i}$ and vertical elevations $\phi_{t,i}$:

$$s_{t,i} = [f_{t,i}; \cos\theta_{t,i}; \sin\theta_{t,i}; \cos\phi_{t,i}; \sin\phi_{t,i}] \quad (1)$$

where $i$ ranges from 1 to 36 and $f_{t,i}$ is a feature vector derived from a ResNet pretrained on ImageNet.
For each of the 36 viewing angles, the environment provides the corresponding next navigable locations. The collection of these locations and a stop action to end the navigation constitute the action space $A_t$, where each location is also represented by Eq. (1). Note that $|A_t|$ may differ across time steps $t$; this number is determined by the simulator, since the agent may be blocked by obstacles at some of the 36 viewing angles. Once the agent picks an action $a_t \in A_t$, it moves to a new location with state $s_{t+1}$. To solve the MDP, our baseline model follows the previous idea, which leverages imitation learning and RL techniques.
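As a concrete illustration, a minimal sketch of building the per-view state vectors of Eq. (1) in Python; the 2048-dimensional ResNet feature and the 12-heading-by-3-elevation discretization are assumptions based on common VLN implementations:

```python
import numpy as np

def state_feature(resnet_feat, heading, elevation):
    # Concatenate the view's ResNet feature with trigonometric
    # encodings of its heading and elevation, as in Eq. (1).
    angle = np.array([np.cos(heading), np.sin(heading),
                      np.cos(elevation), np.sin(elevation)])
    return np.concatenate([resnet_feat, angle])

# The 36-discretized panorama: 12 headings x 3 elevation levels.
views = [state_feature(np.zeros(2048), h, e)
         for e in (-np.pi / 6, 0.0, np.pi / 6)
         for h in np.linspace(0.0, 2 * np.pi, 12, endpoint=False)]
```

Each of the 36 resulting vectors is the $s_{t,i}$ of Eq. (1) for one viewing angle.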
3.1 Interaction Ability
The agent may encounter ambiguities or get lost during navigation. Therefore, it is desirable to endow the agent with the ability to interact with a human. Whenever the agent is confused, it sends out a signal "I am lost, please help me!" to the simulated user and asks for help. We assume the simulated user is an oracle $O$, which knows where the agent is and returns the next shortest-path action $a_t^*$. However, this is only feasible in the simulated environment, because real users make mistakes when giving step-wise instructions for various reasons, including the complexity of the 3D environment. To simulate this test-time error, we assume that users make mistakes with probability $p$. In this case, we calculate the angular differences $d_j$ between the shortest-path action and each remaining action $a_j$, which are then normalized by a softmax function. We do not sample uniformly at random because even when users make mistakes, we assume they are more likely to provide actions close to the shortest-path action. Formally,

$$P(a_j) = \frac{\exp(-d_j)}{\sum_k \exp(-d_k)}.$$

The distorted answer is sampled from this distribution.
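A small sketch of this distortion, assuming angular difference is the only factor and a unit-temperature softmax (both assumptions):

```python
import numpy as np

def noisy_oracle_answer(action_headings, oracle_idx, p_err, rng):
    # With probability 1 - p_err the user answers faithfully.
    if rng.random() >= p_err:
        return oracle_idx
    # Otherwise sample an action; those closer in angle to the
    # shortest-path action remain more likely (softmax of -distance).
    diffs = np.abs(np.asarray(action_headings) - action_headings[oracle_idx])
    diffs = np.minimum(diffs, 2 * np.pi - diffs)  # wrap-around angles
    logits = -diffs
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

Note that even a distorted answer most often coincides with the shortest-path action itself, since its angular difference is zero.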
A bidirectional LSTM is used to encode the instruction:

$$h_j = \mathrm{BiLSTM}(x_j), \quad j = 1, \ldots, L,$$

where $h_j$ is the concatenation of the forward and backward outputs. The attentive panoramic view $\tilde{s}_t$ serves as the visual input:

$$\tilde{s}_t = \sum_i \alpha_{t,i} \, s_{t,i},$$

with the weight calculated as:

$$\alpha_{t,i} = \mathrm{softmax}_i \left( s_{t,i}^\top W_s \tilde{h}_{t-1} \right),$$

where $\tilde{h}_{t-1}$ is the previous instruction-aware hidden state and $W_s$ is a learnable matrix.

The decoder is an auto-regressive LSTM, where the input at every time step is the concatenation of the previous action feature and the attentive visual feature:

$$h_t = \mathrm{LSTM}([a_{t-1}; \tilde{s}_t], \tilde{h}_{t-1}).$$

It is desirable that the agent focus on the right part of the instruction throughout the navigation process. Hence we calculate the attentive instruction:

$$\tilde{x}_t = \sum_j \beta_{t,j} \, h_j.$$

The weight is calculated over all words of the instruction:

$$\beta_{t,j} = \mathrm{softmax}_j \left( h_j^\top W_x h_t \right),$$

where $W_x$ is a learnable matrix. The instruction-aware hidden state is calculated as:

$$\tilde{h}_t = \tanh\left( W_h [\tilde{x}_t; h_t] \right).$$

This instruction-aware hidden state is passed to the next time step and used for computing the action distribution at the current time step $t$:

$$p_t(a_{t,k}) = \mathrm{softmax}_k \left( {s'}_{t,k}^\top W_a \tilde{h}_t \right), \quad (10)$$

where $s'_{t,k}$ is calculated in the same way as Eq. (1) but on the next navigable locations.
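The bilinear attention used above for both the panoramic view and the instruction words can be sketched as follows; the shapes and softmax placement are assumptions consistent with the equations:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a vector of scores.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attend(keys, query, W):
    # weights_i = softmax_i(keys_i . W . query);
    # output = sum_i weights_i * keys_i (a weighted average of the keys).
    weights = softmax(keys @ W @ query)
    return weights @ keys, weights
```

The same routine instantiates both the visual attention (keys are the 36 view features, query is $\tilde{h}_{t-1}$) and the textual attention (keys are the word encodings $h_j$, query is $h_t$).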
Whenever the agent reaches a viewpoint $v_t$ at time step $t$, the environment computes the shortest path from $v_t$ to the target location $v_M$. The first action along this path is returned as the teacher action $a_t^*$ for the supervision signal. For the supervised learning loss, we calculate the cross entropy between the teacher action $a_t^*$ and $p_t$. For RL, the agent samples its actual action from the action distribution computed in Eq. (10). As in previous work, a +2 reward is given if the final location $v_{\text{stop}}$ is within 3 meters of $v_M$; otherwise, the reward is -2. We use advantage actor critic (A2C) as our RL algorithm.
We introduce two agents of different levels for human-agent interaction, as shown in Figure 2, along with a data augmentation strategy that makes use of the interaction data.
Model Confusion (MC)
In our MC model, the idea is that if the agent is confident, its predicted action distribution should be sharp. To quantify this intuition, we first sort the action probabilities in decreasing order, and say the agent is confused if the difference between the top two actions is less than a threshold $\epsilon$:

$$p_t^{(1)} - p_t^{(2)} < \epsilon, \quad (11)$$

in which case we provide the agent with the shortest-path action. The threshold $\epsilon$ controls the degree of confusion.
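A minimal sketch of the confusion test; the threshold values used in the example are hypothetical:

```python
import numpy as np

def is_confused(action_probs, epsilon):
    # Confused when the gap between the two most probable
    # actions falls below the threshold epsilon, as in Eq. (11).
    top2 = np.sort(action_probs)[::-1][:2]
    return bool(top2[0] - top2[1] < epsilon)
```

A nearly flat distribution such as (0.40, 0.35, 0.25) triggers a question at $\epsilon = 0.1$, while a sharp one such as (0.80, 0.10, 0.10) does not.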
In this simple model, since the timing of when to ask questions is not trained, we use the original action space without the ask action. Note that this method can be applied directly to the pre-trained models described in Sec. 4 at test time.
Action Space Augmentation (ASA)
We introduce as many new actions as there are types of questions the agent can ask, in addition to the original action space. In this work, the action space is enlarged by one action representing the "What should I do next?" question, which indicates a request for help. Formally, the new action space is $A'_t = A_t \cup \{a_{\text{ask}}\}$, where $a_{\text{ask}}$ is the question indicator. If the agent chooses $a_{\text{ask}}$, it remains in the same state and the oracle gives it the action on the shortest-path route to the target. Each selected $a_{\text{ask}}$ is associated with a negative reward $r_{\text{ask}} < 0$ to ensure that only necessary questions are asked. The action probability is then computed over $A'_t$ as in Eq. (10). We represent the action feature of $a_{\text{ask}}$ by an all-ones vector of the same dimension as the other action features.
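Appending the ask action's feature to the navigable-action features can be sketched as follows; the all-ones choice follows the text, while the function and variable names are hypothetical:

```python
import numpy as np

def augmented_action_features(nav_feats):
    # nav_feats: one feature row per navigable action (plus stop);
    # the appended row is the ask action's all-ones feature vector.
    ask_feat = np.ones((1, nav_feats.shape[1]))
    return np.vstack([nav_feats, ask_feat])
```

The enlarged feature matrix can then be scored by Eq. (10) unchanged, so asking competes with moving on equal footing.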
Enlarging the action space alone does not produce the desired question-asking behavior without the help of reward shaping, a useful technique for training RL algorithms. The difference in distance to the goal between two consecutive steps can be used as a shaping reward. However, this encourages the agent to bias toward the shortest path rather than follow the path indicated by the instruction. We therefore propose an additional reward shaping term, the deviation shaping, to encourage the agent to follow the instruction. We call the original shaping reward the distance shaping:

$$r_t^{\text{DIST}} = d(v_t, v_M) - d(v_{t+1}, v_M),$$

where $d(v, v_M)$ is the distance between the current location and the target location. The proposed deviation shaping is:

$$r_t^{\text{DEV}} = -d(v_{t+1}, v^*), \quad v^* = \arg\min_{v \in R} d(v_{t+1}, v),$$

where $v^*$ is the viewpoint with the shortest distance to $v_{t+1}$ among the whole ground-truth trajectory. It is straightforward to see that the better the agent follows the ground-truth trajectory, the higher this reward. Moreover, this shaping reward helps reduce the number of questions asked while preserving the same success rate, because better alignment with ground-truth trajectories leads to fewer ambiguities during navigation; without DEV, the agent asks questions at every time step. The two shaping rewards are summed during RL training. Concretely, the critic in the A2C algorithm is optimized at every time step as:

$$L_{\text{critic}} = \left( R_t - V(s_t) \right)^2,$$

where $R_t$ is the discounted cumulative reward estimated by the Monte Carlo method. We note that the intrinsic reward in prior work shares a similar idea with ours. However, they train a sequence-to-sequence critic whose input is the traversed trajectory and whose output is the instruction decoding probability. This critic is used to calculate a cycle-reconstruction reward at every time step, which is much slower than our simple but effective deviation shaping.
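The two shaping terms can be sketched as below; summing them with equal weight follows the text, while the distance function interface is an assumption:

```python
def shaping_reward(cur_pos, next_pos, goal, gt_trajectory, dist):
    # Distance shaping: progress made toward the goal this step.
    r_dist = dist(cur_pos, goal) - dist(next_pos, goal)
    # Deviation shaping: negative distance from the new position
    # to the nearest viewpoint on the ground-truth trajectory.
    r_dev = -min(dist(next_pos, v) for v in gt_trajectory)
    return r_dist + r_dev
```

A step that makes progress toward the goal but strays from the annotated route is penalized by the second term, which is what discourages pure shortest-path shortcuts.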
| Agent Types | success rate | number of questions | move steps | ask percentage |
|---|---|---|---|---|
| Base Model w/o Interaction | 0.551 / 0.471 | - / - | 5.09 / 4.95 | - / - |
5.2 Data Augmentation
Our proposed interaction methods can be used to generate augmented data. Concretely, we execute the trained agent in the test environment, where it may ask questions and receive answers from the oracle. The advantage of doing so is that by answering a few simple questions, originally wrong trajectories may be corrected. These corrected trajectories serve as augmented data that prevents the agent from making the same mistakes. The complete procedure is outlined in Figure 3. As long as users keep using the agent, we can collect more interaction data to fine-tune it in a continual learning scenario. The differences between our human-guided exploration and the pre-exploration approach [29, 25] are highlighted in Table 2.
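The collection step of this procedure might look like the following sketch; the agent, environment, and episode interfaces are hypothetical:

```python
def collect_interaction_data(agent, env, episodes, oracle):
    # Roll out the interactive agent on each episode, letting it ask
    # the oracle; the (instruction, corrected trajectory) pairs become
    # augmented data for fine-tuning.
    data = []
    for instruction, start in episodes:
        trajectory = agent.navigate(env, instruction, start, oracle=oracle)
        data.append((instruction, trajectory))
    return data
```

The collected pairs are then fed back into the supervised (and optionally RL) fine-tuning stage.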
| | human-guided exploration (ours) | pre-exploration |
|---|---|---|
| instructions | real, user-specific | fake, user-agnostic |
| trajectories | not shortest, traversed | shortest, sampled |
6 Experiments and Discussions
We describe the R2R dataset used and the performance of our model. We propose an additional evaluation metric to measure the effectiveness of the interactive behavior in the VLN task. The impact of the imperfect oracle is also investigated. Finally, we compare our data augmentation method with previous work in terms of data efficiency.
6.1 R2R Data Statistics
In the R2R dataset, annotators are given image sequences of sampled shortest-path trajectories and write down the natural language instructions that best describe the paths. The dataset contains 21,567 navigation instructions with an average length of 29 words. The instruction vocabulary consists of around 3,100 words due to the nature of the navigation task. The train set includes 61 scenes, with instructions split into 14,025 train / 1,020 val seen. 11 scenes and 2,349 instructions are reserved for validation in unseen environments (unseen validation).
6.2 Evaluation Metric
We evaluate our agent on success rate and the number of steps taken, which are the standard reported metrics of the VLN task. An episode is a success if the navigation error is less than 3 meters. In addition to the standard metrics, we propose a new evaluation metric to measure the effectiveness of the human-agent interaction: the percentage of total actions taken that are ask actions,

$$\text{ask percentage} = \frac{N_{\text{ask}}}{N_{\text{ask}} + N_{\text{move}}},$$

where $N_{\text{move}}$ is the number of moving actions other than $a_{\text{ask}}$.
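The two episode-level metrics can be sketched as:

```python
def ask_percentage(n_ask, n_move):
    # Fraction of all actions taken that are ask actions.
    return n_ask / (n_ask + n_move)

def is_success(nav_error_meters, threshold=3.0):
    # Standard VLN success: navigation error under 3 meters.
    return nav_error_meters < threshold
```

For example, an episode with 1 question and 9 moves has an ask percentage of 0.1.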
6.3 Interaction Results
In this setting, the agent is allowed to ask questions at test time, with an oracle providing shortest-path actions. For the ASA agent, we vary the penalty $r_{\text{ask}}$ associated with each ask action. For the MC agent, we adjust the confusion threshold $\epsilon$ in Eq. (11). We then test the agents on the unseen validation split. The results are in Table 1.
For the MC agent, the success rate and ask percentage increase as we raise the threshold $\epsilon$. The same observation applies to the ASA agent: with a lower penalty $r_{\text{ask}}$, the agent is encouraged to ask more questions, and both the success rate and the ask percentage increase. Comparing the two agents, performance is roughly the same in terms of success rate at the same ratio of ask actions, as shown in Figure 4. It is interesting to note that the number of move steps of the ASA agent increases while that of the MC agent remains the same. We hypothesize that this is because ASA learns the ask behavior during training, so it tries to explore more to maximize reward; for MC, the threshold is applied only at test time, so it does not learn exploration behavior. While MC may seem simpler and more effective than ASA, the ASA agent adapts more easily to errors in human-agent interactions, as we will see.
We adjust the distortion probability $p$ from Sec. 3.1 to see how our agents adapt to different levels of noise. The results are in Table 3. The ASA agent asks more questions while maintaining the same success rate, whereas the MC agent asks the same number of questions but its success rate drops linearly. The behavior of ASA is preferable, since it dynamically adjusts to different levels of noise while preserving the success rate, which is particularly useful if the agent is a real product.
6.4 Human-Guided Exploration
It is desirable that the agent further improve its navigation ability after several rounds of interaction. We split the unseen validation data into an interaction set $D_{\text{interact}}$ and an evaluation set $D_{\text{test}}$. This tests the real use case in which a user brings the agent to a new environment ($D_{\text{interact}}$) and teaches it through interaction for a while. After that, the agent is evaluated in the same environment with different instructions and paths ($D_{\text{test}}$) to assess the effectiveness.
In this setting, $D_{\text{interact}}$ and $D_{\text{test}}$ do not share the same trajectories and instructions, but the house plans are shared. The reason for this splitting strategy is to compare fairly with the pre-exploration technique, since [25, 29] ensured that the augmented paths differ from those in the test environment. We run the same experiment with the ASA and MC agents on $D_{\text{interact}}$ and obtain the interaction history data, which is used to fine-tune the agent. Finally, we test the re-trained agent on $D_{\text{test}}$ without human-agent interaction. There are 2,349 instructions in the unseen environment; we use the first 1,500 as $D_{\text{interact}}$ and the remaining 849 as $D_{\text{test}}$.
In this setting, $D_{\text{interact}}$ and $D_{\text{test}}$ may share the same trajectories but with different instructions. The motivation is to mimic the real-world scenario in which a customer buys the robot and puts it in their house; it is natural for a human to use different sentences to express the same goal. We randomly permute the unseen validation split and use the first 1,500 instructions as $D_{\text{interact}}$ and the remaining 849 as $D_{\text{test}}$.
The results of the two settings are in Table 1. We use the best ASA agent to generate augmented data. Better performance is observed in the random-split setting because the agent has already seen some trajectories in $D_{\text{interact}}$. As for the disjoint setting, we can fairly compare the results of the ASA agent with the pre-exploration baseline. With the same amount of augmented data (1,500 instructions on $D_{\text{interact}}$), our method outperforms theirs by 5% (0.554 vs. 0.504).
Finally, we limit the fine-tuning stage (stage 3 in Figure 3) to supervised training only, instead of a mixture of supervised and RL training. This further reduces time and energy consumption in the test environment, which in real life would be a new customer's home. The result is 0.514 vs. 0.483: while the performance of both methods drops due to the lack of exploration, ours still outperforms the baseline by 3%.
We vary the augmented data size of the pre-exploration approach to assess its data efficiency. The results are in Figure 5. Human-guided exploration reaches the same performance using far less data, demonstrating that our agent is more data-efficient. Moreover, this experiment highlights the importance of real instructions and trajectories.
7 Conclusion

In this paper, we propose an interactive learning framework that makes the agent capable of resolving ambiguous situations by interacting with a human during learning or execution. Two approaches are proposed: model confusion-based (MC) and RL with reward shaping (ASA). Experimental results demonstrate that our agents strike a balance between task success rate and the number of questions asked. Moreover, the RL agent can adapt dynamically to noise. Finally, we propose a strategy to fine-tune the agent using augmented data collected from human-agent interactions, which is more data-efficient and realistic than previous methods.
-  (2018) Vision-and-language navigation: interpreting visually-grounded navigation instructions in real environments. In , pp. 3674–3683. Cited by: Just Ask: An Interactive Learning Framework for Vision and Language Navigation, §1, §1, §2, §3, §6.1, §6.2.
-  (2018) Mapping navigation instructions to continuous control actions with position-visitation prediction. arXiv preprint arXiv:1811.04179. Cited by: §1, §2.
-  (2011) Learning to interpret natural language navigation instructions from observations. In Twenty-Fifth AAAI Conference on Artificial Intelligence, Cited by: §1, §2.
-  (2019) Touchdown: natural language navigation and spatial reasoning in visual street environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12538–12547. Cited by: §1, §2.
-  (2017) Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, pp. 4299–4307. Cited by: §2.
-  (2009) ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, Cited by: §3.
-  (2003) Collaboration, dialogue, human-robot interaction. In Robotics Research, pp. 255–266. Cited by: §2.
-  (2018) Speaker-follower models for vision-and-language navigation. In Advances in Neural Information Processing Systems, pp. 3314–3325. Cited by: §1, §2, §2, §4.1.
-  (2008) Human–robot interaction: a survey. Foundations and Trends® in Human–Computer Interaction 1 (3), pp. 203–275. Cited by: §2.
-  (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §3.
-  (2019) Tactical rewind: self-correction via backtracking in vision-and-language navigation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6741–6749. Cited by: §1, §2.
-  (2015) Recovering from failure by asking for help. Autonomous Robots 39 (3), pp. 347–362. Cited by: §2.
-  (2000) Actor-critic algorithms. In Advances in neural information processing systems, pp. 1008–1014. Cited by: §4.2.
-  (2017) Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043. Cited by: §2.
-  Human-robot interaction through spoken language dialogue. In Proceedings. 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000)(Cat. No. 00CH37113), Vol. 1, pp. 528–534. Cited by: §2.
-  (2019) Self-monitoring navigation agent via auxiliary progress estimation. External Links: Cited by: §2, §2.
-  (2019) The regretful agent: heuristic-aided navigation through progress estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6732–6740. Cited by: §1, §2.
-  (2006) Walk the talk: connecting language, knowledge, and action in route instructions. Def 2 (6), pp. 4. Cited by: §1, §2.
-  (1999) Policy invariance under reward transformations: theory and application to reward shaping. Cited by: §5.1.
-  (2019) Habitat: a platform for embodied ai research. arXiv preprint arXiv:1904.01201. Cited by: §2.
-  (2015) Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709. Cited by: §2.
-  Human-robot interaction based on spoken natural language dialogue. Cited by: §2.
-  (2016) Exploration from demonstration for interactive reinforcement learning. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, pp. 447–456. Cited by: §2.
-  (1998) Introduction to reinforcement learning. Vol. 2. Cited by: §5.1.
-  (2019) Learning to navigate unseen environments: back translation with environmental dropout. arXiv preprint arXiv:1904.04195. Cited by: §1, §2, §2, §3, §4.1, Figure 3, §5.2, Table 1, §6.4, §6.4.
-  (2014) Asking for help using inverse semantics. Cited by: §2.
-  (2019) Vision-and-dialog navigation. CoRR abs/1907.04957. External Links: Cited by: §2.
-  (2019) Improving grounded natural language understanding through human-robot dialog. arXiv preprint arXiv:1903.00122. Cited by: §2.
-  (2019) Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6629–6638. Cited by: §1, §2, §2, §5.1, §5.2, §6.4.