Translating Natural Language Instructions to Computer Programs for Robot Manipulation

December 26, 2020 · Sagar Gubbi Venkatesh, et al. · Indian Institute of Science

It is highly desirable for robots that work alongside humans to be able to understand instructions in natural language. Existing language-conditioned imitation learning methods predict the actuator commands from the image observation and the instruction text. Rather than directly predicting actuator commands, we propose translating the natural language instruction to a Python function which, when executed, queries the scene by accessing the output of the object detector and controls the robot to perform the specified task. This enables the use of non-differentiable modules such as a constraint solver when computing commands to the robot. Moreover, the labels in this setup are computer programs, which are significantly more descriptive than teleoperated demonstrations. We show that the proposed method performs better than training a neural network to directly predict the robot actions.


I Introduction

A generalist robot that can operate alongside humans and perform a variety of tasks in unconstrained environments is a long-standing vision of robot learning. It is highly desirable for such robots to be capable of understanding instructions in natural language from untrained users[18]. In this paper, we address the problem of programming robots using natural language.

Imitation learning has been used in recent years to learn end-to-end visuomotor policies that directly map pixels to robot actuator commands[14][33][31][21][9][28]. However, this is not the only way neural networks can be used for controlling robots. It is also possible to use sensor data such as the camera feed to construct a vector space representation of the world and then to plan a path in this space[2]. For example, an object detector can be used to find all the objects in the scene, which can then be used to determine the robot motion necessary to move the objects to particular positions. Although this introduces rigidity in the representation of the world, the advantages of this approach include modularity (the object detector can be replaced without modifying the rest of the system) and interpretability (the output of the object detector can be examined separately).

Fig. 1: The robot receives an instruction in natural language, say “Topple the lock”, observes the scene through the camera, localizes the objects in front of it, translates the natural language instruction to a Python function block, and then executes the function.
Fig. 2: The instruction in natural language is translated into a Python function block, and the output of the object detector is passed as arguments to the function.

The majority of recent works on imitation learning have used some input device such as game controller[22], VR controller[33], visual odometry based 6-DoF position tracking using smartphones[20][19], space mouse[10], etc. to record experts teleoperating a robot. In this work, we take a different approach to collect expert demonstrations. We give a natural language instruction prompt and have experts write a Python function that controls the robot to accomplish the task specified in the instruction (Fig. 1). This function takes the output of an object detector as its argument and moves the end-effector of the robot arm to perform the specified task (Fig. 2). The dataset collected in this manner is used to train a neural network that takes a natural language instruction as input and predicts a Python function block which controls the robot when executed.
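To make the data flow concrete, the following is a toy, self-contained sketch of this execution flow; the detector stub, the robot stub, and the convention that the generated code defines a run(objects, robot) function are illustrative assumptions rather than the system's actual interface.

```python
# A toy illustration (assumptions throughout) of how a generated function is
# executed: the object detector's output becomes the function's arguments.

def fake_detector():
    """Stand-in for the object detector: name -> (x, y, width, height)."""
    return {'lock': (0.2, -0.3, 0.1, 0.05)}

class FakeRobot:
    """Stand-in for the robot API assumed by the generated code."""
    def move_to(self, x, y, z, r):
        print('move_to', x, y, z, r)
    def set_suction(self, on):
        print('suction', on)

# In the proposed system, this string would be produced by the translation model.
generated_source = '''
def run(objects, robot):
    """Reach for the lock."""
    x, y, w, h = objects['lock']
    robot.move_to(x, y, z=0.15, r=0.0)   # hover above the object
    robot.move_to(x, y, z=0.02, r=0.0)   # descend to reach it
'''

namespace = {}
exec(generated_source, namespace)               # defines run(objects, robot)
namespace['run'](fake_detector(), FakeRobot())  # execute the generated function
```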

A few examples of the tasks we consider are: (a) Push the orange towards the apple, (b) Place the apple between the orange and the apple, (c) Pick up the orange and use it to push the bottle off the edge of the table. Although our robot does not use a force sensor and can only move the end-effector using position control, it is possible to expand the set of primitive instructions of the robot to include complex macro instructions, such as a peg-in-hole insertion instruction that may invoke a separately trained policy network[10]. Our approach is most suitable for “gluing” together simpler commands to compose a more complex program. A potential application for our method is in augmenting teach pendants to accept instructions in natural language.

There are several advantages of having expert demonstrations in the form of program code. One is that the expert program can invoke complex subroutines such as a constraint solver; it can be difficult to train an end-to-end neural network to copy the behavior of such complex modules. Another is that the intention of the expert is clearer and less ambiguous in the program representation than in teleoperated demonstrations. For example, to “push the orange off the table”, the program to perform this task clearly indicates the robot motion for different possible positions of the orange, whereas we would need many more teleoperated demonstrations, each corresponding to a different position of the object, to train a neural network to reliably copy the expert behavior. Finally, the program representation is more interpretable and amenable to analysis before it is executed.

Our contributions are:

  • We propose an imitation learning setup where the expert demonstrations are in the form of program code and use a neural translation model to translate instructions in English to Python code that controls the robot.

  • We show that the proposed method performs better than directly mapping natural language instructions to actuation commands.

The rest of this paper is organised as follows. In the following section, related work is discussed. Section III defines the problem statement. In Section IV, the neural network architecture that we use is described in detail. Experimental results are discussed in Section V, and Section VI concludes the paper.

II Related Work

Several recent papers have demonstrated that it is possible to learn visuomotor skills from human demonstrations[9][4][30][10][21][22][28]. Input devices such as VR controller[33], space mouse[10], visual odometry for 6-DoF position tracking using smartphones[20][19], etc. have been used to gather expert demonstrations. What is common to all of these approaches is that some input device is used to enable human experts to teleoperate the robot. In this work, we deviate from that approach by having experts indirectly control the robot by writing Python programs.

Understanding natural language in the context of the visual scene of the robot has been addressed by several papers. In [26], a robot system to pick and place common objects is built where the object is inferred from the input image and grounded language expressions. The problem of referring to objects in an unambiguous manner is addressed in [7]. Although there may be ambiguities in the natural language input, the spatial relationships between objects are used to disambiguate the meaning and resolve the object being referred to. Understanding instructions provided in spoken language with incomplete information based on the context of the input image and common sense reasoning is addressed in [5]. The authors in [15] propose a synthetic dataset for visual question answering to debug and understand weaknesses in different grounded natural language reasoning models. In [3], the Blocks dataset is proposed. This dataset contains instructions to move blocks such as “Move block 6 north of block 8” along with the positions of all the blocks in the scene before and after the instruction has been executed. Our work also has an emphasis on spatial reasoning, but we go beyond moving around a single block or object.

Unlike the above mentioned works, the Learning from Play (LfP) approach in [18] is end-to-end imitation learning with the neural network directly controlling the actuators. This builds on goal-based imitation learning where a neural network prediction is conditioned on the current image observation and the desired target image. Rather than using the target image, [18] replaces it with a latent vector derived from the natural language input. In this paper, we use the more traditional imitation learning approach and have experts translate natural language instructions into Python code.

In Concept2Robot[25], a large dataset of human demonstrations (not teleoperated) is used to learn a video classifier that predicts the task being performed in the video. This classifier is then used as a reward function for reinforcement learning to train another network that takes the natural language instruction and predicts the desired goal pose. In this work, we do not use reinforcement learning or a reward function and instead use the programs written by the expert in a fully supervised learning setting.

Much attention is devoted to object detection in the computer vision literature[12][23][24][27][29]. Although end-to-end imitation learning does not use object detection, it is also possible to use a pipelined approach where object detection is one module. For example, in [32], the pick-and-place task is performed by picking up the object at a grasp point and then bringing it near the camera to classify which bin the object should be placed in. In this paper, we use a fully convolutional object detector inspired by [23] to detect the positions and sizes of all the objects in the scene.

The problem of answering queries in natural language using data from a table is addressed in [11]. There are broadly two approaches to this problem. One way is to approach this as a semantic parsing problem and to generate a logical form or a SQL query from the natural language input. The other way is to process the natural language instruction along with the contents of the table to directly predict the answer. The latter approach subsumes the process of running the query into the neural network itself. In this paper, we generate Python function blocks rather than SQL statements from natural language.

In [1], the authors propose generating code from documentation strings. In [8], a pre-trained model for programming languages is proposed. A “transpiler” that translates code from one language to another is proposed in [13]. Although this paper also proposes generating program code from natural language, the end goal of controlling the robot is different. As a result, the evaluation metrics and baselines also differ. Moreover, our primary objective in this work is not to improve on code generation methods, but to show that generating code can outperform direct prediction of actuator commands.

III Task Description

We consider two different tasks, each specified using natural language.

III-A Arrange Task

Fig. 3: A sample function for the arrange task that takes the width and height of all the objects and determines the positions of the objects on the table as specified by the natural language instruction (shown in the docstring). The Cassowary constraint solver is used to determine the positions of the objects. Note that the extents of the table on which the objects are to be placed are normalized to lie in the range [-1, 1].
Fig. 4: A sample function for the arrange task that uses the output of the constraint solver before deciding to add additional constraints.

This task involves taking objects from a tray and placing them at different positions on the table. The instruction in natural language along with the width and height of all the objects are the inputs and the goal is to predict the positions of the objects on the table. The motion planning to pick up the object from the tray and place it at the specified location is performed separately (this is not learnt).

Figures 3 and 4 show sample programs that compute the positions of the objects for the given natural language instruction. The program uses the Cassowary constraint solver (which uses the simplex method) to declaratively specify constraints on the positions of the objects. Note that it is not entirely declarative: the program can access the intermediate solution before declaring additional constraints (Fig. 4). After the program is executed, the positions of all the objects determined by the constraint solver are used to plan the pick-and-place motion of the robot arm.
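As a rough illustration of this style of program (not taken from the paper's corpus), the sketch below arranges two objects using the `cassowary` PyPI package; the function signature, object names, spacing margin, and the assumption of [-1, 1] normalized table coordinates are illustrative.

```python
# A hedged sketch of an "arrange"-style function. Assumes the `cassowary`
# PyPI package (SimplexSolver, Variable); names, signature, and the [-1, 1]
# table coordinates are illustrative assumptions, not the paper's corpus.
from cassowary import SimplexSolver, Variable

def arrange(sizes):
    """Place the cup at the bottom-left and the apple to the right of the cup."""
    solver = SimplexSolver()
    cup_x, cup_y = Variable('cup_x', 0), Variable('cup_y', 0)
    apple_x, apple_y = Variable('apple_x', 0), Variable('apple_y', 0)
    cup_w, cup_h = sizes['cup']
    apple_w, apple_h = sizes['apple']

    # Absolute placement: bottom-left corner of the table.
    solver.add_constraint(cup_x == -1 + cup_w / 2)
    solver.add_constraint(cup_y == -1 + cup_h / 2)

    # Relative placement: apple to the right of the cup, on the same row.
    solver.add_constraint(apple_x == cup_x + cup_w / 2 + apple_w / 2 + 0.1)
    solver.add_constraint(apple_y == cup_y)

    return {'cup': (cup_x.value, cup_y.value),
            'apple': (apple_x.value, apple_y.value)}
```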

III-B Manipulation Task

Fig. 5: A sample function for the manipulation task that takes the positions and sizes of all the objects on the table and determines the sequence of robot actions to accomplish the goal specified by the natural language instruction (which is shown in the docstring). The program can control the robot by specifying the end-effector position and the suction gripper state (on/off).
Fig. 6: A sample function for the manipulation task that uses trigonometric functions from the Python standard library to compute the trajectory of the end-effector of the robot.

This task involves manipulating objects on the table as specified by the natural language instruction. Typical tasks involve reaching for an object, pushing an object somewhere, and picking-and-placing an object. The action space for controlling the robot is (a) moving the end-effector to a specified position (x, y, z, r), and (b) switching the suction gripper on or off, so the robot is controlled by emitting a sequence of end-effector poses and grip commands. An object detector makes the positions and sizes of all the objects available. The goal is to take the positions and sizes of all the objects on the table and to emit a sequence of end-effector positions and gripper on/off commands.

Figures 5 and 6 show sample programs that control the robot to accomplish the task specified by the natural language instruction. Unlike the previous task, the objects are already on the table. Moreover, the program must not merely specify the desired state; it must also directly control the robot to reach that state, so the current positions of the objects are used to compute the appropriate actions.
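For intuition, a hedged sketch of a manipulation-style pushing function is shown below; the robot API (`move_to`, `set_suction`), the (x, y, w, h) object records, and the clearance values are assumptions rather than the paper's actual interface.

```python
# A hedged sketch of a "manipulation"-style function. The robot API
# (move_to, set_suction) and the (x, y, w, h) object records are assumptions.
import math

def push_to(objects, robot, name, target):
    """Push the named object to the target (x, y) position on the table."""
    x, y, w, h = objects[name]
    tx, ty = target
    dx, dy = tx - x, ty - y
    dist = math.hypot(dx, dy)
    ux, uy = dx / dist, dy / dist            # unit vector towards the target

    # Start behind the object, on the side opposite to the push direction.
    sx, sy = x - ux * (w / 2 + 0.05), y - uy * (h / 2 + 0.05)

    robot.set_suction(False)
    robot.move_to(sx, sy, z=0.15, r=0.0)     # hover behind the object
    robot.move_to(sx, sy, z=0.02, r=0.0)     # descend to table height
    robot.move_to(tx, ty, z=0.02, r=0.0)     # push through to the target
    robot.move_to(tx, ty, z=0.15, r=0.0)     # retract
```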

IV Network Architecture

The proposed architecture is shown in Fig. 7. The natural language instruction is taken as input, and the neural machine translation model generates the Python program that performs the task specified by the instruction.

Fig. 7: The proposed neural machine translation architecture. The recurrent cells used are a single layer of LSTM cells with hidden state dimension of 1024. In the decoder, the context vector is concatenated with the decoder state and passed through fully connected layers (FC1024-ReLU-FC100-Softmax) to predict the target sequence.

It uses an LSTM-based neural machine translation model with attention[17]. Unlike most vision-language models, the neural network does not take the image observation as an input. Rather, the program generated by the network accesses the attributes of the detected objects and controls the robot based on them.

The input natural language instruction is tokenized, and the embeddings for the tokens are obtained using a pre-trained BERT model[6]. Note that the BERT layers are frozen and remain unchanged during training. The input sequence embeddings are processed by an encoder LSTM with hidden states $\bar{h}_s$. After all the input tokens are processed, a decoder LSTM predicts the target sequence that is used to construct the Python function body.

At each step $t$ of the decoder, the decoder state $h_t$ is used to attend to the encoder states and infer the context vector $c_t$ that is used to predict the output token $y_t$.

The variable-length alignment vector $a_t$, whose size equals the number of steps in the input sequence, is obtained by comparing the decoder hidden state $h_t$ with each of the encoder hidden states $\bar{h}_s$:

$$a_t(s) = \frac{\exp\big(\mathrm{score}(h_t, \bar{h}_s)\big)}{\sum_{s'} \exp\big(\mathrm{score}(h_t, \bar{h}_{s'})\big)} \qquad (1)$$

$$\mathrm{score}(h_t, \bar{h}_s) = h_t^\top W_a \bar{h}_s \qquad (2)$$

The context vector $c_t$ is computed as the weighted average of the hidden states of the encoder:

$$c_t = \sum_s a_t(s)\, \bar{h}_s \qquad (3)$$

The context vector $c_t$ and the decoder state $h_t$ are concatenated and passed through fully connected layers to predict the target sequence token $y_t$.
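A minimal NumPy sketch of this attention step is given below; the bilinear score and the tensor shapes are assumptions consistent with [17], not the exact implementation used in the paper.

```python
# A minimal NumPy sketch of the Luong-style attention step above.
# The bilinear score (W_a) and the shapes are assumptions consistent with [17].
import numpy as np

def attention_step(h_t, H_enc, W_a):
    """h_t: (d,) decoder state; H_enc: (S, d) encoder states; W_a: (d, d)."""
    scores = H_enc @ (W_a @ h_t)            # score(h_t, h_s) for each s, eq. (2)
    a_t = np.exp(scores - scores.max())
    a_t /= a_t.sum()                        # alignment vector, eq. (1)
    c_t = a_t @ H_enc                       # context vector, eq. (3)
    return a_t, c_t
```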

V Results

We first evaluate the proposed approach in a simulated environment. Subsequently, we discuss the performance on a real robot arm.

V-A Datasets

V-A1 Arrange Dataset

The arrange task involves arranging objects on the table as specified by the instruction in natural language. The object positions may be specified as absolute positions or in terms relative to other objects placed on the table. For this task, we have collected the arrange dataset, a parallel corpus of instructions in English and Python functions. The function takes the object sizes as arguments and sets the positions of the objects as indicated in the instruction. Some examples are shown in Figs. 3 and 4. Note that in addition to the object sizes, the function is also given the Cassowary linear constraint solver (the Cassowary algorithm is used by Apple UIKit to place UI elements in GUIs) so that the positions of objects can be specified as constraints to be solved. The arrange dataset has a training/development/test split of 102/11/11 samples.

We also execute each program in the corpus for 20 different random initializations of the sizes of the objects to obtain the positions of the objects given those sizes. This secondary dataset is used for a fair comparison with baseline models that directly predict the positions of the objects given the instruction and the sizes of the objects.
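A sketch of this data generation step (not the authors' code) is given below; the corpus format, object names, size ranges, and the toy arrange function are illustrative assumptions.

```python
# A hedged sketch of generating the secondary dataset: each corpus program is
# executed on 20 random size initializations and the resulting positions are
# recorded. The corpus format and the toy arrange function are assumptions.
import random

def toy_arrange(sizes):
    """Stand-in for a corpus program: put every object at the table centre."""
    return {name: (0.0, 0.0) for name in sizes}

corpus = [("place everything at the centre", toy_arrange)]

secondary = []
for instruction, arrange_fn in corpus:
    for _ in range(20):
        sizes = {name: (random.uniform(0.1, 0.3), random.uniform(0.1, 0.3))
                 for name in ('apple', 'orange', 'cup')}
        secondary.append((instruction, sizes, arrange_fn(sizes)))
```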

V-A2 Manipulation Dataset

This task involves manipulating objects already present on the table as specified by the instruction in natural language. Typical manipulation tasks in this dataset are reaching for an object, pushing an object somewhere, and picking-and-placing an object. For this task, we have collected the manipulation dataset, a parallel corpus of instructions in English and Python functions. The function takes the positions and sizes of all the objects on the table and controls the robot through an API that allows it to specify a sequence of end-effector poses and gripper states (on/off). A few examples are shown in Figs. 5 and 6. The manipulation dataset has a training/development/test split of 122/12/12 samples.

For each sample in the manipulation corpus, the Python program is executed for 20 random initializations of the positions and sizes of the objects on the table, with a mock robot that records the sequence of end-effector positions and gripper state changes. This is used for a fair comparison with baseline models that directly predict the sequence of end-effector poses given the instruction text and the sizes and positions of the objects.
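Such a mock robot could look like the hedged sketch below; the method names mirror the hypothetical robot API used in the earlier manipulation sketch and are assumptions, not the paper's interface.

```python
# A hedged sketch of a mock robot that records the emitted trajectory.
# The API (move_to, set_suction) mirrors the earlier hypothetical sketch.
class MockRobot:
    def __init__(self):
        self.trajectory = []        # sequence of (x, y, z, r, suction_on)
        self._suction = False

    def set_suction(self, on):
        self._suction = bool(on)

    def move_to(self, x, y, z, r):
        self.trajectory.append((x, y, z, r, self._suction))
```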

Fig. 8: Sample generated program (incorrect) from the test set for the arrange task. The input instruction is in the docstring. The underlined code is incorrect. The neural network seems to have overfit for instructions with multiple phrases, and the generated code resembles a sample in the training set.
Fig. 9: Sample generated program (correct) from the test set for the manipulation task. The input instruction is in the docstring. The program moves the end-effector to push the object to the intended location.
Fig. 10: Sample generated program (incorrect) from the test set for the manipulation task. The input instruction is in the docstring. The underlined code is incorrect. The network seems to have overfit since the incorrect generated program resembles a sample in the training set.
Fig. 11: Sample generated program (correct) from the test set for the manipulation task. The input instruction is in the docstring. The program successfully controls the robot to perform the task.

V-B Baselines

For the arrange dataset, we use LSTM+FC layers[3] as the baseline. The LSTM encodes the instruction text into a fixed size vector. This is concatenated with the sizes of all the objects and passed through several fully connected layers to directly predict the positions of all the objects.

For the manipulation dataset, we use an encoder LSTM to encode the instruction and a decoder LSTM. At every timestep, the decoder state and the attention context vector are concatenated with the positions and sizes of all the objects on the table, and this concatenated vector is passed through fully connected layers to predict the end-effector pose and grip state[21].

V-C Evaluation Metric

We use accuracy as the evaluation metric. Each predicted program is executed 20 times with randomized object positions and sizes. For the arrange dataset, we treat the prediction as “correct” if the absolute difference between the predicted position and the ground truth position is less than 10% of the width of the table (on both the x and y axes). For the manipulation dataset, the prediction is considered correct if the absolute difference between the predicted trajectory and the ground truth trajectory is less than 10% of the width of the table at every timestep. This is merely an easy-to-evaluate proxy for whether the robot is truly accomplishing the task in the instruction. A more thorough evaluation that properly tests whether the specified task was performed successfully is conducted on a few samples with a real robot arm (Section V-E).
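A sketch of this check is given below, assuming positions are normalized so that the table width spans 2.0 units; the array shapes and the helper name are illustrative assumptions.

```python
# A hedged sketch of the accuracy check described above. Assumes positions are
# normalized so that the table width spans 2.0 units; shapes are illustrative.
import numpy as np

TABLE_WIDTH = 2.0

def is_correct(pred, gt, tol=0.1 * TABLE_WIDTH):
    """pred, gt: (T, k) arrays of positions (arrange) or poses (manipulation)."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    return pred.shape == gt.shape and bool(np.all(np.abs(pred - gt) < tol))
```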

V-D Discussion of Results

Fig. 12: Visualization of attention over the natural language instruction when predicting the token denoted by “?”. When predicting the y-coordinate of the move function call, the attention layer is focusing on the “bottom” edge of the table to emit the “-1” token.
Fig. 13: Visualization of attention over the natural language instruction when predicting the token denoted by “?”. When predicting the y-coordinate of the move function call, the attention layer is focusing on the “orange” in the input instruction to emit the “orange” token.
Model                      Arrange Task    Manipulation Task
Baseline                   14.2%           9%
Proposed Seq2Seq model     80.8%           93.2%

TABLE I: Comparison of the performance of the baselines (Section V-B) and the proposed model (Section IV).

Table I compares the results of the proposed method with the baselines. All the architectures are trained with the Adam optimizer with a learning rate of 1e-3. For both tasks, the proposed method of generating a Python program and then executing that program outperforms the baselines, which directly regress the object positions (arrange dataset) or end-effector poses (manipulation dataset).

Figures 8-11 show a few programs generated from the test set. Figures 12 and 13 show the attention weights for different tokens of the input instruction text when predicting a particular output token. We see that the attention mechanism is focusing on the relevant part of the instruction when predicting the program.

We have also experimented with replacing words in the instruction text with synonyms. We found that replacing “put” with “keep”, “place”, and “put down” always resulted in correct predictions. Likewise, we found that removing the word “the” does not change the output. Similarly, replacing “right-top corner” with only “right-top” or “top right” results in no changes to the predicted sequence. However, substituting the words for objects, such as replacing “bottle” with “flask” or “pitcher” and “cup” with “chalice”, caused incorrect predictions.

We also found that generalization worsens as the number of phrases in the input instruction increases (Figs. 8 and 10). There are only a few samples in the training set with four phrases (such as “place the orange at the bottom-right, the apple at the top-right, banana at the center, and the lemon to the right of the apple”). The model overfits on such long instructions and gives incorrect predictions that resemble the training data. However, if the input instruction is split at the commas into multiple short phrases, the model correctly predicts the positions for each of the phrases. This is not a viable workaround, though, because many instructions cannot be split in this way, since the latter phrases refer to objects in the former (for example, “place the apple at the center, the orange at the top-right, and the banana in between them”).

V-E Demonstration on the Robot Arm

We demonstrate the complete pipeline with a Dobot Magician (Fig. 1). Common objects such as fruits, cups, and magnets are used. An object detector[16] is trained to detect the position and size of these objects, but the depth (height above the table surface) of each object is measured beforehand and hard-coded. The feed from an overhead camera is passed through the object detector, whose output is passed as arguments to the Python function generated by the proposed method from the natural language instruction, and the function is executed. Out of 25 trials, 19 were successful, with the robot accomplishing the task. All the failures were due to inaccuracies in the object detector or the suction gripper failing to pick up the object. A video of the robot in operation is available at: https://youtu.be/TtoYE3EsDkc

VI Conclusions

Computer programs are a way to precisely specify tasks. We find that programs are rich representations of expert demonstrations and are beneficial for learning to control robots. We also showed that translating natural language instructions to computer programs outperforms directly predicting the robot actuator commands. Moreover, the predicted programs are interpretable and easier to analyse than end-to-end neural networks that directly predict robot actions. Although this approach is necessarily constrained to problems whose solution can easily be expressed as a program, it may find use in augmenting teach pendants for industrial robots to generate programs from verbal instructions.

Acknowledgment

We thank Mohammed Rizvi for his suggestions. We also thank the Robert Bosch Center for Cyber-Physical Systems for funding support.

References

  • [1] A. V. M. Barone and R. Sennrich (2017) A parallel corpus of python functions and documentation strings for automated code documentation and code generation. arXiv preprint arXiv:1707.02275. Cited by: §II.
  • [2] W. Bejjani, M. R. Dogar, and M. Leonetti (2019) Learning physics-based manipulation in clutter: combining image-based generalization and look-ahead planning. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. , pp. 6562–6569. Cited by: §I.
  • [3] Y. Bisk, D. Yuret, and D. Marcu (2016) Natural language communication with robots. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 751–761. Cited by: §II, §V-B.
  • [4] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al. (2016) End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316. Cited by: §II.
  • [5] H. Chen, H. Tan, A. Kuntz, M. Bansal, and R. Alterovitz (2020) Enabling robots to understand incomplete natural language instructions using commonsense reasoning. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 1963–1969. Cited by: §II.
  • [6] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §IV.
  • [7] F. I. Doğan, S. Kalkan, and I. Leite (2019) Learning to generate unambiguous spatial referring expressions for real-world environments. arXiv preprint arXiv:1904.07165. Cited by: §II.
  • [8] Z. Feng, D. Guo, D. Tang, N. Duan, X. Feng, M. Gong, L. Shou, B. Qin, T. Liu, D. Jiang, et al. (2020) Codebert: a pre-trained model for programming and natural languages. arXiv preprint arXiv:2002.08155. Cited by: §II.
  • [9] A. Giusti, J. Guzzi, D. C. Cireşan, F. He, J. P. Rodríguez, F. Fontana, M. Faessler, C. Forster, J. Schmidhuber, G. Di Caro, et al. (2015) A machine learning approach to visual perception of forest trails for mobile robots. IEEE Robotics and Automation Letters 1 (2), pp. 661–667. Cited by: §I, §II.
  • [10] S. Gubbi, S. Kolathaya, and B. Amrutur (2020) Imitation learning for high precision peg-in-hole tasks. In 2020 6th International Conference on Control, Automation and Robotics (ICCAR), pp. 368–372. Cited by: §I, §I, §II.
  • [11] J. Herzig, P. K. Nowak, T. Müller, F. Piccinno, and J. M. Eisenschlos (2020) TAPAS: weakly supervised table parsing via pre-training. arXiv preprint arXiv:2004.02349. Cited by: §II.
  • [12] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam (2017) Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. Cited by: §II.
  • [13] M. Lachaux, B. Roziere, L. Chanussot, and G. Lample (2020) Unsupervised translation of programming languages. arXiv preprint arXiv:2006.03511. Cited by: §II.
  • [14] S. Levine, C. Finn, T. Darrell, and P. Abbeel (2016) End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research 17 (1), pp. 1334–1373. Cited by: §I.
  • [15] R. Liu, C. Liu, Y. Bai, and A. L. Yuille (2019) Clevr-ref+: diagnosing visual reasoning with referring expressions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4185–4194. Cited by: §II.
  • [16] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2016) Ssd: single shot multibox detector. In European conference on computer vision, pp. 21–37. Cited by: §V-E.
  • [17] M. Luong, H. Pham, and C. D. Manning (2015) Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025. Cited by: §IV.
  • [18] C. Lynch and P. Sermanet (2020) Grounding language in play. arXiv preprint arXiv:2005.07648. Cited by: §I, §II.
  • [19] A. Mandlekar, J. Booher, M. Spero, A. Tung, A. Gupta, Y. Zhu, A. Garg, S. Savarese, and L. Fei-Fei (2019) Scaling robot supervision to hundreds of hours with roboturk: robotic manipulation dataset through human reasoning and dexterity. arXiv preprint arXiv:1911.04052. Cited by: §I, §II.
  • [20] A. Mandlekar, Y. Zhu, A. Garg, J. Booher, M. Spero, A. Tung, J. Gao, J. Emmons, A. Gupta, E. Orbay, et al. (2018) Roboturk: a crowdsourcing platform for robotic skill learning through imitation. arXiv preprint arXiv:1811.02790. Cited by: §I, §II.
  • [21] R. Rahmatizadeh, P. Abolghasemi, A. Behal, and L. Bölöni (2018) From virtual demonstration to real-world manipulation using lstm and mdn. In Thirty-Second AAAI Conference on Artificial Intelligence. Cited by: §I, §II, §V-B.
  • [22] R. Rahmatizadeh, P. Abolghasemi, L. Bölöni, and S. Levine (2018) Vision-based multi-task manipulation for inexpensive robots using end-to-end learning from demonstration. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 3758–3765. Cited by: §I, §II.
  • [23] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779–788. Cited by: §II.
  • [24] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §II.
  • [25] L. Shao, T. Migimatsu, Q. Zhang, K. Yang, and J. Bohg (2020) Concept2Robot: learning manipulation concepts from instructions and human demonstrations. In Robotics: Science and Systems, Cited by: §II.
  • [26] M. Shridhar and D. Hsu (2018) Interactive visual grounding of referring expressions for human-robot interaction. arXiv preprint arXiv:1806.03831. Cited by: §II.
  • [27] S. G. Venkatesh and B. Amrutur (2019) One-shot object localization using learnt visual cues via siamese networks. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6700–6705. Cited by: §II.
  • [28] S. G. Venkatesh, R. Upadrashta, S. Kolathaya, and B. Amrutur (2020) Multi-instance aware localization for end-to-end imitation learning. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Cited by: §I, §II.
  • [29] S. G. Venkatesh, R. Upadrashta, S. Kolathaya, and B. Amrutur (2020) Teaching robots novel objects by pointing at them. In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 1101–1106. Cited by: §II.
  • [30] T. Yu, C. Finn, A. Xie, S. Dasari, T. Zhang, P. Abbeel, and S. Levine (2018) One-shot imitation from observing humans via domain-adaptive meta-learning. arXiv preprint arXiv:1802.01557. Cited by: §II.
  • [31] T. Yu, C. Finn, A. Xie, S. Dasari, T. Zhang, P. Abbeel, and S. Levine (2018) One-shot imitation from observing humans via domain-adaptive meta-learning. arXiv preprint arXiv:1802.01557. Cited by: §I.
  • [32] A. Zeng, S. Song, K. Yu, E. Donlon, F. R. Hogan, M. Bauza, D. Ma, O. Taylor, M. Liu, E. Romo, et al. (2018) Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 1–8. Cited by: §II.
  • [33] T. Zhang, Z. McCarthy, O. Jow, D. Lee, X. Chen, K. Goldberg, and P. Abbeel (2018) Deep imitation learning for complex manipulation tasks from virtual reality teleoperation. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–8. Cited by: §I, §I, §II.