Supportive Actions for Manipulation in Human-Robot Coworker Teams

05/02/2020 ∙ by Shray Bansal, et al. ∙ Georgia Institute of Technology

The increasing presence of robots alongside humans, such as in human-robot teams in manufacturing, gives rise to research questions about the kinds of behaviors people prefer in their robot counterparts. We term actions that support the interaction by reducing future interference with others supportive robot actions and investigate their utility in a co-located manipulation scenario. We compare two robot modes in a shared-table pick-and-place task: (1) Task-oriented: the robot takes actions only to further its own task objective; and (2) Supportive: the robot sometimes prefers supportive actions over task-oriented ones when they reduce future goal conflicts. Our experiments in simulation, using a simplified human model, reveal that supportive actions reduce the interference between agents, especially in more difficult tasks, but also cause the robot to take longer to complete the task. We implemented these modes on a physical robot in a user study where a human and a robot perform object placement on a shared table. Our results show that the supportive robot was perceived as a more favorable coworker by the human and also reduced interference with the human in the more difficult of two scenarios. However, it also took longer to complete the task, highlighting an interesting trade-off between task efficiency and human preference that needs to be considered when designing robot behavior for close-proximity manipulation scenarios.


I Introduction

Despite the continued growth of industrial robot sales [10], many assembly tasks are still performed manually in major industries [20]. A vision for the future of manufacturing involves robots working alongside human coworkers on tasks that exploit the respective strengths of both. Surveys identify interaction with coworkers as one of the most important job criteria for human workers [21]. We introduce interaction-supporting actions that aim to improve the coworker experience in human-robot co-located manipulation. We implement these in a close-proximity manipulation task to understand their impact on task performance and coworker perception, as compared to a robot focused solely on completing its own task.

Fig. 1: An example scene from our co-located manipulation scenario. The robot’s goal is to place all the red blocks into the row closest to itself, and the human participant’s goal is to do the same for the yellow blocks.

We term the actions necessary for an agent to complete its task in the absence of other agents task-oriented. We define supportive actions as actions that support the interaction by reducing potential interference with other agents but are not necessary for task completion. For example, when resetting a chessboard, for the agent playing black, actions that move the black pieces to their positions are task-oriented, while actions moving the white pieces towards the other player can be supportive. Although supportive actions help the other agent, they are not altruistic, as the agent hopes to benefit from the reduced interference they cause. Humans also perform supportive actions, perhaps because they model others as intentional agents that plan for mutual benefit [17, 8], or because they expect reciprocity [4], among other reasons.

Our task is inspired by other close-proximity human-robot interaction (HRI) manipulation studies [5, 14]. It involves two agents, a human and a robot, situated across a table scattered with color-coded blocks, each aiming to bring the blocks of their assigned color quickly back to their own side (Fig. 1). The agents are intentionally assigned separate goals without a direct incentive to cooperate, and the shared table is expected to induce interference. We focus on high-level decision-making and design supportive actions that proactively avoid collision by modifying the goal configuration of the other agent, i.e., by moving their blocks. In our experiments, the robot operates in one of two modes: (1) Task-oriented, where the robot takes only task-oriented actions, and (2) Supportive, where the robot takes both supportive and task-oriented actions depending on the situation. We hypothesize that the supportive mode will reduce interference and lead to a better human experience in terms of both subjective and objective measures. We test this in simulation with a simplified human model and verify it in a user study on a physical robot.

Our main contributions are the introduction of supportive actions in a human-robot collaborative manipulation task, simulation and user-study experiments that justify the use of such actions, and the identification of a trade-off between operational and usability metrics when the robot is designed to deliberately take supportive actions.

After reviewing the literature in Sec. II, we formulate the problem in Sec. III and describe our methodology in Sec. IV. We first experiment in simulation in Sec. V to design the supportive actions and then formulate hypotheses in Sec. VI. We present the user study design and implementation details in Sections VII and VIII, respectively. We analyze the results in Sec. IX, discuss them in Sec. X, and conclude in Sec. XI.

II Related Work

Human-robot interaction (HRI) includes collaborative scenarios where agents work to achieve a common goal and others where agents have separate, sometimes competing, objectives. Our goal is to study scenarios where humans and robots work alongside each other, and interaction arises from conflict due to shared resources (such as space).

In manipulation, work on human-robot co-presence focuses on scenarios where the human is either treated as an obstacle to be avoided [14, 13] or as a leader [7] to be assisted. In the former, the human's goal is either not considered at all or used only to make predictions that guide more proactive obstacle avoidance; in the latter, the robot shares the human's goal. Our task involves separate goals for the two agents, and we consider the question of whether the robot should take actions that support the interaction without direct task-completion benefits for itself.

To assist the human by anticipating their actions, Hawkins et al. [7] exploit task structure and Nikolaidis et al. [15] perform online adaptation to user preferences. Like us, both of these approaches focus on the high-level decision-making aspect of the task. Cherubini et al. [2] plan low-level robot actions that successfully reduce human workload in automotive manufacturing, and Koppula et al. [11] perform assistive actions adapted to the predictions of a learned model of the human's activities.

These methods have helped improve collaborative task performance, but their inherent assumption is that the robot's role is to assist and/or stay out of the way. This assumption simplifies the robot's decision-making to favor actions that directly further the human's objective. However, the types of roles and interaction modes in mixed human-robot teams are richer, as shown by Gombolay et al. [6].

Similar to our task, Gabler et al. [5] plan robot actions in a close-proximity human-robot collaborative scenario. Although both agents share a common goal in their task, they utilize a game-theoretic model that treats the human as an agent with a different utility, and they use this goal-driven behavior while planning to increase joint task-efficiency. While their model also considers the robot's influence on the human, it is used only to find an optimal ordering of existing task-oriented actions. In contrast, we design supportive actions that influence human behavior by moving the human's blocks, modifying their goal configuration to help avoid future interference.

III Problem Formulation

We design a pair of pick-and-place tasks on a table shared by a human and a robot and represent them as a two-agent game. The table has two sets of blocks distinguished by color; we assign one set, $B_R$, to the robot and the other, $B_H$, to the human. We draw a grid on the table and place each block $b$ in a single cell. We define this cell as the block's location, $l_b$, and a cell near its assigned agent as its destination, $d_b$. A state is a configuration of blocks on the grid, $s = \{ l_b \mid b \in B_R \cup B_H \}$. Fig. 2 shows a grid configuration where $B_R = \{r_1, r_2\}$ and $B_H = \{h_1, h_2\}$.

Fig. 2: An example board configuration consisting of blocks $r_1, r_2$ for the robot (red) and $h_1, h_2$ for the human (yellow). Robot actions $a_1, \ldots, a_4$ are depicted by the arrows. $a_1$ and $a_2$ are task-oriented actions, while $a_3$ and $a_4$ are supportive; $a_3$ is the more useful supportive action because it reduces potential interference when reaching for block $r_2$.

In this task, an action can move at most one block to a different location. For instance, in Fig. 2, action $a_1$ moves block $r_1$ from its location to its destination. We also allow idle actions that do not move any blocks. Both agents are instructed to start performing actions simultaneously; if one agent finishes their action early, they must wait for the other to complete theirs before starting the next one. We assign each agent the goal of reaching, in minimum time, a state $s^*$ in which each of their blocks is in its destination cell. Their goal depends only upon their own blocks; the locations of the other agent's blocks are not directly relevant.
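
To make the formulation concrete, the following minimal Python sketch shows one way to represent blocks, states, and actions; the class and helper names are our own illustration, not the paper's implementation.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    Cell = Tuple[int, int]  # (row, col) grid cell

    @dataclass
    class Block:
        color: str         # "red" (robot's) or "yellow" (human's)
        location: Cell     # current cell l_b
        destination: Cell  # destination cell d_b near the assigned agent

    def at_goal(blocks: List[Block], color: str) -> bool:
        # An agent's goal state s* depends only on its own blocks.
        return all(b.location == b.destination
                   for b in blocks if b.color == color)

    def apply_action(block: Optional[Block], new_cell: Optional[Cell]) -> None:
        # An action moves at most one block; block=None models an idle action.
        if block is not None and new_cell is not None:
            block.location = new_cell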

IV Method

We first explain how to construct the sets of task-oriented and supportive actions and then describe two decision-making strategies used by the robot to perform the task.

IV-A Action Sets

We define two action sets for the robot to use: task-oriented, $\mathcal{A}_T$, and supportive, $\mathcal{A}_S$. $\mathcal{A}_T$ includes actions that each move a robot block to its destination,

$\mathcal{A}_T = \{\, a_r \mid r \in B_R,\ a_r \text{ moves } r \text{ from } l_r \text{ to } d_r \,\}. \quad (1)$

$\mathcal{A}_S$ includes the supportive actions. We define a supportive action, $a_h \in \mathcal{A}_S$, for each human block, $h \in B_H$, that lies close to the robot, and we set the closest empty cell to $h$ that is also nearer to the human as its destination, $d_h$. In this way, we balance the cost of the additional action against the reduction in potential interference, while favoring the human's preference for retrieving objects near them. For example, in Fig. 2, $\mathcal{A}_T = \{a_1, a_2\}$ and $\mathcal{A}_S = \{a_3, a_4\}$.
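
A destination of this kind could be computed as in the sketch below, which reuses the Block and Cell types from the sketch in Sec. III. Manhattan distance on the grid and row distance to the human's side are our assumptions; the paper does not specify its exact metric.

    def manhattan(a: Cell, b: Cell) -> int:
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def supportive_destination(block, cells, occupied, human_row):
        # Candidate cells: empty, and nearer to the human's side than the
        # block's current cell (row distance stands in for "nearer to the
        # human").
        closer = [c for c in cells
                  if c not in occupied
                  and abs(c[0] - human_row) < abs(block.location[0] - human_row)]
        # Choose the candidate closest to the block, balancing the cost of
        # the extra move against the interference it removes.
        return min(closer, key=lambda c: manhattan(c, block.location),
                   default=None)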

IV-B Task-Oriented Robot

The task-oriented baseline randomly samples an action from the task-oriented set, $\mathcal{A}_T$, at the given state $s$. Its goal is to complete the task with the fewest actions; it chooses randomly because every task-oriented action is necessary for reaching the goal state.

IV-C Supportive Robot

The supportive robot chooses actions using a policy, $\pi$, containing actions from both the task-oriented and supportive sets. This policy is an ordered list of actions ranked by priority and is defined before the task begins. Here, we describe the heuristic approach we took to create $\pi$ for the task, with the goal of reflecting the utility of supportive actions.

We initialize $\pi$ as an empty list and populate it by iterating over the following rules until no new action is generated. We also initialize $B$ to a list of all the blocks on the grid.

  1. Return nothing if $B$ is empty; the iteration terminates.

  2. If a robot block exists in $B$ such that no human block in $B$ might cause a conflict when reaching for it, then pop it and return a task-oriented action for it.

  3. Else, pop the human block in $B$ that conflicts with the most robot blocks in $B$ and return a supportive action for it.

This approach is designed to produce actions that reduce the probability of collision between the human and the robot while trying to minimize the task completion time, and it is applicable to any block configuration.
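
The rule iteration above can be realized as a short loop. In this sketch, conflicts(h) is a hypothetical helper returning the robot blocks whose retrieval human block h could interfere with; the paper's geometric conflict check would stand behind it.

    def build_policy(robot_blocks, human_blocks, conflicts):
        policy = []
        B = list(robot_blocks) + list(human_blocks)
        while B:                        # Rule 1: stop once B is empty.
            humans = [b for b in B if b in human_blocks]
            robots = [b for b in B if b in robot_blocks]
            if not robots:
                break                   # leftover human blocks pose no conflict
            # Rule 2: a robot block free of potential human conflict gets
            # a task-oriented action.
            free = [r for r in robots
                    if not any(r in conflicts(h) for h in humans)]
            if free:
                B.remove(free[0])
                policy.append(("task-oriented", free[0]))
            else:
                # Rule 3: otherwise move the human block that conflicts
                # with the most remaining robot blocks.
                h = max(humans,
                        key=lambda b: sum(r in conflicts(b) for r in robots))
                B.remove(h)
                policy.append(("supportive", h))
        return policy

Tracing this loop on the configuration of Fig. 2, with conflicts(h1) = {r2} and conflicts(h2) empty, reproduces the ordering described at the end of this section.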

Given the predefined policy, $\pi$, the robot checks the list in order and executes the first action that is feasible in the current state $s$. If no feasible action is found, it defaults to sampling from the available task-oriented actions until the goal is reached. We fix the list $\pi$ so that participants observe similar behavior from the robot in every trial when studying the effect of supportive actions.
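
A sketch of this execution logic, with feasible as a hypothetical predicate for whether an action can currently be carried out:

    import random

    def select_action(policy, state, feasible, task_oriented_actions):
        # Execute the highest-priority feasible action in the fixed list;
        # fall back to sampling task-oriented actions once the list is
        # exhausted or blocked.
        for action in policy:
            if feasible(action, state):
                return action
        remaining = [a for a in task_oriented_actions if feasible(a, state)]
        return random.choice(remaining) if remaining else None  # None = idle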

Fig. 2 depicts an example task with four blocks, task-oriented actions $a_1$ and $a_2$, and supportive actions $a_3$ and $a_4$. The policy, $\pi$, for this scenario is $(a_1, a_3, a_2)$. The task-oriented action $a_1$ is included first because block $r_1$ has no potential goal conflict; the robot then takes a supportive action, $a_3$, to reduce the potential interference from block $h_1$; finally, it completes the task with the last task-oriented action, $a_2$. The planner ignores the supportive action $a_4$ because block $h_2$ causes no potential interference with the robot's blocks.

V Simulated Experiment

We simulate a scenario with two planar robot arms performing pick-and-place actions in 2D (Fig. 3). Our goal is to observe the effect of supportive actions in an idealized setting, without the variance introduced by participants or by errors in sensing and actuation.

Fig. 3: The simulated 2D environment with two arms; one simulates the human while the other is controlled by the robot policy.

We develop an OpenRAVE [3] environment with blocks of two colors scattered on a table and assign each arm six blocks of one color. The goal of each arm is to bring the blocks of its assigned color to the destination area near the arm, highlighted in Fig. 3. We define a grid on the table and place the blocks into its cells according to two configurations, easy and hard, shown in Fig. 4. We treat one arm as the robot and the other as a simulated human. The simulated human chooses task-oriented actions, prioritizing closer blocks, while the robot follows either the task-oriented or the supportive algorithm from Sec. IV. We use the RRT* [12] implementation in OMPL [18] to plan joint-space trajectories.
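
The simulated human's choice rule reduces to picking the nearest unfinished block; a minimal sketch, reusing the manhattan helper and block representation from Sec. IV:

    def simulated_human_action(human_blocks, hand_cell):
        # The simulated human is purely task-oriented and prioritizes the
        # block currently closest to its hand.
        pending = [b for b in human_blocks if b.location != b.destination]
        if not pending:
            return None  # goal reached; stay idle
        target = min(pending, key=lambda b: manhattan(b.location, hand_cell))
        return ("task-oriented", target)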

Scenario   Robot Mode      Task Time (s)   Safety Stops
Easy       Task-Oriented
Easy       Supportive
Hard       Task-Oriented
Hard       Supportive
TABLE I: Simulation Results

Results. The two scenarios and two robot modes yield four experimental conditions. We run each condition several times and present the averaged results in Tab. I. The time the slower agent takes to complete the task is termed Task Time. We also record the number of times the simulated robot was stopped during the interaction to prevent a collision and term these Safety Stops; when this happens, the robot stops and waits for the simulated human to move a threshold distance away, while the human remains free to move. Tab. I shows that task completion time is higher for the supportive robot but that its safety stops are fewer, with a larger effect on both metrics in the hard scenario. The supportive robot is always slower than the human: although the additional actions lengthen its task time, they also reduce goal conflict, leading to fewer safety stops in the hard scenario.
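
The stop-and-wait behavior that produces these counts can be sketched as follows; the agent interface (position(), advance(), a stopped flag) is our own abstraction, and the hysteresis between stop and resume distances reflects the requirement that the human move a threshold distance away before the robot resumes.

    import math

    def distance(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def step_with_safety(robot, human, stop_dist, resume_dist, stats):
        # The robot halts within stop_dist of the human and resumes only
        # once the human has moved resume_dist away; the human is never
        # paused.
        d = distance(robot.position(), human.position())
        if not robot.stopped and d < stop_dist:
            robot.stopped = True
            stats["safety_stops"] += 1  # count each distinct stop once
        elif robot.stopped and d > resume_dist:
            robot.stopped = False
        if not robot.stopped:
            robot.advance()
        human.advance()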

(a) Easy
(b) Hard
Fig. 4: Layout of the easy (left) and hard (right) block configurations, viewed such that the human is seated below row A. The human places yellow blocks on the numbered cells below row A, whereas the robot is across the table and places red blocks in row G. The difficulty stems from the conflict caused by the robot and the human reaching into the same space; this conflict is greater in (b), where most of the yellow blocks lie directly in front of the robot's.

VI Hypotheses

Following the simulation results, we anticipate that the robot's behavior and the initial block configuration affect collaborative performance. We formulate the following hypotheses, to be tested in a user study with a physical robot.

  H1. Supportive actions will reduce the interference between the agents. In particular, we expect supportive actions to reduce the safety stops occurring in the interaction, especially in difficult scenarios.

  H2. Supportive actions will reduce the human's time to complete the task. We expect people to complete the task faster when interacting with the supportive robot, leading to more idle time, especially in difficult scenarios.

  H3. Supportive actions will have a positive effect on the subjective measures of task performance. We expect that participants will prefer the supportive robot as a coworker, especially in difficult scenarios.

  H4. Changing the initial block configuration will affect both the subjective and objective measures. In particular, we expect participants to find the task more difficult when the initial block configuration includes more goal conflicts. We also expect the effects of supportive actions to be more pronounced in difficult scenarios in general.

VII User Study Design

We conduct a user study to test the effect of the supportive actions. The study was approved by Monash University’s Ethics Review Board.

VII-A Independent Variables

We manipulate two independent variables.

  • Robot mode: {Task-Oriented, Supportive}, as described in Section IV.

  • Scenario: {Easy, Hard} block configurations (Fig. 4).

The block configurations in the easy and hard scenarios are designed to cause different levels of goal conflict. While both include six blocks, the robot's blocks in the hard scenario are arranged directly in front of the human's. We expect this to increase task difficulty by causing more interference, since both agents must reach into the same space.

VII-B Participant Allocation

We recruited subjects for a within-subject study. To reduce order effects, we counterbalanced the order of the robot modes; we kept the scenario order fixed, with hard always following easy. Participants were not informed about the kind of robot they would be interacting with or how many robot types there were.

VII-C Procedure

The experiment took place in a university lab under experimenter supervision. We seated participants in front of the robot as depicted in Fig. 5. After the participant read the explanatory statement and signed a consent form, the experimenter explained the task by reading from a script.

Participants were assigned the yellow blocks, and their goal was to move these blocks to their destinations accurately while minimizing task time. The start of a turn was signaled on the scanning display shown in Fig. 5, and both agents performed reaching actions simultaneously, continuing until all of their blocks were in their respective destinations. This concluded one trial, and each participant performed four. Participants also completed three types of surveys: a demographic one at the start of the experiment, one after every trial, and one at the end recording their overall experience.

VII-D Dependent Variables

We record both objective and subjective metrics.

Objective measures. We study the effect of supportive actions on each agent's task completion time, on the total number of safety stops, and on the human's idle-time ratio. Task completion time is the time an agent takes to complete a trial; it is easily measured for the robot, since we programmatically record when the robot starts and finishes each action, while for the human we annotate it manually from video recordings of the experiments. We also annotate the time the human waits for the robot after completing an action and compute the human-idle ratio as the accumulated wait time over a trial divided by the human's total execution time. Finally, we count the number of times the robot has to stop due to proximity to the human as safety stops.
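
The idle-time ratio is then a direct computation over the annotated wait intervals, for example:

    def human_idle_ratio(wait_intervals, task_time):
        # wait_intervals: annotated (start, end) spans in which the human
        # waited for the robot; task_time: the human's total trial time.
        idle = sum(end - start for start, end in wait_intervals)
        return idle / task_time

    # e.g. human_idle_ratio([(3.0, 5.5), (9.0, 10.0)], 60.0) -> ~0.058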

Subjective measures. Participants answered ten scale questions after each trial. Five of these form a Likert scale that measures the robot's proficiency as a coworker and includes statements about the robot's helpfulness, action selection, intention prediction, and disruption. The remaining questions are treated as individual scale items. We adapt this survey from collaborative HRI studies such as [9]. The Likert-scale items are listed in Tab. II and the individual items in Tab. III; a sketch of the Cronbach's α computation follows Tab. III.

Robot coworker proficiency (Cronbach's α)
I believe the robot accurately perceived my goals.
The robot was helpful and/or cooperative.
The robot seemed to select the correct object to pick up most of the time.
The robot disrupted me in efficiently performing the task. (R)
I felt uncomfortable with the robot. (R)
TABLE II: Likert scale composed of individual survey items. (R) indicates a reverse-scaled item.
Individual Measures
I1 How successful were you in achieving your task?
I2 How hard did you have to work to accomplish your level of performance? (R)
I3 How much attention did you pay to the robot and its performance during the task?
I4 I felt unsafe with the robot. (R)
I5 How would you grade the robot as a coworker, overall?
TABLE III: Individual scale items from the survey.
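
For reference, Cronbach's α for a scale such as the one in Tab. II follows the standard item-variance formula; a minimal sketch, assuming reverse-scored (R) items have already been re-coded:

    import numpy as np

    def cronbach_alpha(responses):
        # responses: (n_participants, k_items) array of scale answers.
        x = np.asarray(responses, dtype=float)
        k = x.shape[1]
        item_variances = x.var(axis=0, ddof=1).sum()
        total_variance = x.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_variances / total_variance)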

VIII Implementation Details

Our user study setup is depicted in Fig. 5 and consists of the robot and the human positioned around a table with a checkerboard grid on which we place the blocks. We mount an RGB-D sensor overhead to detect the blocks and the person's arm; these detections guide the robot's action selection and trajectory planning, which are implemented on a Universal Robots UR5 arm using the Robot Operating System (ROS) [16]. We also include a scanning area that informs the participant of the destinations for their blocks. The experiment is fully autonomous and does not require experimenter intervention.

Fig. 5: The experimental setup

VIII-A Sensing

The location of the grid is calibrated in the camera frame ahead of time using OpenCV [1], and we apply a simple color-blob detection technique to the RGB image in real time to localize the blocks.
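
A minimal version of such a detector, using standard OpenCV calls with HSV bounds that would be tuned per block color (the bounds and function name are illustrative, not the study's values):

    import cv2
    import numpy as np

    def detect_blocks(bgr_image, lower_hsv, upper_hsv):
        # Threshold in HSV space, then take contour centroids as block
        # pixel locations.
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        centers = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > 0:  # skip degenerate blobs
                centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centers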

We instructed participants to wear a colored glove covering their arm to allow for easy detection. We ensure safety by stopping the robot arm whenever the user's hand comes within a fixed distance threshold.

VIII-B Robot Control

We implement both the task-oriented and supportive robot policies for action selection. For a given goal grid location, we generate waypoints that bring the robot end-effector to it at a fixed vertical offset from the grid and use the MoveIt framework [19] to generate a Cartesian path. The robot controller follows this path, after which the arm moves vertically down to grab or drop the block and then moves back up. Robot joint speed is limited to ensure user safety and comfort.
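
One plausible way to realize this motion with the MoveIt Python interface is sketched below; the pose names are illustrative, and the step value is a placeholder rather than the value used in the study.

    import moveit_commander

    def plan_place(group, above_pose, down_pose):
        # group: a moveit_commander.MoveGroupCommander for the arm.
        # Waypoints move to a pose at a fixed height above the cell,
        # straight down to grab or drop the block, then back up.
        waypoints = [above_pose, down_pose, above_pose]
        plan, fraction = group.compute_cartesian_path(
            waypoints, eef_step=0.01, jump_threshold=0.0)
        return plan if fraction == 1.0 else None  # None -> replan/fall back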

We also included a camera station where participants scanned blocks and, after a short delay, were informed of their destinations. We use this delay to compensate for the human's higher speed and to keep human and robot actions synchronized.

IX User Study Results

We compare the independent variables first through the objective task-performance metrics and then through participants' survey responses. We had to remove the data of two participants, one due to a robot failure and the other because the participant did not follow the experimental directions; we analyze the trials of all remaining participants.

IX-A Objective Measures

(a) Safety Stops
(b) Human Task Time
(c) Human Idle Time
Fig. 6: Objective Measures. Box-and-whisker plots of (a) the number of safety stops; (b) the time taken by the human to complete the task; and (c) the proportion of idle time spent by the human. Note: T-O refers to the task-oriented robot.

We analyze some of the objective metrics in Fig. 6.

Safety Stops. We count the times the robot has to stop due to its proximity to the human's arm. We compare the robot modes with a Wilcoxon signed-rank test within each scenario because the data were not normally distributed. We find a significant effect due to the supportive robot in the hard scenario: Fig. 6(a) shows that the supportive robot had fewer stops in the hard scenario, affirming H1.
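
Such a paired comparison can be run with SciPy; the counts below are toy data for illustration, not the study's measurements.

    from scipy import stats

    def compare_modes(stops_task_oriented, stops_supportive):
        # Paired per-participant safety-stop counts within one scenario.
        # wilcoxon() requires at least one nonzero paired difference.
        w, p = stats.wilcoxon(stops_task_oriented, stops_supportive)
        return w, p

    w, p = compare_modes([2, 3, 5, 1, 5, 7], [1, 1, 2, 1, 1, 2])  # toy data
    print(f"Wilcoxon signed-rank: W={w}, p={p:.3f}")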

Robot Task Time. We use a repeated-measures two-way ANOVA to compare the robot's task completion time. We find a significant effect due to the supportive robot and no interaction effect. Table IV shows that the addition of supportive actions led to a longer robot task time.

Robot Mode   Robot Task Time (s)
Baseline
Supportive
TABLE IV: Task completion time of the robot.

Human Task Time. We use a Wilcoxon signed-rank test to compare the human's task completion time due to the non-normality of these data. We find no significant effect due to supportive actions in either scenario. Fig. 6(b) shows that humans interacting with the supportive robot are faster on average, but with high variance, partly contradicting H2.

Human Idle Time. We use a repeated-measures two-way ANOVA to analyze the human's idle-time ratio as a measure of task fluency. We compute this ratio by accumulating the time the human waited for the robot to complete an action before they could start the next one and dividing it by the human's task time. We find significant effects due to both robot mode and scenario. Fig. 6(c) shows that the supportive robot and the hard scenario each led to higher idle time, partially affirming H2. This measure is adapted from [9], where it was found to correlate with higher human preference.

Supportive actions. The robot took fewer supportive actions on average in the easy scenario than in the hard one, owing to the smaller number of goal conflicts. Participants themselves took very few supportive actions overall, and all of them occurred in the supportive-robot condition.

Summary. The supportive robot confirms H1 in the hard scenario by reducing interference. It partly confirms H2: the human's idle time increased, but the human's task completion time was not significantly reduced. The supportive robot also takes longer to complete the task.

IX-B Subjective Measures

(a) Likert Scale
(b) Individual Measures
(c) Safety Perception
Fig. 7: Subjective Measures. Box-and-whisker plots of (a) the normalized survey responses to the Likert-scale items for the two robot modes, separated by scenario; (b) the responses to measures of subjective task difficulty and attention to the robot for the two scenarios; and (c) safety perception for the two robot modes. Note that the leftmost box in (b) and the rightmost box in (c) have no height and therefore appear as lines.

We analyze some of the survey responses in Fig. 7.

Robot coworker proficiency. We perform a two-way repeated-measures ANOVA on the Likert scale from Table II and find a significant interaction effect. The normalized responses in Fig. 7(a) show that participants prefer the supportive robot in the hard scenario but have no preference in the easy one, affirming H3 for this measure. They also show that preference for the supportive robot grows as task difficulty increases, whereas preference for the task-oriented robot remains similar regardless of difficulty.

Scenario Effect. We use Wilcoxon signed-rank tests to compare the individual scale responses from Table III. We find a significant scenario effect for both I2 and I3. Fig. 7(b) indicates that participants find the hard scenario more difficult to perform, affirming H4, and that they pay more attention to the robot's actions in it.

Safety Perception. We use a Wilcoxon signed-rank test to compare the I4 scale item and find no significant effect due to supportive actions. Fig. 7(c) shows that participants felt safe with both robot modes in our experiment.

Summary. Participants prefer the supportive robot as their coworker in the hard scenario, affirming H3. Participants also find the hard scenario more difficult and pay more attention to the robot in it, supporting H4.

X Discussion

One might think that moving the human's blocks closer to them would by itself cause people to perceive the robot as helpful and inflate the supportive robot's rated proficiency. However, our results, which show that the supportive robot is preferred only in the hard scenario, provide evidence that the human's preference depends on how well the robot's action selection suits the task.

Our results show that supportive actions do not reduce safety stops in the easy scenario. Safety stops are overlaps in agent trajectories and can be caused by unavoidable conflicts between agent goals, uncertainty about each other's goals, sensor error, and other factors. We label a configuration as hard due to the presence of more goal conflicts; the label does not account for other sources of overlap. The supportive actions in this work were designed to reduce goal conflicts, and they led to fewer stops in the hard task, but they will need to be adapted to other sources of conflict to be effective in other scenarios.

Hoffman [9] found that collaborative fluency does not track task-efficiency in team tasks. Ours is not a team task; nevertheless, our results also show coworker acceptance to be separate from either agent's task-efficiency. We find that supportive actions increase coworker acceptance but reduce robot efficiency, presenting a trade-off that needs to be considered when designing robot behaviors. For example, if a robot is introduced into a manual process to relieve humans of repetitive tasks and increase their job satisfaction, then its acceptance might play a more important role than its efficiency. Our methodology helps highlight this trade-off by combining the subjective and objective impact of supportive robot behaviors and is applicable to other shared-workspace human-robot environments; we consider this methodology one of the contributions of our work.

XI Conclusion and Future Work

We introduce interaction-supporting actions and design robot behavior that selects between these and task-oriented actions by considering both the human's goals and its own. We implement this behavior on an autonomous robot and evaluate it in a shared-workspace user study. The results show that this robot increases human coworker preference in a scenario with more goal conflicts but decreases efficiency, as compared to a robot that takes only task-oriented actions.

Our study illustrates taking actions that support the interaction, at some cost to efficiency, in an assembly-style task. Although the rationale from Sec. IV can help guide adaptation to new domains, the actions themselves are applicable only to similar scenarios. In future work, we plan to develop a framework for supportive behavior that performs this reasoning from task-specific cost functions.

Participants took very few supportive actions towards the robot; we believe their unfamiliarity with the task caused uncertainty about which actions were allowed. An interesting extension would be to apply this approach to an actual manufacturing task, with subjects who are familiar with it, to test the generalizability of our findings. We could also improve task naturalness by increasing the robot's speed, which would require better sensors and better models for human motion prediction.

References

  • [1] G. Bradski and A. Kaehler (2008) Learning OpenCV: Computer Vision with the OpenCV Library. Cited by: §VIII-A.
  • [2] A. Cherubini, R. Passama, A. Crosnier, A. Lasnier, and P. Fraisse (2016) Collaborative manufacturing with physical human–robot interaction. Robotics and Computer-Integrated Manufacturing 40, pp. 1–13. Cited by: §II.
  • [3] R. Diankov (2010-08) Automated construction of robotic manipulation programs. Ph.D. Thesis, Carnegie Mellon University, Robotics Institute. Cited by: §V.
  • [4] E. Fehr and K. M. Schmidt (2001) Theories of fairness and reciprocity-evidence and economic applications. Cited by: §I.
  • [5] V. Gabler, T. Stahl, G. Huber, O. Oguz, and D. Wollherr (2017) A game-theoretic approach for adaptive action selection in close proximity human-robot-collaboration. In 2017 IEEE International Conference on Robotics and Automation (ICRA), Cited by: §I, §II.
  • [6] M. C. Gombolay, R. A. Gutierrez, S. G. Clarke, G. F. Sturla, and J. A. Shah (2015) Decision-making authority, team efficiency and human worker satisfaction in mixed human–robot teams. Autonomous Robots 39 (3), pp. 293–312. Cited by: §II.
  • [7] K. P. Hawkins, S. Bansal, N. N. Vo, and A. F. Bobick (2014) Anticipating human actions for collaboration in the presence of task and sensor uncertainty. In 2014 IEEE International Conference on Robotics and Automation (ICRA), Cited by: §II.
  • [8] G. Hoffman (2007) Ensemble: fluency and embodiment for robots acting with humans. Ph.D. Thesis, Massachusetts Institute of Technology. Cited by: §I.
  • [9] G. Hoffman (2019) Evaluating fluency in human–robot collaboration. IEEE Transactions on Human-Machine Systems 49 (3), pp. 209–218. Cited by: §X, §VII-D, §IX-A.
  • [10] International Federation of Robotics (IFR) (2019) IFR press release. Note: https://ifr.org/ifr-press-releases/news/robot-investment-reaches-record-16.5-billion-usd, Last accessed on 2019-10-01 Cited by: §I.
  • [11] H. S. Koppula and A. Saxena (2015) Anticipating human activities using object affordances for reactive robotic response. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (1), pp. 14–29. Cited by: §II.
  • [12] S. M. LaValle (1998) Rapidly-exploring random trees: a new tool for path planning. Cited by: §V.
  • [13] S. Li and J. A. Shah (2019) Safe and efficient high dimensional motion planning in space-time with time parameterized prediction. In 2019 International Conference on Robotics and Automation (ICRA), Cited by: §II.
  • [14] J. Mainprice, R. Hayne, and D. Berenson (2016) Goal set inverse optimal control and iterative replanning for predicting human reaching motions in shared workspaces. IEEE Transactions on Robotics 32 (4), pp. 897–908. Cited by: §I, §II.
  • [15] S. Nikolaidis, R. Ramakrishnan, K. Gu, and J. Shah (2015) Efficient model learning from joint-action demonstrations for human-robot collaborative tasks. In ACM/IEEE International Conference on Human-Robot Interaction, Cited by: §II.
  • [16] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Ng (2009) ROS: an open-source Robot Operating System. Cited by: §VIII.
  • [17] R. J. Stout, J. A. Cannon-Bowers, E. Salas, and D. M. Milanovich (1999) Planning, shared mental models, and coordinated performance: an empirical link is established. Human Factors 41 (1), pp. 61–71. Cited by: §I.
  • [18] I. A. Şucan, M. Moll, and L. E. Kavraki (2012) The Open Motion Planning Library. IEEE Robotics & Automation Magazine. Cited by: §V.
  • [19] I. Sucan and S. Chitta (2019) MoveIt motion planning framework. Cited by: §VIII-B.
  • [20] V. V. Unhelkar, H. C. Siu, and J. A. Shah (2014) Comparative performance of human and mobile robotic assistants in collaborative fetch-and-deliver tasks. In ACM/IEEE International Conference on Human-Robot Interaction (HRI), Cited by: §I.
  • [21] K. S. Welfare, M. R. Hallowell, J. A. Shah, and L. D. Riek (2019) Consider the human work experience when integrating robotics in the workplace. In 2019 ACM/IEEE International Conference on Human-Robot Interaction (HRI), Cited by: §I.