Towards Effective Human-AI Teams: The Case of Human-Robot Packing

September 14, 2019 · Gilwoo Lee et al.

We focus on the problem of designing an artificial agent capable of assisting a human user in completing a task. Our goal is to guide human users towards optimal task performance while keeping their cognitive load as low as possible. Our insight is that in order to do so, we should develop an understanding of human decision making in the task domain. In this work, we consider the domain of collaborative packing, and as a first step, we explore the mechanisms underlying human packing strategies. We conduct a user study in which human participants complete a series of packing tasks in a virtual environment. We analyze their packing strategies and discover that they exhibit specific spatial and temporal patterns (e.g., humans tend to place larger items into corners first). Our insight is that imbuing an artificial agent with an understanding of this spatiotemporal structure will enable improved assistance, reflected both in task performance and in human perception of the AI agent. Ongoing work involves the development of a framework that incorporates the extracted insights to predict and manipulate human decision making towards efficient task completion under low cognitive load. A follow-up study will evaluate our framework against a set of baselines featuring distinct assistance strategies. Our eventual goal is the deployment and evaluation of our framework on an autonomous robotic manipulator that actively assists users with a packing task.


Introduction

We consider the general scenario in which a human and an artificial agent collaborate to jointly complete a task. Depending on the domain, the capabilities of the two parties may greatly differ: the AI agent may possess superior reasoning abilities, whereas the human may have superior perceptual and control abilities. An emerging area of research looks at the development of frameworks that enable effective combined performance by leveraging the strengths of both parties while ensuring human comfort. Our insight is that to achieve this goal, the artificial agent needs to reason about the decision making of its human counterpart, and even intervene to guide their behavior towards efficiency. In many cases, this intervention cannot be realized physically due to hardware or software limitations. In such cases, it is critical that the AI conveys its intentions implicitly, through its actions.

In this work, we consider a scenario of collaborative packing, in which a human places a set of objects in a container under the assistance of a recommender system. This application is of particular relevance today, given the increasing presence of AI systems in logistics. Achieving adequate spatial efficiency in packing is an anecdotally hard problem for non-expert humans, whereas manipulation remains a major challenge for robots. Conversely, AI systems feature superior planning capabilities, and humans are equipped with unparalleled manipulation capabilities. An efficiently combined collaborative effort, leveraging the strengths of both, could thus achieve increased overall performance.

As a first step towards the outlined vision, we seek to understand the domain of packing, focusing on the strategies that humans employ when completing a packing task. To do so, we conduct an online user study in which we ask human subjects to complete a series of packing tasks in a virtual environment. Each task involves the placement of a different set of 2-dimensional objects inside a packing container. Our findings suggest that human packing strategies in this domain can largely be classified into a set of distinct categories corresponding to different spatiotemporal patterns of placement. We analyze our findings and discuss the development of a planning-under-uncertainty framework targeted towards ensuring improved efficiency and low cognitive load for humans in collaborative packing scenarios.

Related work

The concept of human-AI teaming is gaining popularity, as combining the strengths of humans and AI systems opens promising avenues for a variety of fields and applications [15, 20, 3, 16]. Naturally, the problem of enabling seamless, natural, and efficient collaboration in human-AI teams has received considerable attention in recent years, with researchers focusing on different aspects of the interaction, such as the powerful communicative impact of actions performed in a shared context [21] or the tradeoffs between performance gains and compatibility with existing human mental models [1].

For a series of applications, transferring the benefits of human-AI teaming to the physical environment implies embodiment in robot platforms. Human-robot teaming has a unique potential for a variety of applications, given that robots can be both intelligent and physically capable; the combination of their capabilities with those of humans may result in performance standards that neither party could otherwise achieve in isolation [12]. Examples include fast task completion in sequential manipulation tasks [11] and improved performance through intelligent resource allocation to human participants [14].

A common complication in such applications is that explicit communication between the human and the robot is often not feasible, effective, or desired. Therefore, in order to be of assistance, the robot needs to infer the intentions of the user implicitly, through observation of their actions, and to clearly communicate its own intentions through its own actions [17]. A paradigm of particular relevance in this domain is shared autonomy, in which a robot assists a human user in completing their task. In a variety of applications, inferring and adapting to human intentions has been shown to be both effective and positively perceived by users [5, 19, 9, 13]. Furthermore, understanding the mechanisms underlying human decision making in a particular domain has been shown to yield performance improvements and positive impressions in such joint tasks [25, 24]. Finally, explicitly collaborative tasks such as collaborative manipulation [6, 23] and assembly [18], as well as implicitly collaborative tasks such as social navigation [22], benefit significantly from the incorporation of models of human inference.

In this work, we consider a collaborative task (packing), performed jointly by a human and an AI agent. We also consider a setting of implicit communication, in the sense that human intentions are not directly observed and need to be inferred. Our first step towards approaching this scenario is to understand the domain by collecting and analyzing human data.

Study design

Our study was conducted through an interactive web application. Participants were recruited online, through the Amazon Mechanical Turk platform [4]. Each participant was assigned the same set of 65 packing tasks, presented in random order. These tasks involved the placement of sets of 4-8 rectangular objects of different sizes inside a rectangular container of fixed size.

Interface

The web interface depicts a set of rectangular objects of various dimensions, alongside a rectangular container, from a top view (see Fig. 1). Participants were instructed to sequentially place all of these objects, at locations of their choice, inside the container. Once an object is placed inside the container, it cannot be moved; participants are thus forced to decide judiciously where to place each object. The interface comprises three buttons (see Fig. 1): (a) a “Proceed” button for advancing to the next task; (b) a “Reset” button for restoring a task to its initial state, for cases in which participants’ earlier decisions left them unable to fit all objects in the container; (c) a “Skip” button for proceeding to the next task without completing the current one. Since at this stage we were interested in understanding the domain of packing rather than measuring participants’ performance, we allowed resetting without penalty. We disincentivized skipping by placing a sad smiley face on the “Skip” button, and incentivized completion by placing a happy face on the “Proceed” button.

Figure 1: Study interface. The user sequentially drags and drops the objects shown on the right side into the container depicted on the left side. When done, the user may proceed to the next task by clicking the left button. The user may also reset the task to its initial state by hitting the Reset button, or proceed without completing the task (right button).
(a) Example packing instance
(b) Spatial clusters
(c) Temporal patterns of first 2 items
Figure 2: Highlights from our study. We found a few strong spatio-temporal patterns in people’s packing styles, such as placing large items into corners or packing larger items before smaller ones. (a) An example packing instance from a user, who packed in the order A-C-B-D-E. (b) Spatial clusters of configurations for the items in (a). Because people tend to put larger items into corners, the final configurations can be clustered into a few spatial patterns; for this task, four strong spatial patterns emerge. (c) Temporal patterns of the first two items for this task. Nearly everyone chose the largest item (A) first, and 85% then picked one of the two second-largest items (B or C).

Generation of Packing Tasks

While we could generate arbitrarily easy or complex packing tasks by preparing a large container and many small items that can be packed in many different ways, such problem instances would not help us observe any discernible patterns that humans may naturally exhibit. In order to identify spatio-temporal patterns in packing, we designed our packing tasks such that each task satisfies the following conditions:

  • At least 70% of the container is filled with the items.

  • There is a finite number of clusters of spatially feasible solutions.

By committing to these conditions, we limit the space of feasible solutions. Our expectation is that even among the finite set of spatially feasible clusters, people’s packing styles may fall into only a subset of the clusters, or that they may exhibit strong spatio-temporal patterns that become apparent only in this confined setting.

In order to generate problems that satisfy these two conditions, we fix the size of the container, randomly generate items of various sizes, and attempt to place them multiple times. If more than 70% of the container is filled with 4-8 items, we test the second condition: we empty the container and randomly place the same items over multiple trials. If we can generate more than 50 distinct collision-free configurations, we run PCA on the configurations. Each configuration is represented as a vector in which the i-th pair of entries holds the (x, y) coordinates of the i-th item. We project the configurations onto the first two principal components, visually check for discernible clusters as in Fig. 2(b), and keep the task only if such clusters are found. In total, we generated 65 tasks.
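
The following Python sketch makes this generation loop concrete. The container size, item dimensions, and attempt counts are illustrative assumptions (the paper reports only the 70% fill and 50-configuration thresholds), and the helper routine is our own, since the authors' implementation is not published.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
CONTAINER = (10, 10)   # assumed container size; the paper fixes it but does not report it
FILL_THRESHOLD = 0.70  # condition 1: at least 70% of the container is filled
MIN_CONFIGS = 50       # condition 2: more than 50 collision-free configurations

def try_random_pack(items, attempts=200):
    """Place items sequentially at random positions; return the per-item
    (x, y) corners if a collision-free, in-bounds packing is found."""
    placed = []
    for w, h in items:
        for _ in range(attempts):
            x = rng.integers(0, CONTAINER[0] - w + 1)
            y = rng.integers(0, CONTAINER[1] - h + 1)
            if all(x + w <= px or px + pw <= x or
                   y + h <= py or py + ph <= y
                   for px, py, pw, ph in placed):
                placed.append((x, y, w, h))
                break
        else:
            return None  # this item could not be placed collision-free
    return [(x, y) for x, y, _, _ in placed]

def generate_candidate_task():
    n_items = rng.integers(4, 9)  # 4-8 items per task
    items = [(rng.integers(2, 7), rng.integers(2, 7)) for _ in range(n_items)]
    if sum(w * h for w, h in items) < FILL_THRESHOLD * CONTAINER[0] * CONTAINER[1]:
        return None  # condition 1 violated
    configs = [c for c in (try_random_pack(items) for _ in range(500)) if c]
    if len(configs) < MIN_CONFIGS:
        return None  # condition 2 violated
    X = np.array([np.ravel(c) for c in configs])  # one row per configuration
    proj = PCA(n_components=2).fit_transform(X)   # inspected visually for clusters
    return items, proj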

Dataset & Analysis

In total, we had 100 participants through the Amazon Mechanical Turk platform. The participants were between 18 and 65 years old. Each participant was given the 65 tasks in a randomly generated order and was asked to solve them within an hour. On average, participants took 40 minutes to solve the tasks. For each task, we recorded the ordering and placements of the items in the box.

The collected data is grouped per task. For each task, we spatially cluster the final configurations in the same way we validated the tasks during generation, using PCA. For temporal patterns, we take the first two items placed and compare the frequency of each ordered pair.
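
A minimal sketch of this per-task analysis is shown below. The paper identifies the spatial clusters visually; the k-means step here is our own concretization, and the data layout (one flattened row of final (x, y) coordinates per participant, plus per-participant placement orders) is an assumption.

from collections import Counter

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def analyze_task(final_configs, orders, n_clusters=4):
    """final_configs: array of shape (n_participants, 2 * n_items) holding each
    participant's final item coordinates; orders: per-participant item orders."""
    # Spatial patterns: project final configurations onto the first two
    # principal components, then group them into clusters.
    proj = PCA(n_components=2).fit_transform(np.asarray(final_configs))
    spatial_labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(proj)
    # Temporal patterns: frequency of each ordered pair of first placements,
    # e.g., Counter({('A', 'C'): 48, ('A', 'B'): 37, ...}).
    first_pairs = Counter(tuple(order[:2]) for order in orders)
    return spatial_labels, first_pairs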

Discussion

The findings of this study illustrate our extracted knowledge about the particular domain under consideration, i.e., that of 2-dimensional packing. We discovered that human packing strategies in this domain tend to follow specific spatiotemporal patterns. Fig. 2(b) illustrates four distinct spatial clusters of object placements that emerged within the packing task shown in Fig. 2(a). For the same task, Fig. 2(c) describes the frequency of the different temporal patterns that emerged. Qualitative examination of the discovered patterns revealed interesting trends, such as the placement of larger objects at the beginning of the task and the placement of larger objects in the corners (see Fig. 2). Despite the existence of such trends, subjects exhibited a variety of different strategies. Identifying and adapting to observed packing strategies online could enable an artificial agent to assist a human user effectively.

Ongoing & Planned Work

Our key insight is that understanding the mechanisms underlying human decision making could enable an artificial agent to provide effective assistance, yielding improved task performance and reduced cognitive load for human users. Some domains can be particularly challenging for humans, for reasons related to the limits of human computational abilities. For example, in the packing domain, the limited human planning horizon and limited spatial reasoning can greatly affect task performance and impose an undesirable mental load. In fact, packing can be cast as the knapsack optimization problem, which is known to be NP-hard [8]. We expect that an AI agent that understands both the knapsack problem and the mechanisms underlying human decision making could provide precisely this kind of effective assistance. Ongoing work involves the development of a planning framework that will allow us to test this insight through a follow-up user study.
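
For intuition on the combinatorial reasoning an AI assistant can contribute, the textbook 0/1 knapsack dynamic program below solves moderate instances exactly. This is a generic illustration of the cited problem, not a component of our framework.

def knapsack(values, weights, capacity):
    """Classic 0/1 knapsack: maximum total value achievable within capacity.
    Runs in O(n * capacity) time, i.e., pseudo-polynomial in the input size."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # descend so each item is used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Example: which items to keep when not everything fits.
print(knapsack(values=[6, 10, 12], weights=[1, 2, 3], capacity=5))  # -> 22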

A Framework for Packing Assistance

We are currently incorporating the findings of the presented study into a framework for planning under uncertainty in collaborative packing tasks. In particular, we are adapting the Bayesian Reinforcement Learning (BRL) framework of Lee et al. (2018) to reason about uncertainty over human packing strategies. BRL is a reinforcement learning framework that incorporates a mechanism for reasoning about model uncertainty. It models the problem as a Bayes-Adaptive Markov Decision Process (BAMDP) [7], explicitly representing uncertainty as a belief over a latent variable that enters the transition function and the reward function. Overall, BRL maximizes the expected discounted reward under this uncertainty. We believe that this mechanism is of particular relevance and value in problems involving human interaction, where the uncertainty is typically over the human mental models underlying decision making.
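
In standard BAMDP notation (a sketch using the usual textbook formulation, not necessarily the authors' exact one), with a latent variable \theta capturing the human's packing strategy, the planner solves:

% b is the belief over the latent variable \theta; b' is its Bayes update
% after observing the transition (s, a, s').
V^*(s, b) = \max_{a \in A} \, \mathbb{E}_{\theta \sim b} \Big[ R(s, a, \theta)
  + \gamma \, \mathbb{E}_{s' \sim T(\cdot \mid s, a, \theta)} \, V^*(s', b') \Big],
\qquad b'(\theta) \propto b(\theta) \, T(s' \mid s, a, \theta).

Maximizing this value function automatically trades off information gathering (actions that disambiguate \theta) against exploitation of the current belief.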

For our task domain, we are incorporating a belief distribution over the human user’s object placement, given the container configuration and the object’s shape. We plan to use the collected human dataset to learn this predictive model. During execution, our framework will act as a recommender system, providing online recommendations to the human user.
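
The sketch below illustrates the intended recommender loop under strong simplifying assumptions: each strategy cluster from the study is summarized by a single predictive density over normalized placement locations (a Gaussian here, purely for illustration; the actual model will be learned from the dataset and will condition on the container configuration and object shape), and all class and function names are hypothetical.

import numpy as np

class Strategy:
    """A strategy cluster, summarized by a density over placement locations."""
    def __init__(self, mean, cov):
        self.mean = np.asarray(mean)
        self.cov = np.asarray(cov)

    def density(self, placement):
        d = np.asarray(placement) - self.mean
        inv = np.linalg.inv(self.cov)
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(self.cov)))
        return norm * np.exp(-0.5 * d @ inv @ d)

class PackingRecommender:
    def __init__(self, strategies, prior):
        self.strategies = strategies
        self.belief = np.asarray(prior, dtype=float)  # belief over clusters

    def observe(self, placement):
        # Bayes update: weight each cluster by the likelihood of the
        # placement the user actually chose, then renormalize.
        lik = np.array([s.density(placement) for s in self.strategies])
        self.belief = self.belief * lik
        self.belief /= self.belief.sum()

    def recommend(self, candidates):
        # Suggest the candidate with the highest expected density under the
        # current belief. A full planner would also weigh spatial efficiency.
        scores = [sum(b * s.density(c)
                      for b, s in zip(self.belief, self.strategies))
                  for c in candidates]
        return candidates[int(np.argmax(scores))]

# Usage: two clusters ("corner-first" vs. "center-first"), one observation.
rec = PackingRecommender(
    strategies=[Strategy([0.1, 0.1], 0.02 * np.eye(2)),   # corner-first
                Strategy([0.5, 0.5], 0.02 * np.eye(2))],  # center-first
    prior=[0.5, 0.5])
rec.observe([0.15, 0.12])                         # user placed an item near a corner
print(rec.recommend([[0.12, 0.08], [0.5, 0.5]]))  # -> [0.12, 0.08]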

Planned User Study

To formally investigate our outlined insight, we design an online user study in which human subjects are exposed to a set of conditions (within-subjects) corresponding to different modes of AI assistance. More specifically, we consider the following set of conditions:

  1. No recommendation – the user completes the task without receiving any assistance.

  2. The system provides object recommendations, i.e., assists by manipulating the order of object placements.

  3. The system provides both order and placement recommendations.

  4. The system provides random object recommendations.

  5. The system provides random order and random placement recommendations.

We hypothesize that the assistive conditions will yield improved task performance compared to the condition of no assistance, as well as more positive human ratings and reduced reported cognitive load. As performance metrics, we consider time to completion and spatial efficiency. After each condition, we will collect ratings of perceived system intelligence, likeability, and predictability, based on the Godspeed questionnaire [2], to understand how participants perceive the considered conditions. We will measure the cognitive load associated with each condition through a questionnaire based on the NASA-TLX [10]. Finally, participants will be given an open-form question asking them to share qualitative feedback of their choice regarding their interaction with the system.

Acknowledgments

Gilwoo Lee is partially supported by the Kwanjeong Educational Foundation. This work was partially funded by the Honda Research Institute USA, the National Science Foundation NRI (award IIS-1748582), and the Robotics Collaborative Technology Alliance (RCTA) of the United States Army Research Laboratory.

References

  • [1] G. Bansal, B. Nushi, E. Kamar, D. Weld, W. Lasecki, and E. Horvitz (2019) Updates in human-AI teams: understanding and addressing the performance/compatibility tradeoff. In Proceedings of the AAAI Conference on Artificial Intelligence.
  • [2] C. Bartneck, E. Croft, and D. Kulic (2009) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics 1 (1), pp. 71–81.
  • [3] M. Bayati, M. Braverman, M. Gillam, K. M. Mack, G. Ruiz, M. S. Smith, and E. Horvitz (2014) Data-driven decisions for reducing readmissions for heart failure: general methodology and case study. PLOS ONE 9 (10), pp. 1–9.
  • [4] M. Buhrmester, T. Kwang, and S. D. Gosling (2011) Amazon’s Mechanical Turk: a new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science 6 (1), pp. 3–5.
  • [5] A. D. Dragan and S. S. Srinivasa (2013) A policy-blending formalism for shared control. The International Journal of Robotics Research 32 (7), pp. 790–805.
  • [6] A. D. Dragan and S. Srinivasa (2014) Integrating human observer inferences into robot motion planning. Autonomous Robots 37 (4), pp. 351–368.
  • [7] M. O. Duff (2002) Optimal learning: computational procedures for Bayes-adaptive Markov decision processes. Ph.D. thesis, University of Massachusetts Amherst.
  • [8] M. R. Garey and D. S. Johnson (2002) Computers and Intractability. Vol. 29, W. H. Freeman, New York.
  • [9] D. Gopinath, S. Jain, and B. D. Argall (2017) Human-in-the-loop optimization of shared autonomy in assistive robotics. IEEE Robotics and Automation Letters 2 (1), pp. 247–254.
  • [10] S. G. Hart and L. E. Staveland (1988) Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. In Human Mental Workload, P. A. Hancock and N. Meshkati (Eds.), Advances in Psychology, Vol. 52, pp. 139–183.
  • [11] B. Hayes and B. Scassellati (2015) Effective robot teammate behaviors for supporting sequential manipulation tasks. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6374–6380.
  • [12] G. Hoffman and C. Breazeal (2004) Collaboration in human-robot teams. In Proceedings of the AIAA Intelligent Systems Technical Conference, pp. 6434.
  • [13] S. Javdani, H. Admoni, S. Pellegrinelli, S. S. Srinivasa, and J. A. Bagnell (2018) Shared autonomy via hindsight optimization for teleoperation and teaming. The International Journal of Robotics Research 37 (7), pp. 717–742.
  • [14] M. F. Jung, D. DiFranzo, B. Stoll, S. Shen, A. Lawrence, and H. Claure (2018) Robot-assisted tower construction: a resource distribution task to study human-robot collaboration and interaction with groups of people. arXiv preprint arXiv:1812.09548.
  • [15] E. Kamar, S. Hacker, and E. Horvitz (2012) Combining human and machine intelligence in large-scale crowdsourcing. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 467–474.
  • [16] E. Kamar (2016) Directions in hybrid intelligence: complementing AI systems with human intelligence. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pp. 4070–4073.
  • [17] R. A. Knepper, C. I. Mavrogiannis, J. Proft, and C. Liang (2017) Implicit communication in a joint action. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 283–292.
  • [18] R. A. Knepper, S. Tellex, A. Li, N. Roy, and D. Rus (2015) Recovering from failure by asking for help. Autonomous Robots 39 (3), pp. 347–362.
  • [19] M. Kuderer, C. Sprunk, H. Kretzschmar, and W. Burgard (2014) Online generation of homotopically distinct navigation paths. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 6462–6467.
  • [20] W. S. Lasecki, J. P. Bigham, J. F. Allen, and G. Ferguson (2012) Real-time collaborative planning with the crowd. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 2435–2436.
  • [21] C. Liang, J. Proft, E. Andersen, and R. A. Knepper (2019) Implicit communication of actionable information in human-AI teams. In Proceedings of the CHI Conference on Human Factors in Computing Systems, pp. 95:1–95:13.
  • [22] C. Mavrogiannis, A. M. Hutchinson, J. Macdonald, P. Alves-Oliveira, and R. A. Knepper (2019) Effects of distinct robot navigation strategies on human behavior in a crowded environment. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 421–430.
  • [23] S. Nikolaidis, D. Hsu, and S. Srinivasa (2017) Human-robot mutual adaptation in collaborative tasks: models and experiments. The International Journal of Robotics Research 36 (5-7), pp. 618–634.
  • [24] S. Nikolaidis, R. Ramakrishnan, K. Gu, and J. Shah (2015) Efficient model learning from joint-action demonstrations for human-robot collaborative tasks. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 189–196.
  • [25] S. Nikolaidis and J. Shah (2013) Human-robot cross-training: computational formulation, modeling and evaluation of a human team training strategy. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 33–40.