Object Handovers: a Review for Robotics

07/25/2020, by Valerio Ortenzi et al.

This article surveys the literature on human-robot object handovers. A handover is a collaborative joint action in which an agent, the giver, passes an object to another agent, the receiver. The physical exchange starts when the receiver first contacts the object held by the giver and ends when the giver fully releases the object to the receiver. However, important cognitive and physical processes begin before the physical exchange, including reaching implicit agreement on the location and timing of the exchange. From this perspective, we structure our review into the two main phases delimited by the aforementioned events: 1) a pre-handover phase, and 2) the physical exchange. We focus our analysis on the two actors (giver and receiver) and report the state of the art of robotic givers (robot-to-human handovers) and of robotic receivers (human-to-robot handovers). We report a comprehensive list of qualitative and quantitative metrics commonly used to assess the interaction. While focusing our review on the cognitive level (e.g., prediction, perception, motion planning, learning) and the physical level (e.g., motion, grasping, grip release) of the handover, we also briefly discuss the concepts of safety, social context, and ergonomics. We compare the behaviours displayed during human-to-human handovers to the state of the art of robotic assistants, and identify the major areas of improvement needed for robotic assistants to reach performance comparable to human interactions. Finally, we propose a minimal set of metrics that should be used in order to enable a fair comparison among approaches.



I Introduction

Recent years have witnessed industry progressing towards a more direct collaboration between humans and robots. The current trend of Industry 4.0 envisions completely shared environments, where robots act on and interact with their surroundings and other agents such as human workers and robots [174, 25], enabled by technological advances in robot hardware [4]. The recent COVID-19 pandemic has also increased the demand for autonomous and collaborative robotics in environments such as care homes and hospitals [215, 231]. Accordingly, Human-Robot Interaction (HRI) features prominently in the robotics roadmaps of Europe, Australia, Japan and the US [213, 14, 219, 55]. The advantages of human-robot teams are multifaceted and include a better deployment of workers, who can focus on tasks requiring advanced manipulation and cognitive skills while repetitive, low-skill, and ergonomically unfavourable tasks are transferred to robot assistants. Effective deployment of robotic assistants can improve both the work quality and the experience of human workers.

The structured nature of traditional industrial settings has eased the use of robots in work cells. However, a similarly successful presence of robots is yet to occur in unstructured environments (i.e., in factories without work cells, in households, in hospitals). For such places, robots need a better understanding of the tasks to perform, a robust perception system to detect and track changes in the surrounding dynamic environment and smart, adaptive action and motion planning that accounts for the changes in the environment [178].

Fig. 1: Example of a direct handover where a robot partner passes a bottle of mustard to a human. As both hands are in contact with the object, this picture shows the physical exchange phase of a handover. We review the literature on the cognitive and physical aspects of object handovers.

Human-robot collaboration and human-robot interaction are frequent keywords in our research community. We refer the reader to [217, 4] for reviews on physical collaboration and to [78, 80] for an overview of the cognitive aspects. Our community has seen an increasing focus on collaborative manipulation tasks [233, 131, 153]. In this context, robots must be capable of exchanging objects for successful cooperation and collaboration in manipulation tasks, as in Fig. 1. For example, consider an assembly task where a human operator has to assemble a complex piece of furniture and requires a tool. The robot assistant should be able to fetch and pass the tool to the human operator. Or consider a service robot handing out flyers to passersby [205] or serving drinks [28]. A further example can be a mechanic asking for a tool while under a car: in this scenario, the motion range of the mechanic is extremely limited and extra care is needed to pass the tool [112]. The action of passing objects is usually referred to as an object handover. More formally, an object handover is defined as the joint action of a giver transferring an object to a receiver. Despite being a frequent collaborative action among humans, a handover is a concerted effort of prediction, perception, action, learning, and adjustment by both agents. The implementation of a human-robot handover that is as efficient and fluent as the exchanges among humans is an open challenge for our community. In this paper, we review the state of the art of robotic object handovers. In particular, we investigate the aspects of the handover interaction that in our opinion require the most effort to enable a more useful and successful functioning of robots, especially in unstructured environments.

We start this paper with a review of the main findings about human-human handovers in Section II. Then, in the following two sections, we refer to each of the two phases of a handover: pre-handover, and physical handover. In Section III we focus on the reasoning and actions of the giver and receiver before the physical exchange of the object, analysing aspects such as communication, grasping, and motion planning and control. Section IV describes the physical exchange of the object, focusing on aspects such as grip modulation and object release. Section V reports a comprehensive list of quantitative and qualitative metrics that are commonly used for assessing handovers. We conclude this review with a discussion identifying directions for future work in Section VI. We further propose a minimal set of metrics to adopt in experimental protocols in order to enable a fair comparison among the different approaches.

II Handover as a joint action

A handover is a joint action between a giver and a receiver. Joint actions are formally defined as [203]

any form of social interaction whereby two or more individuals coordinate their actions in space and time to bring about a change in the environment …successful joint action depends on the abilities (i) to share representations, (ii) to predict actions, and (iii) to integrate predicted effects of own and others’ actions.

Joint actions are much more complicated than individual actions. Social context is shown to modify the plans of actions of an agent [19]. While there is still much to understand and learn about how humans coordinate to meet their final goals, a number of scientific results shed some light on how humans behave during such actions. A minimal architecture for a joint action should include representations, processes like monitoring (feedback) and prediction (feedforward), and coordination [222]. Humans tend to form representations of their own goals and tasks, and potentially also of their partners’ goals and tasks. Then, two processes use those representations: monitoring and prediction. Monitoring is a process to check the advancement of those tasks and goals. Such feedback can be on one’s own task, on the task of the other agent [192] and on the overall goal. Predicting the outcome of one’s own actions and possibly, the other agent’s actions, helps the coordination between the agents. Agents are interested in predicting: the what, i.e., the actions of the other and their goal; the when, i.e., the temporal coordination [40]; and the where, i.e., the spatial distribution of common space [204]. Shared representations help to predict the other’s actions and achieve higher coordination, integrating the what, when and where. Coordination is also increased through joint attention (thus sharing perceptual inputs) [203]. In particular, research has shown that there seem to be similar eye motor programs when performing and observing the same scene [76, 194], thus reinforcing the link between perception and action.

More recently, a Dyadic Motor Plan was proposed in [198]. This plan highlights that joint actions may rely on active prediction not only of the partner's actions, but also of the effects of those actions, in a deeper effort of prediction.

To summarise, during joint actions humans tend to plan their motions considering the partner’s needs and representing and predicting the partner’s actions and their outcomes [229, 198]. For this reason, scientists argue that humans form shared representations of the task to better predict each other’s movements and to act accordingly [191]. Efficiency and social cohesion are also listed as reasons to adopt such shared representations.

Coordination is extremely important for the success of a joint action. There are two types of coordination [110]: planned and emergent. Planned coordination emerges from the representations of the desired outcomes and of one's own tasks and goals. Emergent coordination is independent of joint plans, and arises from perception-action couplings. Entrainment is an example of emergent coordination: two people in rocking chairs involuntarily synchronise [193, 145]. Note that entrainment requires adaptation by both agents: such synchronisation was found not to emerge in the interaction of a human with a non-adaptive robot [133]. Considering these two types of coordination mechanisms, a joint action such as a handover requires the synergetic harmony of planned coordination for the final goal, and of emergent coordination for the real-time aspects of the interaction.

From this perspective, an object handover is a joint action where two agents collaborate to accomplish the transfer of the object from one agent, referred to as the giver, to a second agent, referred to as the receiver. While the two agents share the overall goal of the object transfer, their objectives differ during the interaction [148]. The giver aims to: present the object to the partner in the most appropriate way; hold the object stably until the completion of the physical handover; and finally, release the object to the receiver as safely as possible. Conversely, the receiver aims to: acquire the object by grasping; stabilise the grasp on the object; and finally, following the handover, perform the task the object was required for. It is crucial to remember that, in most cases, the object is passed so that the receiver can perform a certain task. This task might be as simple as placing the object on a table (thus imposing loose constraints on the use of the object); or it might be more complicated, such as turning a key in a keyhole or cutting a piece of paper with a pair of scissors. While such tasks are common in everyday life, they require an appropriate use of the object, i.e., they impose severe constraints on how the object is handled. In the case of cutting with scissors, the scissors have to be grasped by the rings of the handles, and the blades must be used to perform the cutting. The giver should consider the subsequent task that the receiver will perform with the handed-over object, in order to facilitate the task of the receiver [151].

A handover can be divided into two phases [61, 136, 148]. We use the tactile events, control discontinuities, and transitions that characterise any manipulation to delimit each phase [61]. The pre-handover phase includes the explicit and implicit communication between agents, as well as the grasping and transport of the object by the giver. The physical handover begins with the first contact of the receiver's hand on the object, and comes to an end when the giver removes their hand from the object and the object is fully in the hold of the receiver. During these two phases, the agents display different levels of activity with respect to their own tasks and objectives, as shown in Fig. 2.

A handover can be initiated in one of two ways. First, an agent may need an object to perform a certain task (handover by object request). This agent becomes the receiver and requests the object from the giver. The mechanic under a car asking for a tool is a typical example of this type of initiation, as is a cook asking the sous chef for a kitchen tool. Alternatively, a handover can be initiated by an agent asking another to perform a certain task with an object (handover by task request). This agent becomes the giver and gives the object to the receiver. For example, while tidying up a room, an agent can pass an object to another agent so that the latter places it in a certain location; another example is a chef asking the sous chef to stir some sauce in a pan by offering the appropriate kitchen tool.

Once the exchange is initiated, the giver offers the object to the receiver. The physical exchange of the object can be direct or indirect. In a direct handover, the object is passed from the hand of the giver to the hand of the receiver. In a number of situations, as in the example of the mechanic asking for a tool from under a car, a direct handover is also the most immediate solution to pass the requested tool. Alternatively, in an indirect handover, the giver places the object on a surface, e.g., on a table. Indirect handovers allow the receiver greater flexibility in terms of the timing and the grasp used to obtain the object. However, direct handovers can reduce the effort of the receiver in terms of the motions required to obtain the object [54]. In this paper, we focus on direct handovers, as almost all work in this field addresses this type of exchange.

Fig. 2: Timeline of an object handover. For each phase of the handover, this figure describes the giver’s and receiver’s activities; and the giver’s and receiver’s tasks. The initiation occurs either by task request by the giver, or by an object request by the receiver. During the pre-handover phase, both agents prepare for the physical handover. After the receiver has taken full control of the object, the task that initiated the object transfer can be performed.

The physical phase of the handover terminates when the receiver has fully obtained the control of the object. At this stage, the receiver progresses to performing the task that initiated the handover.

The next two sections focus on action and cognition during each of the two phases of the exchange: the pre-handover, and the physical handover phases. In particular, we will bring attention to aspects such as motion planning and control, prediction and communication, object grasping and offering, and modulation of grip forces.

III Pre-handover phase

As we discussed in the previous section, a handover is initiated either by the request for an object or by the request for a task. In both circumstances, the request must be communicated to the other agent. Communication is a foundation for every joint action, and it can occur in various manners. Humans display a wide array of communication skills to help coordinate the what, when and where of a handover. Gaze and oral cues are two common ways for the agents to communicate during this phase. Communication does not happen only explicitly, as in voicing the intent to pass an object, but also through the action itself: motions or gestures during the pre-handover phase can clearly display the giver's intent to hand over the object. Similarly, the way an object is grasped and offered often presents cues about the intent to hand over. In Section III-A we review the most important findings on communication.

Once initiated, a handover enters the preparation phase that leads to the physical exchange of the object. In preparing to offer the object, the giver predicts how the receiver will perform the task the object is being passed for and, given these predictions, how the receiver will want to grasp the object. Using these predictions, the giver plans motions to obtain (grasp) the object if it is not yet grasped, or (if needed) to re-grasp the object to best prepare for the exchange, and then to offer it to the receiver. The giver relies on visual and tactile feedback to perceive and track the object as well as the state of the receiver, i.e., both their position and whether they are ready to receive. During this time, communication signals are constantly exchanged between giver and receiver. The giver then uses this sensor feedback to adjust their motion plans, coupling the feedback with updated predictions of the receiver's behaviour. These updates and adaptations, aided by prediction, perception and learning, are used to control the motions realised to grasp the object and to offer it to the receiver. We delve deeper into grasping and motion planning in Sections III-B and III-E, respectively.

The receiver shows lower activity in this phase. However, the receiver's actions and communication are perceived by the giver, and therefore influence the giver's actions. The attention and state of preparedness of the receiver are important, as they communicate the readiness to receive. Similarly to the giver, the receiver predicts the behaviour of the giver and forms a plan of action. The receiver may move their hand towards the predicted handover location in anticipation of the handover. The receiver's plan and actions are updated using sensory feedback such as vision, touch, and hearing, and also depend on the subsequent task that the receiver will perform with the object. At the end of the pre-handover phase, the receiver has reached for and made contact with the object.

III-A Communication

Communication is crucial in any joint action. Signalling strategies (i.e., communication) aid coordination by improving the partner’s prediction of one’s actions (thus minimising uncertainty) [184]. In particular, communication is used to initiate the action, i.e., to show the intent to start with the action; and then to coordinate the action once it has started [222]. Humans are extremely skilful in communicating their intent (the what, i.e., the action to perform and the object to pass) and expressing cues about the when and the where of a handover [212]. Communication is so important that a handover can be thought of as a physical process (approach, reach, transfer) and a cognitive process to establish what, when and where to pass [212]. These findings indicate that robots also require such communication skills and adaptation capacity in order to match human performance during interaction with a human partner.

Speech¹ can be used to express the intent to hand over an object as well as to coordinate the actions during the exchange. As we discussed earlier in this section, speech can be used to initiate the action by either one or both of the agents, and language use could be considered a form of joint action per se [57]. Thereby, much information can be conveyed to a partner through speech. For example, a robot and a human could have a dialogue to decide their roles during an interaction, and then to coordinate actions [79]. However, the use of speech can also degrade coordination during a joint action when the partners' attention is divided between multiple modalities of sensory communication (visual and auditory in [149]).

¹Interestingly, there is evidence for the embodiment of language, i.e., that the motor system is activated during the comprehension of language [104]. Moreover, there is further evidence of the involvement of the motor system in processing action words such as "kick" and "pick". However, it is not yet clear whether this activation is due to the actual processing of the action words or rather a by-product of imagining the action [227].

In addition to speech, humans use a number of other ways such as body stance and position, arm pose, gestures (with arm and/or hand), and gaze to communicate their intent to hand over an object and when/where the handover will take place. The presentation of an object, such as an extended arm offering the object with the free part towards the receiver, or tilting the object towards the receiver, are configurations that convey the intent to pass an object [60, 36]. Cakmak et al. [36] claim that such anticipation in the behaviour of the agents makes the interactions more fluent. An analysis of kinematic features could lead to an automatic detection of the intent to hand over an object, for example using machine learning classifiers [177]. Another learning-based approach [211] posits that the orientation of a person and joint attention (on the object or on the position where the handover will happen) are important cues for physical interaction. Similarly, statistical models have been used to model the physical aspect of a handover, and then endowed with a higher-level cognitive layer that uses non-verbal cues (head orientation) to better understand the intent of a human receiver to grasp the object [87]. Alterations of common movements and arm trajectories can be used to communicate during joint actions, just as a deviation from the normal pronunciation of a word can [184]. Taking this to the extreme, some movements can be fabricated altogether in order to mislead one's opponent in a competitive joint action, e.g., a footballer's feint [218]. Similarly, robots can devise deceptive motions too [67]. Moreover, the initial pose of a robot receiver can inform the human giver about the geometry of a handover [176]. We will focus more on gestures and motions as action in Section III-E.

Gaze is also a very powerful tool for communicating the intent to act and for coordinating the action. Gaze is the ensemble of eyes, head, and body orientation that reacts to the joint action [159]. Human gaze supports the planning of actions of object manipulation, spotting positions (contact points) to which to direct a grasp [106]. Furthermore, there seems to be a link between action perception and execution. In other words, humans are able to read other people’s action intentions by observing their gaze [42]. Therefore, it is not surprising that during a handover, the use of gaze by a robot positively impacts the interaction, resulting in faster object reaching and a more natural perception of the interaction by the human receivers [209, 155, 81]. Similarly, gaze can have an effect on cooperation also in terms of faster human response times [30]. Interestingly, a deliberate delay in releasing the object by the robot results in an increase of attention to the robot’s head, and also an increase of the compliance with the robot’s suggestions (actualised with the robot’s head motions) [2]. A closely related concept is turn-taking, which helps humans communicate their understanding and control of the turn structure to a conversation partner by using speech, eye gaze, and body language. Turn-taking has been explored in human-robot interaction [52], and it would especially be beneficial for handovers if the scenario involves handovers in both directions: robot-to-human and human-to-robot.

III-B Grasp Planning

As discussed above, during a handover the giver plans their motions taking into account the task of the receiver. In particular, the giver considers how to grasp the object so as to offer it to the receiver in the best way possible, i.e., whenever possible, to minimise object manipulation by the receiver before using the object for its intended purpose [85]. This is an example of second-order planning for object manipulation, which is defined as:

… altering one’s object manipulation behaviour not just on the basis of immediate task demands but also on the basis of the next task to be performed [196].

If the planning takes into account more than two steps, then it is termed higher-order planning. In the case of a handover, the grasp of the giver could also account for the task to be performed by the receiver [151]. In effect, the grasp of the giver influences the grasp of the receiver, as the latter can only grasp the object on the unencumbered portion of the object. The grasp choice of the giver can influence whether a receiver can directly use the object for their task or must re-manipulate the object to be able to use it.

Factors such as object shape, object function and safety are important to consider when planning a grasp for a handover [109]. A human user study showed that participants handing objects over to each other tend to orient some objects differently when explicitly asked to consider the presentation most convenient for the receiver [47]. Similarly, object constraints and the receiver's task are highlighted as key factors in the giver's choice of grasp [56]. In particular, grasp type and grasp location change to facilitate the receiver's grasp on the object. Similar reasoning was already adopted for robot-to-human handovers in [7, 8]. However, the robotic giver acted knowing a priori the 'appropriate' parts of the objects, and the human receiver did not have to perform any subsequent task with the objects. Learning by demonstration was proposed by the same authors as a possible method to further explore the semantic segmentation of objects for grasping [6]. Similarly, learning handover grasp configurations through the observation of human behaviour has been shown to be a viable solution [46]. Using the concept of an affordance axis, a method has been proposed for selecting good handover observation sets to learn grasp configurations [50]; however, while this works well for objects with one main grasp configuration, it is a more challenging problem when the object can be presented in multiple orientations, as the robot needs to see a larger set of possible configurations and then decide which is best in a given situation.

The grasping adaptation performed by the giver is in line with theories that consider grasping an inherently task-oriented or purposive action in humans [160, 71, 70], that involves both sensory and motor control systems [105, 190]. A human study shows that when participants took hold of a vertical cylinder to move it to a new position, grasp heights on the cylinder were inversely related to the height of the target position [58], which is a clear example of adaptation of the grasp to the task.

Humans display a wide range of grasps, and several taxonomies have been proposed to categorise human grasps based on specific aspects such as hand shape on the object, contact points, and pressure [62, 86, 72]. Humans choose their grasp considering many factors [125, 107, 101, 62]: object constraints (e.g., shape, size, and function), gripper constraints (e.g., the human hand or gripper kinematics and the hand or gripper size relative to the object to be grasped), habits of the grasper (e.g., experience and social convention), and environmental factors (e.g., the initial position of the object and environmental constraints [187]). While a successful grasp is usually characterised by stability [24] and/or speed [143], one aspect of robotic grasping that is often overlooked is the task to perform [27] and its requirements in terms of force and mobility [170]. There is further evidence that the reaching movement of the arm and the grasping movement of the fingers may also be influenced by the grasper’s goal [12, 11, 18, 134]. From this perspective, it is not surprising that the intention to cooperate influences the grasp choice during an interaction like an object handover.

As objects are passed to accommodate the receiver’s needs, the functional parts of the objects play a decisive role [82, 53, 29, 202, 216, 103, 173]. Gibson [82] coined the term “affordances” to define the possibilities for action offered by objects and their environment. Norman [168] added a perceptual dimension to the concept of affordance, associating it not only to the agent’s capabilities, but also to their tasks to perform. However, a clear functional part of an object, such as a handle of a screwdriver, can elicit different behaviours in single-agent scenarios and cooperative tasks [200, 228, 56]. For example, a single agent having to tighten a screw will grasp a screwdriver from the handle, whereas a giver wanting to hand over the screwdriver, should grasp it from the metal rod, thus offering the handle to the receiver. While this adaptation is natural to humans (having developed it through understanding and the repetitive use of the object), such understanding is somewhat still to be achieved in robots. A concerted effort in perception and action [26] is needed in order to endow robots with such capabilities [154, 92, 208, 166, 141, 64, 114, 43].

As already established, givers reason about how to grasp the object and where to place their hand on it. Givers consider which area of the object affords the receiver's subsequent task and adapt their grasp strategy accordingly. When the receiver's task has few performance constraints, i.e., when it is as simple as placing the object on a table, the exchange of the object can be more relaxed [56]. However, when the receiver's task requires the use of the object in a very specific way (e.g., cutting a sheet of paper with a pair of scissors), the grasp of the giver usually accounts for the constraints of that task [56]. As with the giver's grasp, different tasks and objects impose different levels of constraint on the grasp that the receiver has to use. A planner for interactive manipulation tasks between robots could potentially account for both the grasp of the robotic giver and the grasp of the robotic receiver, thus enabling both robots to grasp successfully [132]. This approach is hardly extendable to human-robot handovers, as human behaviour is more difficult to model with certainty. To overcome this problem, one option is to probabilistically model the behaviour of the human receiver, accounting for the receiver's ergonomic cost and thus influencing the receiver's grasp [22].
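The giver-side reasoning above can be illustrated with a toy grasp-selection rule: penalise candidate giver grasps that occlude the object region the receiver's task needs. The sketch below is our own illustration, not a method from the surveyed papers; the regions, stability scores, and penalty weight are all hypothetical.

```python
# Toy sketch (our own illustration): choose a giver grasp that keeps the
# object region needed by the receiver's task free. Regions are modelled
# as sets of surface-patch ids; names, scores and weights are hypothetical.

def score_giver_grasp(grasp_region, stability, functional_region):
    """Reward grasp stability (in [0, 1]) and penalise overlap with the
    region the receiver needs for the subsequent task."""
    overlap = len(grasp_region & functional_region) / max(len(functional_region), 1)
    return stability - 2.0 * overlap

# Toy screwdriver: patches 0-4 form the handle the receiver must grasp to
# tighten a screw, patches 5-9 form the shaft.
handle = set(range(5))
candidates = {
    "grasp_handle": (set(range(5)), 0.9),     # stable, but blocks the handle
    "grasp_shaft": (set(range(5, 10)), 0.6),  # less stable, handle stays free
}
best = max(candidates,
           key=lambda g: score_giver_grasp(*candidates[g], functional_region=handle))
```

Under this toy score the giver picks the shaft grasp, mirroring the screwdriver example above; approaches in the literature learn such functional regions from demonstration or observation rather than hand-coding them [6, 46, 50].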

III-C Perception

A major challenge in handovers is reliable perception of the object, of the hands (one's own and the partner's), and of the partner's full-body motion. Some approaches track the object and hand in order to plan the grasp [90, 235, 88], leveraging large datasets for training and the physical relationships between hand and object. During grasping, the hand and object can become severely occluded, and thus harder to track with vision sensors. Alternatively, this problem can be addressed as a grasp classification problem [232], in which common human grasps for the task of human-robot handover are divided into categories such as "waiting" or "lifting", inspired by the human grasp taxonomy [72]. The grasp class information can then be used by a planner to devise the most appropriate approach and grasp strategy for the robot receiver. However, grasp classification suffers the drawback of detecting only a relatively small subset of grasps, thus failing to capture the richness of behaviours displayed by humans. The human body can also be tracked, in addition to the object and the human hand, in order to improve safety [197]. We discuss the safety aspects of handovers in more detail in Section III-E2.

While perception of the human partner’s hand and body provides critical real-time feedback, there have also been efforts to predict the human partner’s motion. Dynamic Movement Primitives (DMPs) [102, 201] have been used successfully to predict human motion (point attractor and time scale, corresponding to handover location and time), coupled with an Extended Kalman Filter [226]. Real-time estimation of human motion can also leverage the concept of minimum jerk trajectories [138]. The minimum jerk model can be used in conjunction with regressors to predict when and where a human giver will transfer an object [130], and with a Semi-Adaptable Neural Network to predict human arm motion [121]. Gaussian Processes can also be used to proactively estimate human motion for handovers [126]. Luo et al. [135] propose a 2-layer framework using Gaussian Mixture Models and Gaussian Mixture Regression to represent and predict human reaching motions.
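As an illustration of the minimum-jerk prediction idea, the sketch below fits the end point and duration of a minimum-jerk reach to a partially observed trajectory by grid search; this simple formulation and all parameter values are our own, not the regressors used in the cited works.

```python
import numpy as np

def min_jerk(x0, xf, T, t):
    """Minimum-jerk position profile from x0 to xf over duration T."""
    tau = np.clip(t / T, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def predict_handover(t_obs, x_obs, xf_grid, T_grid):
    """Fit the end point xf and duration T of a minimum-jerk reach to a
    partially observed trajectory by grid search, yielding a prediction
    of where and when the giver will present the object."""
    x0, best, best_err = x_obs[0], None, np.inf
    for xf in xf_grid:
        for T in T_grid:
            err = np.sum((min_jerk(x0, xf, T, t_obs) - x_obs) ** 2)
            if err < best_err:
                best, best_err = (xf, T), err
    return best
```

For example, observing only the first 40% of a synthetic reach is enough for this fit to recover the reach’s end point and duration.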

III-D Handover Location

The handover must occur in a location that is reachable by both agents. The handover location therefore deserves thorough analysis. Human-human handovers have been shown to occur roughly midway between giver and receiver [89]. Thus, the interpersonal distance between the agents has a fundamental influence on the location of the handover and on the height of the point of exchange [162]. Conversely, the object mass seems not to affect the location of the exchange, but rather its duration. Leveraging this notion, a task-specific interaction workspace can be built as the intersection of the spaces that can be accessed by robot and human [220]. Information such as the effort needed by the human to reach a certain location can be used online to shape the interaction workspace and plan the robot’s movements. Similarly, handover locations can account for biomechanical properties of the human receiver, such as height, weight, strength and range of motion [214]. These considerations of the biomechanical properties of the human partner are especially critical when environmental or task constraints limit the motion of the human (as for a mechanic under a car) and when the human is motor-impaired. Furthermore, optimising the robot’s motions over safety, acceptability and task constraints can help improve the posture of the human receiver [34], thus decreasing the chances of musculoskeletal disorders and discomfort [188]. Human mobility can also be accounted for while planning, to devise different paths for the robot to the handover location [144, 224]. Incorporating models of the kinematics and dynamics of the body of the human receiver can yield handover locations that are more acceptable to the human partner [179]. Finally, the human arm manipulability can also be embedded in an optimisation framework to reduce muscular strain [182, 183].
All these considerations point to the fact that the handover location should not be fixed a priori, which would force the human to adapt to the robot. Instead, the location should be planned considering all the factors mentioned above, and then potentially modified online, for example if the environment is dynamic and the human agent moves, thus adapting to the human.
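The idea of planning (rather than fixing) the handover location can be illustrated with a toy cost function; the midpoint and comfort-height terms, and their weights, are illustrative assumptions on our part, not a model from the cited papers.

```python
import numpy as np

def choose_handover_point(human_pos, robot_pos, candidates,
                          comfort_height=1.1, w_mid=1.0, w_ergo=0.5):
    """Score candidate 3-D handover locations: favour points near the
    midpoint between the agents (as observed in human-human handovers)
    and near a comfortable height for the receiver, then pick the
    cheapest candidate."""
    mid = (human_pos + robot_pos) / 2.0
    costs = (w_mid * np.linalg.norm(candidates - mid, axis=1)
             + w_ergo * np.abs(candidates[:, 2] - comfort_height))
    return candidates[np.argmin(costs)]
```

Because the cost is re-evaluated over whatever candidates are currently reachable, the same scoring can be re-run online as the human moves.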

III-E Motion Planning and Control

During a joint action, the movements of the agents simultaneously actualise the physical joint action and signal important information for coordination. Movements during human-human handovers are generally smooth, rather than being split into separate and successive phases [17]. For example, receivers usually start the reaching movement toward the giver while the giver reaches out for the receiver (in a concurrent motion) [230, 150]. As such, the dominant aspects of successful movements in the context of a joint action like a handover are: legibility, predictability, safety, robustness, reactivity, and context awareness.

III-E1 Legibility and Predictability

Legibility and predictability relate to how easy it is for one agent to understand and predict the other agent’s movements. Albeit similar, legibility and predictability are not synonyms [66]. Using a psychological interpretation of actions, legibility is a characteristic of motion that enables an observer to infer the goal (action-to-goal). On the other hand, predictability is a characteristic of motion that matches what an observer expects given the knowledge of the goal (goal-to-action). By this definition, motions of collaborative robots must be legible, thus allowing the partner to quickly and reliably predict the goal of the actions of the robot. Interestingly, humans prefer robot configurations that are more natural or human-like as they are more readable [35]. Inverse kinematics algorithms mapping Cartesian motions to the robot’s joint space can also aim at devising overall movements for the robot that are legible to the human partner [13, 171].

III-E2 Safety

The safe planning of motions while approaching a human partner is also a critical aspect of a joint action. In the context of a handover, safety is a multi-faceted concept that revolves around the physical safety of the human partner. Safety can be ensured (or achieved) through software and/or hardware [5, 199]. More generally, safety is a pivotal topic in the whole field of human-robot interaction research. Research in this area (see, e.g., the results of the project SAPHARI, European Community’s 7th Framework Programme, IP 287513, call FP7-ICT-2011-7) has led to the standard ISO/TS 15066:2016, which regulates collaborative robots and contains the norms of appropriate behaviour during physical human-robot interaction. Collaborative robots promise intrinsic safety by ensuring the hardware is safe to interact with. Furthermore, safety can also be increased through software [5, 199]. Motion planning and control should be framed so that the safety risk is minimised throughout the entire interaction. Some approaches propose danger criteria and attempt to minimise them. For instance, in [118] the robot is kept in low-inertia configurations in case of unanticipated collisions; moreover, the chance of collision is reduced by distancing the robot’s centre of mass from the human. Similarly, a metric of distance from the operator is used in the optimisation in [234], and safety barrier functions are built around the robot links to allow collision-free planning [122]. Motion planning should devise safe, reliable, effective and socially acceptable motions [207, 117]. A frontal versus a lateral approach of the robot towards the human receiver is discussed, with contrasting findings, in [111, 225].
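A hedged sketch of a danger index in the spirit of the criteria above: the multiplicative combination of proximity, approach-speed and inertia factors, and all scaling constants, are our own illustrative assumptions rather than the formulations of the cited works.

```python
def danger_index(dist, rel_vel, eff_inertia,
                 d_max=1.0, v_max=2.0, i_max=5.0):
    """Illustrative danger index: grows as the robot gets closer to the
    human, approaches faster, and presents a higher effective inertia;
    it vanishes beyond the critical distance d_max. All scaling
    constants are assumptions, not values from the cited works."""
    f_dist = max(0.0, (d_max - dist) / d_max)      # proximity factor
    f_vel = max(0.0, rel_vel / v_max)              # approach-speed factor
    f_inertia = min(1.0, eff_inertia / i_max)      # inertia factor
    return f_dist * f_vel * f_inertia
```

A planner could minimise such an index along candidate trajectories, e.g., preferring low-inertia configurations when close to the human.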
Such considerations are further used to develop the planner in [207], which is composed of three components: spatial reasoning to account for the human receiver (perspective placement [146]), path planning optimising over costs that account for safety, visibility and human arm comfort (human-aware manipulation planner [206]), and trajectory control to ensure minimum-jerk motions at the end effector (soft motion trajectory planning [32]). Humans minimise jerk in order to realise well-behaved trajectories for arm movements [77]. Minimum-jerk motions by a robotic giver also result in shorter reaction times and faster adaptation for human receivers [100]. Further, to better match human minimum-jerk trajectories, a decoupled minimum-jerk trajectory can be used, with a different time constant for the gravity axis z (thus decoupling the motion in the x-y plane from the motion along z) [99].
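A minimal sketch of such a decoupled minimum-jerk profile, assuming the standard fifth-order polynomial and a separate duration for the z axis:

```python
import numpy as np

def min_jerk_profile(t, T):
    """Fifth-order minimum-jerk interpolation factor in [0, 1]."""
    tau = np.clip(t / T, 0.0, 1.0)
    return 10 * tau**3 - 15 * tau**4 + 6 * tau**5

def decoupled_min_jerk(start, goal, t, T_xy, T_z):
    """End-effector position at time t: minimum-jerk interpolation with
    a separate duration for the gravity axis z, so vertical motion can
    settle on a different time scale than horizontal motion."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    s = np.array([min_jerk_profile(t, T_xy),
                  min_jerk_profile(t, T_xy),
                  min_jerk_profile(t, T_z)])
    return start + (goal - start) * s
```

With `T_z < T_xy`, the vertical component of the reach completes before the horizontal one, the kind of decoupling described in [99].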

III-E3 Robustness, Reactivity and Context Awareness

The robot’s motions should be flexible to accommodate changes in the environment and the behaviours of different partners. To this end, principles such as robustness, reactivity and context awareness should guide the design of human-robot interaction systems [83]. From this perspective, a fully pre-planned motion falls short of general adaptability. In other words, a fully deterministic approach to planning is only possible if the environment is fully known, as in the case of robot-to-robot handovers [189]. Instead, a mixture of planned motions and feedback control helps modify the motions and adapt to the partner. A switching mechanism that mixes global and local planning can help overcome the drawbacks of fixed pre-planned motions [147]. Fast responsiveness of the robot giver is particularly important, as it increases the positive impression of the interaction [112]. Interestingly, a human study suggests that the speed of the interaction might matter more than the spatial accuracy of the robot for the subjective experience of a human receiver [113]. When the robot acts as a receiver, adaptive reaching displays better performance than a fully pre-planned reaching motion in terms of predictability and aggressiveness, both important quality measures of a handover [152]. Humans adapt their actions to account for the workload of their partner [98]. Similarly, a robot should be aware of the task status [91]. For example, a more proactive robot giver can increase the speed of the handovers while negatively impacting the user experience; conversely, coordinating with a reactive robot can be perceived as a better user experience even if performance deteriorates [98].

While pure planning usually devises a feedforward trajectory to follow, control architectures provide the means to use sensory feedback and change the behaviour of the robot. Impedance control and admittance control are two common strategies in physical human-robot interaction [96, 10, 175, 172, 73]. Variants of the classical approaches include using redundancy and the null space [74, 65, 75], modelling the interaction forces [139, 140] and parameter adaptation [124, 123]. Early work on control proposes to use fuzzy logic on three aspects: relevance, confidence and effect [3]. Human-human handovers show a smooth and fluid continuum of motion. For this reason, rather than switching control paradigms between handover phases, a phaseless controller (with no distinction between reaching, passing and retracting) can be based on insights about human behaviour, e.g., the existence of motion during the passing and of coupling between the movements of the giver and those of the receiver [150]. However, one specific implementation of such a controller in [150] assumes that the object mass is known, in order to best modulate the grip forces.
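A minimal sketch of the Cartesian impedance law underlying many of the controllers cited above; the gains used here are illustrative.

```python
import numpy as np

def impedance_force(x, xd, x_des, xd_des, K, D):
    """Cartesian impedance law: the commanded force pulls the end
    effector towards a desired pose and velocity with spring-damper
    behaviour, so the robot yields compliantly when the partner
    perturbs it during the exchange."""
    return K @ (x_des - x) + D @ (xd_des - xd)
```

Because the commanded force depends on the tracking error rather than a rigid position command, contact forces from the partner displace the end effector smoothly instead of being resisted stiffly.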

Dynamic Movement Primitives (DMPs) [102, 201] represent an alternative to both pure feedforward and pure feedback control during an interaction. To specifically target a handover, the feedforward part can be weighted more at the start of the motion (shape-attraction), and subsequently the feedback (goal-attraction) can be weighted more as the interaction nears the physical exchange of the object [186]. In order to generate a wider range of behaviours during interactions, Interaction Primitives (IPs) build on the framework of DMPs and maintain a distribution over their parameters [21]. Probabilistic motion primitives [84] are shown to allow a robot to recognise human intent (task) and at the same time, generate commands for a robot according to the observed human motions, achieving coordination [137]. In this way, planning is replaced by inference on the probabilistic model. Learning from human feedback might also improve the adaptability of handovers. For example, in a contextual policy search, a robot could learn a reward function from human preference feedback [120].
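The shift from shape attraction to goal attraction can be sketched with a one-dimensional DMP whose forcing term is gated by the decaying phase variable; the gains and the gating scheme here are illustrative simplifications, not the implementation of [186].

```python
import numpy as np

def dmp_rollout(y0, g, forcing, dt=0.01, T=1.0,
                alpha=25.0, beta=6.25, alpha_s=4.0):
    """One-dimensional DMP rollout (Euler integration). The phase s
    decays from 1 to 0; the forcing term is gated by s, so the motion
    is shape-driven (feedforward) early on and purely goal-driven
    (feedback attractor) near the end of the reach."""
    y, yd, s = float(y0), 0.0, 1.0
    traj = [y]
    for _ in range(int(round(T / dt))):
        ydd = alpha * (beta * (g - y) - yd) + s * forcing(s)
        yd += ydd * dt
        y += yd * dt
        s += -alpha_s * s * dt          # canonical system: ds/dt = -alpha_s * s
        traj.append(y)
    return np.array(traj)
```

Whatever shape the forcing term imposes early in the motion, the decay of the phase guarantees convergence to the goal, which in a handover can be updated online to track the partner's hand.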

IV Physical handover phase

This phase encompasses the physical interaction between giver and receiver and the object transfer. During this phase, both actors are physically and cognitively engaged. Entering this phase, the giver possesses the object and thus controls its stability. After physical contact occurs, the giver can couple vision and force feedback to understand to what extent the receiver has grasped the object. At this point, the giver starts releasing the object in order to allow its full transition to the receiver. The timing must be coordinated, as an early release can cause the object to fall, while a late release causes higher interaction forces [51].

The receiver approaches this phase by planning a grasp on the object based on the visual feedback from the actions of the giver. Given the presentation of the object, the receiver then places the hand on the object so as to maximise the stability of the grasp and, at the same time, in the most appropriate way to perform their subsequent task. The transition ends when the giver entirely releases the object to the receiver, who then acquires the object in full. In this phase, the success of a handover is dictated by the coordination of the when and where of the joint action. For this reason, the most crucial aspect of this phase is the modulation of the grip force to complete a safe transfer of the object.

IV-A Grip Force Modulation

In line with the literature in neuroscience and psychology, the joint action of the physical handover is an interplay of anticipatory control and somatosensory feedback control [148]. Visual feedback augments the anticipatory control in starting the release of the object, by predicting and detecting the collision created by the hand of the receiver on the object [59]. Visual feedback is also used to adapt predictions to different speeds of the receiver’s reaching movements. From this perspective, the speed of the grip force release seems to be correlated with the reaching velocity of the receiver (i.e., the faster the approach, the faster the giver releases the object) [59]. Giver and receiver show similar strategies for controlling their grip forces with respect to the evolution of the load forces generated by the object and the exchange. However, the forces arising during the release differ when a robot acts as the receiver: the faster the retraction of the robot after grasping the object (while the human giver still partially holds it), the larger the interaction forces, possibly because the giver does not have enough time to withdraw [176]. All of these findings point to the fact that the giver is in charge of the safety of the object, while the receiver modulates the efficiency of the object exchange [148, 51].

The task of the giver is shown to resemble the evolution of a picking-up task [51, 49], in that the giver, like the picker, typically uses excess grip force to ensure that the object does not slip or drop. Moreover, a linear relationship between grip force and load force is observed in [51], except when either actor is supporting very little of the object load. An analysis of these grip forces reinforces the idea that the giver is responsible for the safety of the object during the transfer, while the receiver is responsible for the timing of the transfer. The same control strategy can also be applied to an under-actuated hand, using linear models leveraging force readings from the elbow of the robot [45]. Moreover, the feedback from a force sensor mounted on the robot’s wrist can be robustly used to modulate the release of an object [116].
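The linear grip/load coupling can be sketched as follows; the slip ratio and safety margin are illustrative values of our own, not parameters reported in the cited studies.

```python
def giver_grip_force(total_load, receiver_load, slip_ratio=0.4,
                     safety_margin=1.0):
    """Giver-side grip force (in N) during the transfer: grip scales
    linearly with the load the giver still carries, mirroring the
    grip/load coupling reported for humans, plus a small safety offset;
    it drops to zero once the receiver has taken the full load."""
    own_load = max(0.0, total_load - receiver_load)
    if own_load <= 0.0:
        return 0.0                      # load fully transferred: release
    return own_load / slip_ratio + safety_margin
```

As the receiver progressively takes up the load (measured, e.g., by a wrist force sensor), the giver's grip force ramps down smoothly rather than releasing abruptly.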

IV-B Error Handling

Another task for both the giver and the receiver is the handling of errors and disturbances during the handover. For example, there might be cases where the receiver makes unwanted contact with the object and the giver should not release the object. The contact forces exerted by the receiver should then be recognised as disturbances and should be compensated to maintain a stable grasp on the object. The tactile information from a Shadow Robot hand is used in [69] to build probabilistic models to detect these disturbances and feed them back to an effort controller. Another threat to safety is a potential fall of the object. It has been found that human givers tend to primarily rely on vision rather than haptic sensing to detect the fall of the object during handovers [181]. Thus, the object acceleration measured with an optical sensor at the gripper can be used as an indicator of handover failure (object dropping) [180]. More recently, force control and fuzzy control were similarly used [163, 164].
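A drop-detection check in the spirit of [180] can be sketched with a free-fall test on the object's measured vertical acceleration; the threshold and window below are our own illustrative choices.

```python
def object_dropping(acc_z_samples, g=9.81, tol=0.15, min_samples=3):
    """Flag a likely drop when the measured vertical acceleration of
    the object stays close to free fall (-g) for several consecutive
    samples; a held object reads near zero instead."""
    run = 0
    for a in acc_z_samples:
        run = run + 1 if abs(a + g) < tol * g else 0
        if run >= min_samples:
            return True
    return False
```

Requiring several consecutive free-fall samples keeps a single noisy reading from triggering a false recovery action.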

V Metrics

There is a general consensus on the need for standardised measurement tools and metrics in the human-robot interaction and collaboration communities [210, 9]. However, the spectrum of aspects to cover is so broad that finding a set of metrics and tools to adopt in every situation is very difficult. Nevertheless, such common and codified metrics would allow for an easier and fairer comparison among the proposed techniques, and would possibly help to build new frameworks. Metrics should aim to assess a handover qualitatively and quantitatively [210]. Along the same lines, a survey on metrics for human-robot interaction [158] reports productivity, efficiency, reliability, safety and co-activity to be the areas to assess for an interaction. Furthermore, there is a wide range of literature analysing metrics for human-robot interaction and collaboration, such as for human-robot teams [97, 185, 33, 167] and for social and physical interaction [16, 44, 221].

In this section we analyse three different types of metrics: 1) task performance metrics which provide a measure of success, 2) psycho-physiological metrics to measure the human partner’s physiological responses, and 3) subjective metrics in the form of user questionnaires. These metrics are represented graphically in Figure 3. We also analyse the variety of the test objects used in handover experiments.

V-A Task Performance Metrics

Task performance metrics are often utilised in HRI experiments to quantitatively evaluate success, and the choice of such metrics is highly dependent on the task. Of the 38 papers surveyed, 28 reported a measure of task performance for the handover experiments; the full list can be seen in the “Task Performance” column of Table I.

The performance of a handover can be coarsely described using the success rate: number of successful handovers divided by the total number of trials. The success rate metric is the most popular task performance metric for human-robot handovers [68, 54, 152, 36, 205, 212, 87, 112, 186, 137, 197, 232]. Even though the overall success rate of an implementation is important, it only reports a statistical view of the handovers rather than the quality of the interaction, and by itself it does not explain why and how the errors have occurred. Besides, different experimental protocols used by each research team make it difficult to compare the success rate metrics directly. The interaction force is another measure that has been commonly used to evaluate the success of the interaction [49, 180, 69, 150, 116, 176].

Considerations of performance also include the task completion time. From this perspective, fluency is an important characteristic of an interaction such as the handover. To evaluate fluency, objective metrics should include percentage of concurrent activity, human idle time, robot idle time and robot functional delay [93, 94, 95]. These concepts are also related to task effectiveness and interaction effort [169]. Moreover, time considerations can include the reaction time of the human, and also task completion time and overall handover time. Among the surveyed handover papers, several have reported time-related metrics, including the wait time of the robot and the human [36], total handover time [152, 28, 212, 87] and timing of different phases of the handover [7, 8, 155, 59, 232]. Other task performance metrics used in handovers include defining and minimising a cost function [144, 22, 137], relating either to the trajectory or to the interaction.
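The fluency measures above can be computed directly from each agent's activity intervals; the discretisation below is a simple illustrative implementation, not the original instrument from [93, 94, 95].

```python
def fluency_metrics(human_busy, robot_busy, total_time, dt=0.01):
    """Fluency measures from lists of (start, end) activity intervals:
    percentage of concurrent activity, plus human and robot idle time.
    Intervals are assumed non-overlapping within each agent."""
    n = int(round(total_time / dt))

    def busy_mask(intervals):
        mask = [False] * n
        for s, e in intervals:
            for i in range(int(round(s / dt)), min(n, int(round(e / dt)))):
                mask[i] = True
        return mask

    h, r = busy_mask(human_busy), busy_mask(robot_busy)
    concurrent = sum(a and b for a, b in zip(h, r)) * dt
    return {"pct_concurrent": concurrent / total_time,
            "human_idle": total_time - sum(h) * dt,
            "robot_idle": total_time - sum(r) * dt}
```

For a 10 s trial in which the human acts from 0 to 6 s and the robot from 4 to 10 s, the agents overlap for 2 s and each is idle for 4 s.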

V-B Psycho-Physiological Metrics

Aside from the task performance metrics, another way to gather quantitative data from user studies is to measure the physiological responses of the human partner during the interaction. In HRI, psycho-physiological measures can be used to identify and evaluate the human partner’s responses to the interaction with the robot [23]. Physiological signals such as electromyography (EMG) can be used to measure the human’s motor activity during the handover. Physiological signals can also be used to estimate the affective state of a human partner during an interaction.

Furthermore, physiological responses can be exploited when evaluating responses to a safe planner (less anxiety and surprise, with participants reporting feeling more calm) [119]. Another example is Heart Rate Variability (HRV), which can be used as a quantitative index to assess mental fatigue [223]. The psycho-physiological measures to assess anxiety and stress in response to the interaction include, but are not limited to: eye movement; heart rate and heart rate variability; blood pressure; electroencephalography; skin conductance response; urinary tests; pupillary dilation; respiratory rate and amplitude; muscular activity; corrugator muscle activity; electromyography.

Year  Paper                 Handover Location  # of Test Objects
2007  Edsinger [68]         Pre-planned        1
2009  Choi [54]             Pre-planned        3
2009  Huber [99]            Fixed              1
2010  Sisbot [207]          Pre-planned        1
2011  Dehais [63]           Pre-planned        1
2011  Cakmak [35]           Pre-planned        5
2011  Cakmak [36]           Fixed              1
2011  Micelli [152]         Pre-planned        2
2011  Bohren [28]           Fixed              1
2012  Mainprice [144]       Pre-planned        2
2012  Aleotti [7]           Pre-planned        2
2013  Chan [49]             Fixed              1
2013  Shi [205]             Online             1
2013  Yamane [230]          Online             1
2013  Grigore [87]          Fixed              1
2014  Aleotti [8]           Pre-planned        3
2014  Moon [155]            Fixed              1
2014  Koene [112]           Online             1
2014  Prada [186]           Online             1
2014  Koene [113]           Online             1
2014  Chan [45]             Fixed              1
2014  Chan [48]             Fixed              3
2015  Admoni [2]            Fixed              1
2015  Suay [214]            Pre-planned        3
2015  Huang [98]            Fixed              1
2016  Parastegari [180]     Fixed              1
2016  Maeda [137]           Online             4
2016  Vahrenkamp [220]      Online             1
2016  Kupcsik [120]         Fixed              1
2016  Medina [150]          Online             1
2017  Eguiluz [69]          Fixed              3
2017  Bestick [22]          Pre-planned        1
2017  Peternel [182]        Online             1
2017  Konstantinova [116]   Fixed              6
2018  Controzzi [59]        Fixed              1
2018  Pan [176]             Online             1
2020  Yang [232]            Pre-planned        1
2020  Rosenberger [197]     Pre-planned        13
2006  Kulic [119]           n/a                n/a
2009  Bartneck [15]         n/a                n/a
2019  Hoffman [95]          n/a                n/a
TABLE I: Comparison of the papers presenting novelty about human-robot handovers and experiments with a real system. This table reports: the paradigm (robot-to-human R2H or human-to-robot H2R); the aspects of the pre-handover phase under investigation (communication, grasping, motion planning and control, perception); whether the handover location was fixed, fully pre-planned, or adapted online; whether grip force was regulated and error handling dealt with during the physical handover; whether the experimental protocol foresaw a post-handover task; which metrics were utilised for the evaluation of the experiments (task performance, physiological, subjective); and how many objects were used in the handover tests.

Our survey shows that only a few works used psycho-physiological metrics in human-robot handover experiments. [182] and [214] utilised EMG to measure the muscle activity in the human arm during the handovers while [63], in addition to EMG, also made use of galvanic skin resistance and eye-tracking.

V-C Subjective Metrics

Subjective metrics assess aspects such as the subjective perception of the human regarding the perceived difficulty of the task, the cooperation and alliance of the robot, trust in the robot and contribution of the robot [95]. Additional concepts that are recurrent in a qualitative evaluation of an interaction include anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety [15]. The Robotic Social Attribute Scale (RoSAS) framework proposes measuring the subjective and social perception of robots using three dimensions: warmth, competence and discomfort [41]. Legibility, safety and physical comfort are also key criteria to consider [63]. Furthermore, ad-hoc questionnaires and the NASA-TLX (https://humansystems.arc.nasa.gov/groups/TLX/) can be utilised as additional instruments to assess the cognitive workload of humans.

The majority of the surveyed papers with a real-world handover implementation (22 out of 38) conducted a user study (the list can be seen in the “Subjective” column of Table I). The most common vehicle for user studies was a post-study survey, in which participants rated different aspects of their interaction on a Likert scale. The most commonly asked questions relate to the fluency of the interaction (i.e., natural, legible, predictable robot motions) [99, 63, 35, 152, 212, 155, 59, 95, 232], how safe [99, 63, 7, 8, 186] and comfortable [99, 63, 7, 8, 112, 186, 113, 176] the participants felt during the interaction, whether participants were satisfied with the experience [54, 112, 186, 113, 22, 59], the ease of use of the interface [54, 112, 186, 113, 180, 22], the competence of the robot [35, 120, 22, 176, 232], the appropriateness of the robot’s timing [152, 212, 155, 59], the perceived aggressiveness of the robot [152, 212], the trust in the robot [22, 95, 232] and whether the robot acted in a human-like manner [99, 8]. In addition, for some papers the main subjective evaluation was an indication of preference and/or subjective opinions and comments from the participants [36, 49, 155, 2, 180, 22].

Fig. 3: Mind map of the metrics. Metrics can assess not only the overall performance of a handover, with measures such as timing and success rate; but also the user experience, with psycho-physiological measures, such as heart rate variability, eye movement etc., and subjective measures, such as trust in the robot, working alliance and safety.

V-D Test Objects

There have been recent efforts in the grasping community to create physical benchmarks and experimental protocols in order to facilitate the replication of research results [142, 127, 20, 161]. Towards the same goal, object datasets have been generated for grasping, such as YCB: an object dataset [38, 39, 37]; DexNet: a synthetic dataset of 6.7 million point clouds, grasps, and analytic grasp metrics [141] and EGAD: a dataset with procedurally generated objects with varying grasp difficulty [157].

The choice of objects used in human-robot handovers usually depends on the target application; for example, it differs between industrial and domestic environments. The last column of Table I shows how many classes of test objects were used for the experiments in the surveyed papers. We found that 26 out of 38 papers used only a single object class for the experiments. The most commonly used objects were cylindrical objects such as bottles [207, 63, 35, 36, 152, 28, 230, 155, 112, 186, 113, 2, 214, 120, 150, 69, 22, 197, 45, 137, 116], followed by rectangular objects such as boxes [68, 99, 152, 98, 69, 232, 197, 180, 137, 116]. While some researchers opted for custom-designed objects with embedded sensors, mainly for measuring grip and load forces [87, 49, 176, 45], others chose application-specific objects, such as flyers to hand out [205].

VI Discussion

In this paper, we analyse in detail the two phases of a handover, reviewing the results from human studies and the corresponding state of the art in the robotic literature. We summarise the salient features of the papers implementing physical human-robot handovers in Table I. For each paper we report: the paradigm (robot-to-human or human-to-robot); what the authors investigated (communication, grasping, motion planning and control, and perception during the pre-handover phase; grip force and error handling during the physical handover); whether the handover location was fixed, pre-planned accounting for aspects such as the ergonomics, or adapted online to the human partner; whether the experimental protocol included a post-handover task for the receiver; the metrics used to assess the task performance and the user experience; and finally the number of different objects used in the real robot experiments.

VI-A Adaptability and Handover Location

Studies in neuroscience, physiology and psychology highlight that a handover is an intricate joint action that requires coordination at both the physical and the cognitive level. In particular, it seems apparent that the cognitive level of the interaction is as important as the physical level [128] for a robot to be considered a partner, and not only a tool [31]. To match the human skills of understanding and adaptation [68], it is preferable that robots also display adaptation and understanding. In fact, human givers can control the object’s position and orientation to facilitate the robotic receiver’s grasp of the object [68]. However, the fatigue of a human worker who has to accommodate the robot repeatedly and for a long time can become an issue [183]. From this perspective, robots that are able to adapt their behaviour to their partner could better assist their human counterpart [112, 98]. In other words, it is crucial to account for the feedback coming from the human partner during the interaction when controlling the robot. However, as can be seen in Table I, to date most approaches focus on fixed or pre-planned handover locations, with far fewer attempts at adapting to the human partner online. In many human-robot handover scenarios the handover location is kept fixed, i.e., the robot always moves to the same position for the object transfer, or the location is pre-planned based on several criteria (including ergonomics, safety, etc.) and not updated in real time with perceptual feedback. This is far from ideal, as the human potentially has to adapt to the robot and can incur cognitive and physical fatigue. The ergonomics of the interaction should be accounted for, as the transfer should happen in the comfort zone of the human, i.e., the range of positions (and tasks to perform in them) reachable (and doable) with little or no compensatory movements [115].
In humans, an optimisation principle over a muscle stress index is shown to determine the arm motions and postures (selected among the infinite possibilities of motion) and also the perceived comfort [108]. We believe that while pre-planning such a location accounting for the ergonomics and the physical characteristics of the human partner is appropriate, more effort is needed to adjust to changing circumstances, adapting in real time to the needs of the human partner.

VI-B Communication

Communication is a key factor in achieving successful coordination during a joint action. Humans use speech, gaze and body movements to communicate intent and coordinate during the execution of the joint action. We observe that robots have displayed a general lack of communication skills for object handovers. Most of the effort in the literature so far has focused on the physical aspects of the interaction: motion planning and control, grasping and perception. Effort in communication is less prevalent, as only 37% of the papers in Table I include an element of communication in their implementation.

VI-C Grip Release

Only a few papers focus on grip release and on how to handle potential falls of the object. While the literature on human studies continues to investigate how both agents modulate their grip force on the object and how the different sensory modalities (vision and touch) come into play, most of the reviewed work has adopted a simplistic approach: robotic givers completely release the object whenever a pull by the receiver is detected. Conversely, robotic receivers need to modulate their pulling force, as too little force could be unsafe for the object transfer and too much force could be dangerous for the human partner. We believe that grip force modulation is a key component that needs further investigation and effort. Many open research directions remain, such as: (i) the use of different hardware (under-actuated vs fully actuated, soft vs rigid, parallel-jaw grippers vs multi-fingered hands); (ii) the use of different grasping strategies (grasp type and location on the object); and (iii) the use of objects of varying size and weight. Such exploration could give rise to various options for modulating the grip.
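The contrast between a binary release-on-pull rule and grip force modulation can be sketched as follows. The linear grip/load relation is inspired by the human studies discussed above (e.g., [51]); the function name, gains and thresholds are illustrative assumptions, not values from the literature:

```python
def giver_grip_force(load_on_giver, k=2.0, f_min=0.5, f_slip=0.2):
    """Grip force command (N) for a robotic giver during object transfer.

    Instead of a binary 'release on pull' rule, the grip force decays
    linearly with the load still carried by the giver, so the object is
    neither held with less force than needed (risking slip) nor with more
    (fighting the receiver). k is a grip/load ratio, f_min a safety
    margin, f_slip the residual load below which release is complete.
    """
    if load_on_giver <= f_slip:   # receiver has taken the object's weight
        return 0.0                # full release
    return max(f_min, k * load_on_giver)

# As the receiver progressively takes up the weight, the measured load
# on the giver drops and the commanded grip force decreases with it.
for load in [4.0, 2.0, 1.0, 0.1]:
    print(giver_grip_force(load))   # 8.0, 4.0, 2.0, 0.0
```

A receiver-side controller could mirror this, ramping up its pulling force only as the giver's grip is felt to decrease.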

VI-D Paradigm

In terms of paradigm, Robot-to-Human (R2H) handovers have been investigated more frequently (87% of the surveyed papers in Table I) than Human-to-Robot (H2R) handovers (29%), while only 16% reported experiments in both directions. We speculate that the idea of a robot assistant that can fetch objects and give them to humans when needed has driven the deeper investigation of the R2H paradigm. The R2H paradigm is particularly representative of cases where the human receiver will then perform a cognitively challenging task with the object, a task that robots are not yet able to perform. However, in our opinion H2R handovers are worth exploring further and represent an open area of research. One of the biggest challenges in human-to-robot handovers is safety [197], as the robot should be careful not to contact the human giver. For this to happen, perception systems should be able to robustly discriminate the human giver (hand and arm) from the object [232, 197]. Moreover, in the H2R paradigm grasp planning becomes another critical issue, as the robot will have to perform a task with the handed-over object; at the very least it will need to put the object down in a pose preferable to humans [165].
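A minimal sketch of the safety concern above: scaling the robot's approach speed by the distance to the closest perceived hand point, assuming a perception system that already separates hand from object points. The function name, thresholds and interface are hypothetical:

```python
import numpy as np

def safe_approach_speed(gripper_pos, hand_points, v_max=0.25,
                        d_stop=0.05, d_slow=0.30):
    """Speed limit (m/s) for the approach phase of an H2R handover.

    Full speed beyond d_slow metres from the nearest hand point, a full
    stop below d_stop, and a linear ramp in between. hand_points are the
    3D points labelled 'hand' by the perception system.
    """
    d = min(np.linalg.norm(np.asarray(p) - np.asarray(gripper_pos))
            for p in hand_points)
    if d <= d_stop:
        return 0.0
    if d >= d_slow:
        return v_max
    return v_max * (d - d_stop) / (d_slow - d_stop)
```

Run inside the control loop, such a gate slows the robot as it nears the giver's hand and halts it before contact, independently of the grasp planner.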

VI-E Role of the Task

From the standpoint of the robot’s higher-level behaviour, there is a critical need to improve the integration of cognitive and physical reasoning [178] in both paradigms (H2R and R2H). In other words, robots currently lack an understanding of the overall goal of such an action. Such understanding is the key contributor to enabling higher-order planning [85, 196, 151, 56]. For example, robotic grasping has achieved impressive performance [156, 1, 195, 129]; however, the ultimate goal of the grasp is rarely taken into account [170]. As a result, robots can manage to grasp objects, but these grasps seldom allow the execution of a task with the object. In a handover, a successful grasp should account for the interaction partner, and this is a key feature that a robot should display in order to be fully collaborative.

Following a similar reasoning, we believe that any experimental protocol should include a task to be performed by the receiver with the handed-over object, as proposed in [56]. This is a critical consideration because the object exchange is normally initiated so that the receiver can perform a task with the handed-over object. A complete experimental procedure should consider the capability of the receiver to use the object directly following the handover. If the receiver can grasp the object in a way that its subsequent use requires no further in-hand or bi-manual re-manipulation, then the receiver can start the task immediately after the physical exchange. Conversely, the receiver might need to re-adjust their grasp if the temporary grasp realised during the exchange is not ideal for using the object in the specific task. This post-handover grasp adjustment could decrease the quality of the handover in objective terms (longer task performance time, higher strain when the handover happens multiple times) and in subjective terms (the giver could be perceived as a lesser partner, and the task as more cognitively demanding). These quality evaluations are pivotal in establishing the degree of success of a handover, as discussed in Section V on metrics. However, very few experimental protocols include a posterior task for the receiver (18% of the surveyed papers). Even though it might be argued that a handover is finished after the object transfer, we believe that such a task is important to effectively assess the overall performance of the dyad and to gauge the experience of the human partner, particularly in the R2H paradigm.

VI-F Proposed Set of Metrics

Our survey has revealed a need for standardisation in the choice of metrics and objects for real-robot experiments. Most of the surveyed papers in Table I report results using task-performance metrics (e.g., success rate and timings) and subjective metrics on the experience of the human partner (often in the form of Likert-scale post-experiment questionnaires). The last three rows of the table report papers that focus purely on metrics for handovers. We believe that a minimal set of metrics should be defined to enable a fairer and more direct comparison among the different approaches. To this end, we propose the following combination of metrics that assess the most common aspects of a handover:

  1. Task performance (objective): success rate, total handover time, receiver’s task completion time.

  2. Experience of the human (subjective): fluency, trust in robot, working alliance.

This minimal set includes metrics which are clearly defined, and thus reproducible, and which are easy to measure. For these reasons, the set does not include psycho-physiological measurements, as they require sensors placed on the body of the human participant and are therefore difficult to standardise and deploy in a variety of contexts.
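As an illustration, the objective part of the proposed minimal set can be aggregated from logged trials as follows; the `Trial` record and its field names are hypothetical, not part of the proposal itself:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    success: bool
    handover_time: float         # s, from reach onset to full release
    task_completion_time: float  # s, receiver's post-handover task

def summarise(trials):
    """Aggregate the proposed objective metrics over an experiment:
    success rate over all trials, mean times over successful ones."""
    ok = [t for t in trials if t.success]
    return {
        "success_rate": len(ok) / len(trials),
        "mean_handover_time": mean(t.handover_time for t in ok),
        "mean_task_time": mean(t.task_completion_time for t in ok),
    }
```

Averaging the timing metrics only over successful trials is one possible convention; reporting it explicitly is what matters for comparability.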

The experience of the human participant should be assessed by administering the following questionnaire (which includes a subset of the questions from [95]):

  1. Human-Robot Fluency

    • The human-robot team worked fluently together.

    • The human-robot team’s fluency improved over time.

    • The robot contributed to the fluency of the interaction.

  2. Trust in Robot

    • I trusted the robot to do the right thing at the right time.

    • The robot was trustworthy.

  3. Working Alliance

    • The robot accurately perceives what my goals are.

    • I understand what the robot’s goals are.

    • The robot and I are working towards mutually agreed upon goals.

All questions should be evaluated on a Likert scale. We believe that this set of questions covers a broad range of important general aspects of the interaction, namely fluency, trust and working alliance. Additional questions can be added to this minimal set to investigate more specific aspects of a handover, such as preference between different approaches, and learning/improvement over time.
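A hedged sketch of how such a questionnaire could be scored, averaging Likert ratings per subscale; the data layout and function name are assumptions for illustration, not a prescription of [95]:

```python
from statistics import mean

# The three subscales proposed above, keyed by subscale name.
SUBSCALES = {
    "fluency": [
        "The human-robot team worked fluently together.",
        "The human-robot team's fluency improved over time.",
        "The robot contributed to the fluency of the interaction.",
    ],
    "trust": [
        "I trusted the robot to do the right thing at the right time.",
        "The robot was trustworthy.",
    ],
    "working_alliance": [
        "The robot accurately perceives what my goals are.",
        "I understand what the robot's goals are.",
        "The robot and I are working towards mutually agreed upon goals.",
    ],
}

def score(responses):
    """responses: mapping from question text to a Likert rating
    (e.g., 1-7). Returns the mean rating per subscale."""
    return {name: mean(responses[q] for q in questions)
            for name, questions in SUBSCALES.items()}
```

Reporting per-subscale means (rather than a single pooled score) preserves the distinction between fluency, trust and working alliance when comparing approaches.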

VI-G Objects

Of the surveyed papers in Table I, 68% used only a single object class and only 10% used four or more object classes. This observation shows that generalisation of handovers to a variety of objects was not a main focus until very recently [197]. The most commonly used test objects have been either cylindrical objects such as bottles (55% of the surveyed work) or rectangular objects such as boxes (26%). This is likely because these shapes are easier to grasp and many everyday objects belong to these categories. We advocate the use of a broader set of objects, as different objects elicit different behaviours and can be used to address different manipulation tasks. We propose the use of objects that elicit all three grasp macro-types in [62], i.e., power, intermediate and precision grasps. The three macro-types offer sufficient opportunities to explore different behaviours, investigating aspects such as different ways of offering and receiving objects; different post-handover tasks; and the handover of objects of different weights and shapes. Nevertheless, we acknowledge that the choice of objects may depend on the specific focus of each study. For example, studies on the reaching motion might focus on the motion rather than on the objects, so three objects evoking the three grasp types would be enough. However, studies more focused on the objects, such as a study on object orientation in preparation for handing over, would require a wider set of experimental objects.

Our proposal of a minimal set of metrics and objects for experimental protocols aims to increase the possibility of a fair comparison among approaches. A handover is a sophisticated joint action that includes many different aspects (communication, planning, grip release, etc.). For this reason, protocols and metrics have generally been non-uniform. We believe that our proposed metrics and objects cover the most common aspects of a handover, thus enabling a fair comparison among approaches, while allowing for additions when the research questions call for investigation into more specific aspects.

Acknowledgment

The authors want to thank the Australian Research Council (Centre of Excellence for Robotic Vision (project number CE140100016)). Elizabeth Croft acknowledges support from Australian Research Council (project number DP200102858) and the Natural Sciences and Engineering Research Council of Canada (project number RGPIN-2017-04450). Tommaso Pardi is supported by a doctoral bursary of the UK Nuclear Decommissioning Authority. The authors want to thank: Peter Corke for his invaluable feedback; Alessandro De Luca for the good advice about safety in physical human-robot interaction; Marco Controzzi for the discussion on the role of the task in experimental protocols.

References

  • [1] M. Adjigble, N. Marturi, V. Ortenzi, V. Rajasekaran, P. Corke, and R. Stolkin (2018-10) Model-free and learning-free grasping by local contact moment matching. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, pp. 2933–2940. External Links: Document, ISBN 978-1-5386-8094-0 Cited by: §VI-E.
  • [2] H. Admoni, A. Dragan, S. S. Srinivasa, and B. Scassellati (2014) Deliberate delays during robot-to-human handovers improve compliance with gaze communication. In ACM/IEEE International Conference on Human-Robot Interaction, pp. 49–56. External Links: ISBN 9781450326582, Document Cited by: §III-A, §V-C, §V-D, TABLE I.
  • [3] A. Agah and K. Tanie (1997) Human interaction with a service robot: mobile-manipulator handing over an object to a human. In Proceedings of International Conference on Robotics and Automation, Vol. 1, pp. 575–580 vol.1. Cited by: §III-E3.
  • [4] A. Ajoudani, A. M. Zanchettin, S. Ivaldi, A. Albu-Schäffer, K. Kosuge, and O. Khatib (2017) Progress and prospects of the human-robot collaboration. Autonomous Robots. External Links: Document Cited by: §I, §I.
  • [5] R. Alami, A. Albu-Schaeffer, A. Bicchi, R. Bischoff, R. Chatila, A. De Luca, A. De Santis, G. Giralt, J. Guiochet, G. Hirzinger, F. Ingrand, V. Lippiello, R. Mattone, D. Powell, S. Sen, B. Siciliano, G. Tonietti, and L. Villani (2006) Safe and dependable physical human-robot interaction in anthropic domains: state of the art and challenges. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Cited by: §III-E2.
  • [6] J. Aleotti and S. Caselli (2011) Part-based robot grasp planning from human demonstration. In IEEE International Conference on Robotics and Automation (ICRA), External Links: Document Cited by: §III-B.
  • [7] J. Aleotti, V. Micelli, and S. Caselli (2012) Comfortable robot to human object hand-over. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Cited by: §III-B, §V-A, §V-C, TABLE I.
  • [8] J. Aleotti, V. Micelli, and S. Caselli (2014) An affordance sensitive system for robot to human object handover. International Journal of Social Robotics 6 (4), pp. 653–666. Cited by: §III-B, §V-A, §V-C, TABLE I.
  • [9] A. Aly, S. Griffiths, and F. Stramandinoli (2017) Metrics and benchmarks in human-robot interaction: recent advances in cognitive robotics. Cognitive Systems Research 43, pp. 313 – 323. External Links: ISSN 1389-0417, Document Cited by: §V.
  • [10] R. J. Anderson and M. W. Spong (1988) Hybrid impedance control of robotic manipulators. IEEE Journal on Robotics and Automation 4 (5), pp. 549–556. Cited by: §III-E3.
  • [11] C. Ansuini, L. Giosa, L. Turella, G. Altoè, and U. Castiello (2008) An object for an action, the same object for other actions: effects on hand shaping. Experimental Brain Research 185 (1), pp. 111–119. Cited by: §III-B.
  • [12] C. Ansuini, M. Santello, S. Massaccesi, and U. Castiello (2006) Effects of end-goal on Hand shaping. J Neurophysiol 95, pp. 2456–2465. External Links: Document Cited by: §III-B.
  • [13] A. Aristidou and J. Lasenby (2011) FABRIK: a fast, iterative solver for the inverse kinematics problem. Graphical Models 73 (5), pp. 243–260. Cited by: §III-E1.
  • [14] Australian Centre for Robotic Vision (2018) A robotics roadmap for australia. External Links: Link Cited by: §I.
  • [15] C. Bartneck, D. Kulic, E. Croft, and S. Zoghbi (2008-01) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics 1, pp. 71–81. External Links: Document Cited by: §V-C, TABLE I.
  • [16] C. Bartneck, D. Kulić, E. Croft, and S. Zoghbi (2009) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International journal of social robotics 1 (1), pp. 71–81. Cited by: §V.
  • [17] P. Basili, M. Huber, T. Brandt, S. Hirche, and S. Glasauer (2009) Investigating human-human approach and hand-over. In Human Centered Robot Systems. Cognitive Systems Monographs, pp. 151–160. External Links: Document Cited by: §III-E.
  • [18] C. Becchio, L. Sartori, M. Bulgheroni, and U. Castiello (2008) Both your intention and mine are reflected in the kinematics of my reach-to-grasp movement. Cognition 106, pp. 894–912. External Links: Document Cited by: §III-B.
  • [19] C. Becchio, L. Sartori, and U. Castiello (2010) Toward you: the social side of actions. Current Directions in Psychological Science. Cited by: §II.
  • [20] Y. Bekiroglu, N. Marturi, M. A. Roa, K. J. M. Adjigble, T. Pardi, C. Grimm, R. Balasubramanian, K. Hang, and R. Stolkin (2019) Benchmarking protocol for grasp planning algorithms. IEEE Robotics and Automation Letters 5 (2), pp. 315–322. Cited by: §V-D.
  • [21] H. Ben Amor, G. Neumann, S. Kamthe, O. Kroemer, and J. Peters (2014) Interaction primitives for human-robot cooperation tasks. In 2014 IEEE International Conference on Robotics and Automation (ICRA), Vol. , pp. 2831–2837. Cited by: §III-E3.
  • [22] A. Bestick, R. Bajcsy, and A. D. Dragan (2016) Implicitly assisting humans to choose good grasps in robot to human handovers. In International Symposium on Experimental Robotics, pp. 341–354. Cited by: §III-B, §V-A, §V-C, §V-D, TABLE I.
  • [23] C. L. Bethel, K. Salomon, R. R. Murphy, and J. L. Burke (2007) Survey of psychophysiology measurements applied to human-robot interaction. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Vol. , pp. 732–737. Cited by: §V-B.
  • [24] A. Bicchi and V. Kumar (2000) Robotic grasping and contact: a review. In IEEE International Conference on Robotics and Automation (ICRA), Cited by: §III-B.
  • [25] A. Billard and D. Kragic (2019) Trends and challenges in robot manipulation. Science. External Links: Document Cited by: §I.
  • [26] J. Bohg, K. Hausman, B. Sankaran, O. Brock, D. Kragic, S. Schaal, and G. S. Sukhatme (2017) Interactive perception: leveraging action in perception and perception in action. IEEE Transactions on Robotics 33 (6), pp. 1273–1291. Cited by: §III-B.
  • [27] J. Bohg, A. Morales, T. Asfour, and D. Kragic (2014) Data-driven grasp synthesis—a survey. IEEE Transactions on Robotics 30 (2), pp. 289–309. Cited by: §III-B.
  • [28] J. Bohren, R. B. Rusu, E. Gil Jones, E. Marder-Eppstein, C. Pantofaru, M. Wise, L. Mösenlechner, W. Meeussen, and S. Holzer (2011) Towards autonomous robotic butlers: lessons learned with the pr2. In 2011 IEEE International Conference on Robotics and Automation, Vol. , pp. 5568–5575. Cited by: §I, §V-A, §V-D, TABLE I.
  • [29] A. Borghi (2004-02) Object concepts and action: extracting affordances from objects parts. Acta psychologica 115, pp. 69–96. External Links: Document Cited by: §III-B.
  • [30] J. Boucher, U. Pattacini, A. Lelong, G. Bailly, F. Elisei, S. Fagel, P. Dominey, and J. Ventre-Dominey (2012) I reach faster when i see you look: gaze effects in human–human and human–robot face-to-face cooperation. Frontiers in Neurorobotics 6, pp. 3. Cited by: §III-A.
  • [31] C. Breazeal, A. Brooks, D. Chilongo, J. Gray, G. Hoffman, C. Kidd, H. Lee, J. Lieberman, and A. Lockerd (2004-12) Working collaboratively with humanoid robots. pp. 253 – 272 Vol. 1. External Links: ISBN 0-7803-8863-1, Document Cited by: §VI-A.
  • [32] X. Broquere, D. Sidobre, and I. Herrera-Aguilar (2008) Soft motion trajectory planner for service manipulator robot. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. , pp. 2808–2813. Cited by: §III-E2.
  • [33] J. Burke, K. S. Pratt, R. Murphy, M. Lineberry, M. Taing, and B. Day (2008) Toward developing hri metrics for teams: pilot testing in the field. pp. 21. Cited by: §V.
  • [34] B. Busch, G. Maeda, Y. Mollard, M. Demangeat, and M. Lopes (2017) Postural optimization for an ergonomic human-robot interaction. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. , pp. 2778–2785. Cited by: §III-D.
  • [35] M. Cakmak, S. S. Srinivasa, M. K. Lee, J. Forlizzi, and S. Kiesler (2011) Human preferences for robot-human hand-over configurations. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Cited by: §III-E1, §V-C, §V-D, TABLE I.
  • [36] M. Cakmak, S. S. Srinivasa, M. K. Lee, S. Kiesler, and J. Forlizzi (2011) Using spatial and temporal contrast for fluent robot-human hand-overs. In ACM/IEEE International Conference on Human-Robot Interaction (HRI), Cited by: §III-A, §V-A, §V-A, §V-C, §V-D, TABLE I.
  • [37] B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar (2017) Yale-CMU-Berkeley dataset for robotic manipulation research. The International Journal of Robotics Research 36 (3), pp. 261–268. External Links: Document Cited by: §V-D.
  • [38] B. Calli, A. Singh, A. Walsman, S. Srinivasa, P. Abbeel, and A. M. Dollar (2015-07) The YCB object and model set: towards common benchmarks for manipulation research. In 2015 International Conference on Advanced Robotics (ICAR), pp. 510–517. External Links: Document, ISBN 978-1-4673-7509-2 Cited by: §V-D.
  • [39] B. Calli, A. Walsman, A. Singh, S. Srinivasa, P. Abbeel, and A. M. Dollar (2015) Benchmarking in manipulation research: using the Yale-CMU-Berkeley object and model set. IEEE Robotics & Automation Magazine 22 (3), pp. 36–52. External Links: Document Cited by: §V-D.
  • [40] F. Capozzi, C. Becchio, F. Garbarini, S. Savazzi, and L. Pia (2016) Temporal perception in joint action: this is my action. Consciousness and Cognition. External Links: Document Cited by: §II.
  • [41] C. M. Carpinella, A. B. Wyman, M. A. Perez, and S. J. Stroessner (2017) The robotic social attributes scale (rosas): development and validation. In ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vol. , pp. 254–262. Cited by: §V-C.
  • [42] U. Castiello (2003) Understanding other people’s actions: intention and attention. Journal of experimental psychology. Human perception and performance 29, pp. 416–30. External Links: Document Cited by: §III-A.
  • [43] L. Cavalli, G. Di Pietro, and M. Matteucci (2019) Towards affordance prediction with vision via task oriented grasp quality metrics. arXiv preprint. External Links: 1907.04761 [cs.RO] Cited by: §III-B.
  • [44] T. Chaminade (2008) Social resonance: a theoretical framework and benchmarks to evaluate the social competence of humanoid robots. Proceedings of Metrics for Human-Robot Interaction: A Workshop at the Third ACM/IEEE International Conference on Human-Robot Interaction (HRI). Cited by: §V.
  • [45] W. P. Chan, I. Kumagai, S. Nozawa, Y. Kakiuchi, K. Okada, and M. Inaba (2014) Implementation of a robot-human object handover controller on a compliant underactuated hand using joint position error measurements for grip force and load force estimations. In IEEE International Conference on Robotics and Automation (ICRA), Vol. , pp. 1190–1195. Cited by: §IV-A, §V-D, TABLE I.
  • [46] W. P. Chan, K. Nagahama, H. Yaguchi, Y. Kakiuchi, K. Okada, and M. Inaba (2015) Implementation of a framework for learning handover grasp configurations through observation during human-robot object handovers. In IEEE-RAS International Conference on Humanoid Robots (Humanoids), Cited by: §III-B.
  • [47] W. P. Chan, M. K. X. J. Pan, E. A. Croft, and M. Inaba (2015) Characterization of handover orientations used by humans for efficient robot to human handovers. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Cited by: §III-B.
  • [48] W. P. Chan, Y. Kakiuchi, K. Okada, and M. Inaba (2014) Determining proper grasp configurations for handovers through observation of object movement patterns and inter-object interactions during usage. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Cited by: TABLE I.
  • [49] W. P. Chan, C. A. Parker, H. M. Van der Loos, and E. A. Croft (2013) A human-inspired object handover controller. The International Journal of Robotics Research 32 (8), pp. 971–983. External Links: Document, ISBN 0278-3649, ISSN 0278-3649 Cited by: §IV-A, §V-A, §V-C, §V-D, TABLE I.
  • [50] W. P. Chan, M. K. X. J. Pan, E. A. Croft, and M. Inaba (2019) An affordance and distance minimization based method for computing object orientations for robot human handovers. International Journal of Social Robotics, pp. 1–20. External Links: Document Cited by: §III-B.
  • [51] W. P. Chan, C. A.C. Parker, H.F. M. Van der Loos, and E. A. Croft (2012) Grip forces and load forces in handovers: implications for designing human-robot handover controllers. In ACM/IEEE International Conference on Human-Robot Interaction (HRI), Cited by: §IV-A, §IV-A, §IV.
  • [52] C. Chao and A. L. Thomaz (2010) Turn taking for human-robot interaction. In 2010 AAAI Fall Symposium Series, Cited by: §III-A.
  • [53] A. Chemero (2003) An outline of a theory of affordances. Ecological Psychology 15 (2), pp. 181–195. External Links: Document Cited by: §III-B.
  • [54] Y. S. Choi, T. Chen, A. Jain, C. Anderson, J. D. Glass, and C. C. Kemp (2009) Hand it over or set it down: a user study of object delivery with an assistive mobile manipulator. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Cited by: §II, §V-A, §V-C, TABLE I.
  • [55] H. I. Christensen (2016) A roadmap for US robotics from internet to robotics. Note: https://cra.org/ccc/wp-content/uploads/sites/2/2016/11/roadmap3-final-rs-1.pdf(Accessed: May 25, 2020) Cited by: §I.
  • [56] F. Cini, V. Ortenzi, P. Corke, and M. Controzzi (2019) On the choice of grasp type and location when handing over an object. Science Robotics 4 (27). Cited by: §III-B, §III-B, §III-B, §VI-E, §VI-E.
  • [57] H. H. Clark (1996) Using language. ’Using’ Linguistic Books, Cambridge University Press. External Links: Document Cited by: §III-A.
  • [58] R. Cohen and D. Rosenbaum (2004) Where grasps are made reveals how grasps are planned: generation and recall of motor plans. Experimental brain research 157, pp. 486–95. External Links: Document Cited by: §III-B.
  • [59] M. Controzzi, H. Singh, F. Cini, T. Cecchini, A. Wing, and C. Cipriani (2018-12) Humans adjust their grip force when passing an object according to the observed speed of the partner’s reaching out movement. Experimental Brain Research 236 (12), pp. 3363–3377. External Links: Document, ISSN 0014-4819 Cited by: §IV-A, §V-A, §V-C, TABLE I.
  • [60] A. Cosgun, A. J. Trevor, and H. I. Christensen (2015) Did you mean this object?: detecting ambiguity in pointing gesture targets. In ACM/IEEE international conference on Human-Robot Interaction (HRI) workshop on Towards a Framework for Joint Action, Cited by: §III-A.
  • [61] M. R. Cutkosky and J. M. Hyde (1997) Manipulation control with dynamic tactile sensing. In In: Sixth international symposium on robotics research. Hidden Valley, Pennsylvania, Cited by: §II.
  • [62] M. R. Cutkosky (1989) On grasp choice, grasp models, and the design of hands for manufacturing tasks. IEEE Transactions on Robotics and Automation 5 (3), pp. 269–279. External Links: Document, 1106.3747, ISBN 1042-296X VO - 5, ISSN 1042296X Cited by: §III-B, §VI-G.
  • [63] F. Dehais, E. A. Sisbot, R. Alami, and M. Causse (2011) Physiological and subjective evaluation of a human-robot object hand-over task. Applied Ergonomics 42 (6), pp. 785–791. External Links: Document Cited by: §V-B, §V-C, §V-C, §V-D, TABLE I.
  • [64] R. Detry, J. Papon, and L. Matthies (2017) Task-oriented grasping with semantic and geometric scene understanding. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3266–3273. External Links: Document, ISBN 978-1-5386-2682-5 Cited by: §III-B.
  • [65] A. Dietrich, C. Ott, and A. Albu-Schäeffer (2015-09) An overview of null space projections for redundant, torque-controlled robots. The International Journal of Robotics Research 34, pp. 1385–1400. External Links: Document Cited by: §III-E3.
  • [66] A. D. Dragan, K. C. T. Lee, and S. S. Srinivasa (2013) Legibility and predictability of robot motion. In 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vol. , pp. 301–308. Cited by: §III-E1.
  • [67] A. Dragan, R. Holladay, and S. Srinivasa (2014) An analysis of deceptive robot motion. In Robotics: Science and Systems, Cited by: §III-A.
  • [68] A. Edsinger and C. C. Kemp (2007) Human-robot interaction for cooperative manipulation: handing objects to one another. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Vol. , pp. 1167–1172. Cited by: §V-A, §V-D, TABLE I, §VI-A.
  • [69] A. G. Eguíluz, I. Rañó, S. A. Coleman, and T. M. McGinnity (2017) Reliable object handover through tactile force sensing and effort control in the shadow robot hand. In 2017 IEEE International Conference on Robotics and Automation (ICRA), Vol. , pp. 372–377. Cited by: §IV-B, §V-A, §V-D, TABLE I.
  • [70] T. Feix, I. M. Bullock, and A. M. Dollar (2014) Analysis of human grasping behavior: object characteristics and grasp type. IEEE Transactions on Haptics 7 (3), pp. 311–323. External Links: Document Cited by: §III-B.
  • [71] T. Feix, I. M. Bullock, and A. M. Dollar (2014) Analysis of human grasping behavior: correlating tasks, objects and grasps. IEEE Transactions on Haptics 7 (4), pp. 430–441. External Links: Document, ISSN 1939-1412 Cited by: §III-B.
  • [72] T. Feix, J. Romero, H. Schmiedmayer, A. M. Dollar, and D. Kragic (2016) The grasp taxonomy of human grasp types. IEEE Transactions on Human-Machine Systems 46 (1), pp. 66–77. External Links: Document Cited by: §III-B, §III-C.
  • [73] F. Ferraguti, C. T. Landi, L. Sabattini, M. Bonfè, C. Fantuzzi, and C. Secchi (2019) A variable admittance control strategy for stable physical human–robot interaction. The International Journal of Robotics Research 38 (6), pp. 747–765. Cited by: §III-E3.
  • [74] F. Ficuciello, A. Romano, L. Villani, and B. Siciliano (2014) Cartesian impedance control of redundant manipulators for human-robot co-manipulation. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. , pp. 2120–2125. Cited by: §III-E3.
  • [75] F. Ficuciello, L. Villani, and B. Siciliano (2016) Redundancy resolution in human-robot co-manipulation with cartesian impedance control. In Experimental Robotics, pp. 165–176. Cited by: §III-E3.
  • [76] J. Flanagan and R. Johansson (2003) Action plans used in action observation. Nature 424, pp. 769–71. External Links: Document Cited by: §II.
  • [77] T. Flash and N. Hogan (1985) The coordination of arm movements: an experimentally confirmed mathematical model.. The Journal of neuroscience : the official journal of the Society for Neuroscience 5 (7), pp. 1688–703. External Links: Document, ISSN 0270-6474 Cited by: §III-E2.
  • [78] T. Fong, I. Nourbakhsh, and K. Dautenhahn (2003) A survey of socially interactive robots. Robotics and Autonomous Systems. External Links: Document Cited by: §I.
  • [79] M. E. Foster, T. By, M. Rickert, and A. Knoll (2006) Human-robot dialogue for joint construction tasks. In Proceedings of the 8th International Conference on Multimodal Interfaces, ICMI ’06, pp. 68–71. Cited by: §III-A.
  • [80] A. Freedy, E. DeVisser, G. Weltman, and N. Coeyman (2007) Measurement of trust in human-robot collaboration. In International Symposium on Collaborative Technologies and Systems, Cited by: §I.
  • [81] M. Gharbi, P. Paubel, A. Clodic, O. Carreras, R. Alami, and J. Cellier (2015-08) Toward a better understanding of the communication cues involved in a human-robot object transfer. pp. 319–324. External Links: Document Cited by: §III-A.
  • [82] J. J. Gibson (1979-12) The ecological approach to visual perception. Routledge, Hillsdale, NJ.. External Links: Document, ISBN 9781315740218 Cited by: §III-B.
  • [83] M. Giuliani, C. Lenz, T. Müller, M. Rickert, and A. Knoll (2010-09) Design principles for safety in human-robot interaction. International Journal of Social Robotics 2, pp. 253–274. External Links: Document Cited by: §III-E3.
  • [84] S. Gomez-Gonzalez, G. Neumann, B. Schölkopf, and J. Peters (2020) Adaptation and robust learning of probabilistic movement primitives. IEEE Transactions on Robotics 36 (2), pp. 366–379. Cited by: §III-E3.
  • [85] D. A. Gonzalez, B. E. Studenka, C. M. Glazebrook, and J. L. Lyons (2011) Extending end-state comfort effect: do we consider the beginning state comfort of another?. Acta Psychologica 136 (3), pp. 347 – 353. External Links: ISSN 0001-6918, Document Cited by: §III-B, §VI-E.
  • [86] F. Gonzalez, F. Gosselin, and W. Bachta (2014-10) Analysis of hand contact areas and interaction capabilities during manipulation and exploration. IEEE Transactions on Haptics 7, pp. . External Links: Document Cited by: §III-B.
  • [87] E. C. Grigore, K. Eder, A. G. Pipe, C. Melhuish, and U. Leonards (2013) Joint action understanding improves robot-to-human object handover. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4622–4629. Cited by: §III-A, §V-A, §V-A, §V-D, TABLE I.
  • [88] S. Hampali, M. Oberweger, M. Rad, and V. Lepetit (2019) HO-3D: a multi-user, multi-object dataset for joint 3d hand-object pose estimation. arXiv preprint arXiv:1907.01481. Cited by: §III-C.
  • [89] C. Hansen, P. Arambel, K. B. Mansour, V. Perdereau, and F. Marin (2017) Human–human handover tasks and how distance and object mass matter. Perceptual and Motor Skills 124 (1), pp. 182–199. Cited by: §III-D.
  • [90] Y. Hasson, G. Varol, D. Tzionas, I. Kalevatykh, M. J. Black, I. Laptev, and C. Schmid (2019) Learning joint reconstruction of hands and manipulated objects. CoRR abs/1904.05767. External Links: 1904.05767 Cited by: §III-C.
  • [91] K. P. Hawkins, S. Bansal, N. N. Vo, and A. F. Bobick (2014) Anticipating human actions for collaboration in the presence of task and sensor uncertainty. In 2014 IEEE international conference on robotics and automation (ICRA), pp. 2215–2222. Cited by: §III-E3.
  • [92] M. Hjelm, C. H. Ek, R. Detry, and D. Kragic (2015) Learning human priors for task-constrained grasping. In International Conference on Computer Vision Systems, L. Nalpantidis, V. Krüger, J.-O. Eklundh, and A. Gasteratos (Eds.), pp. 207–217. External Links: Document Cited by: §III-B.
  • [93] G. Hoffman and C. Breazeal (2007) Effects of anticipatory action on human-robot teamwork: efficiency, fluency, and perception of team. In 2007 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vol. , pp. 1–8. Cited by: §V-A.
  • [94] G. Hoffman and C. Breazeal (2007) Cost-Based anticipatory action selection for human–robot fluency. IEEE Transactions on Robotics 23 (5), pp. 952–961. External Links: Document Cited by: §V-A.
  • [95] G. Hoffman (2019-06) Evaluating Fluency in Human–Robot Collaboration. IEEE Transactions on Human-Machine Systems 49 (3), pp. 209–218. External Links: Document Cited by: §V-A, §V-C, §V-C, TABLE I, §VI-F.
  • [96] N. Hogan (1985-03) Impedance control - An approach to manipulation. I - Theory. II - Implementation. III - Applications. ASME Transactions Journal of Dynamic Systems and Measurement Control B 107, pp. 1–24. Cited by: §III-E3.
  • [97] H. Holzapfel, R. Mikut, and C. Burghart (2008-09) Steps to creating metrics for humanlike movements and communication skills (of robots). In Proc. Workshop on Metrics for Human Robot Interaction, 3rd ACM/IEEE Conference on Human-Robot Interaction, Amsterdam, pp. 3–12. Cited by: §V.
  • [98] C. Huang, M. Cakmak, and B. Mutlu (2015) Adaptive coordination strategies for human-robot handovers. In Robotics: Science and Systems, Cited by: §III-E3, §V-D, TABLE I, §VI-A.
  • [99] M. Huber, H. Radrich, C. Wendt, M. Rickert, A. Knoll, T. Brandt, and S. Glasauer (2009) Evaluation of a novel biologically inspired trajectory generator in human-robot interaction. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Vol. , pp. 639–644. Cited by: §III-E2, §V-C, §V-D, TABLE I.
  • [100] M. Huber, M. Rickert, A. Knoll, T. Brandt, and S. Glasauer (2008) Human-robot interaction in handing-over tasks. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Munich, Germany, pp. 107–112. External Links: Document, ISBN 9781424422135, ISSN 1944-9445 Cited by: §III-E2.
  • [101] T. Iberall (1997) Human prehension and dexterous robot hands. The International Journal of Robotics Research 16 (3), pp. 285–299. External Links: Document Cited by: §III-B.
  • [102] A. J. Ijspeert, J. Nakanishi, and S. Schaal (2002) Movement imitation with nonlinear dynamical systems in humanoid robots. In Proceedings 2002 IEEE International Conference on Robotics and Automation (ICRA), Vol. 2, pp. 1398–1403 vol.2. Cited by: §III-C, §III-E3.
  • [103] L. Jamone, E. Ugur, A. Cangelosi, L. Fadiga, A. Bernardino, J. Piater, and J. Santos-Victor (2016) Affordances in psychology, neuroscience, and robotics: a survey. IEEE Transactions on Cognitive and Developmental Systems 10 (1), pp. 4–25. Cited by: §III-B.
  • [104] D. Jirak, M. Menz, G. Buccino, A. Borghi, and F. Binkofski (2010) Grasping language - a short story on embodiment. Consciousness and Cognition 19, pp. 711–720. External Links: Document Cited by: footnote 1.
  • [105] R. S. Johansson and K. J. Cole (1992) Sensory-motor coordination during grasping and manipulative actions. Current Opinion in Neurobiology 2 (6), pp. 815 – 823. External Links: ISSN 0959-4388, Document Cited by: §III-B.
  • [106] R. Johansson, G. Westling, A. Bäckström, and J. Flanagan (2001) Eye–hand coordination in object manipulation. The Journal of Neuroscience 21, pp. 6917–6932. External Links: Document Cited by: §III-A.
  • [107] N. Kamakura, M. Matsuo, H. Ishii, F. Mitsuboshi, and Y. Miura (1980) Patterns of static prehension in normal hands. American Journal of Occupational Therapy 34 (7), pp. 437–445. External Links: Document Cited by: §III-B.
  • [108] M. Katayama and H. Hasuura (2003) Optimization principle determines human arm postures and "comfort". In SICE 2003 Annual Conference (IEEE Cat. No.03TH8734), Vol. 1, pp. 1000–1005. Cited by: §VI-A.
  • [109] J. Kim, J. Park, Y. K. Hwang, and M. Lee (2004) Three handover methods in esteem etiquettes using dual arms and hands of home-service robot. In International Conference on Autonomous Robots and Agents, Cited by: §III-B.
  • [110] G. Knoblich, S. Butterfill, and N. Sebanz (2011) Psychological research on joint action: theory and data. In Psychology of learning and motivation, Vol. 54, pp. 59–101. Cited by: §II.
  • [111] K. Koay, E. Sisbot, D. S. Syrdal, M. Walters, K. Dautenhahn, and R. Alami (2007) Exploratory study of a robot approaching a person in the context of handing over an object. pp. 18–24. Cited by: §III-E2.
  • [112] A. Koene, S. Endo, A. Remazeilles, M. Prada, and A. M. Wing (2014) Experimental testing of the CogLaboration prototype system for fluent human-robot object handover interactions. In IEEE International Symposium on Robot and Human Interactive Communication, Cited by: §I, §III-E3, §V-A, §V-C, §V-D, TABLE I, §VI-A.
  • [113] A. Koene, A. Remazeilles, M. Prada, A. Garzo, M. Puerto, S. Endo, and A. M. Wing (2014) Relative importance of spatial and temporal precision for user satisfaction in human-robot object handover interactions. In Third International Symposium on New Frontiers in Human-Robot Interaction, Cited by: §III-E3, §V-C, §V-D, TABLE I.
  • [114] M. Kokic, J. A. Stork, J. A. Haustein, and D. Kragic (2017) Affordance detection for task-specific grasping using deep learning. In IEEE-RAS International Conference on Humanoid Robotics (Humanoids), pp. 91–98. Cited by: §III-B.
  • [115] M. Kölsch, A. Beall, and M. Turk (2003-10) The postural comfort zone for reaching gestures. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 47. External Links: Document Cited by: §VI-A.
  • [116] J. Konstantinova, S. Krivic, A. Stilli, J. Piater, and K. Althoefer (2017) Autonomous object handover using wrist tactile information. In Towards Autonomous Robotic Systems, Y. Gao, S. Fallah, Y. Jin, and C. Lekakou (Eds.), pp. 450–463. Cited by: §IV-A, §V-A, §V-D, TABLE I.
  • [117] T. Kruse, A. K. Pandey, R. Alami, and A. Kirsch (2013) Human-aware robot navigation: a survey. Robotics and Autonomous Systems 61 (12), pp. 1726 – 1743. External Links: ISSN 0921-8890 Cited by: §III-E2.
  • [118] D. Kulic and E. Croft (2005) Safe planning for human-robot interaction. J. Field Robotics 22, pp. 383–396. External Links: Document Cited by: §III-E2.
  • [119] D. Kulić and E. Croft (2007-01) Physiological and subjective responses to articulated robot motion. Robotica 25 (1), pp. 13–27. External Links: ISSN 0263-5747, Document Cited by: §V-B, TABLE I.
  • [120] A. G. Kupcsik, D. Hsu, and W. S. Lee (2016) Learning dynamic robot-to-human object handover from human feedback. CoRR abs/1603.06390. External Links: 1603.06390 Cited by: §III-E3, §V-C, §V-D, TABLE I.
  • [121] C. T. Landi, Y. Cheng, F. Ferraguti, M. Bonfè, C. Secchi, and M. Tomizuka (2019) Prediction of human arm target for robot reaching movements. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. , pp. 5950–5957. Cited by: §III-C.
  • [122] C. T. Landi, F. Ferraguti, S. Costi, M. Bonfè, and C. Secchi (2019) Safety barrier functions for human-robot interaction with industrial manipulators. In European Control Conference (ECC), Vol. , pp. 2565–2570. Cited by: §III-E2.
  • [123] C. T. Landi, F. Ferraguti, L. Sabattini, C. Secchi, M. Bonfè, and C. Fantuzzi (2017) Variable admittance control preventing undesired oscillating behaviors in physical human-robot interaction. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. , pp. 3611–3616. Cited by: §III-E3.
  • [124] C. T. Landi, F. Ferraguti, L. Sabattini, C. Secchi, and C. Fantuzzi (2017) Admittance control parameter adaptation for physical human-robot interaction. 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 2911–2916. Cited by: §III-E3.
  • [125] J. Landsmeer (1962-06) Power grip and precision handling. Annals of the Rheumatic Diseases 2, pp. 164–170. External Links: Document Cited by: §III-B.
  • [126] M. Lang, S. Endo, O. Dunkley, and S. Hirche (2017) Object handover prediction using gaussian processes clustered with trajectory classification. CoRR abs/1707.02745. External Links: 1707.02745 Cited by: §III-C.
  • [127] J. Leitner, A. W. Tow, N. Sünderhauf, J. E. Dean, J. W. Durham, M. Cooper, M. Eich, C. Lehnert, R. Mangels, C. McCool, et al. (2017) The acrv picking benchmark: a robotic shelf picking benchmark to foster reproducible research. In IEEE International Conference on Robotics and Automation (ICRA), pp. 4705–4712. Cited by: §V-D.
  • [128] S. Lemaignan, M. Warnier, A. E. Sisbot, and R. Alami (2014) Human-robot interaction: tackling the AI challenges. Artificial Intelligence. Cited by: §VI-A.
  • [129] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen (2018) Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research 37 (4-5), pp. 421–436. External Links: Document, https://doi.org/10.1177/0278364917710318 Cited by: §VI-E.
  • [130] Z. Li and K. Hauser (2015) Predicting object transfer position and timing in human-robot handover tasks. Cited by: §III-C.
  • [131] H. Lin, J. Smith, K. K. Babarahmati, N. Dehio, and M. Mistry (2018) A projected inverse dynamics approach for multi-arm cartesian impedance control. In IEEE International Conference on Robotics and Automation (ICRA), Cited by: §I.
  • [132] E. Lopez-Damian, D. Sidobre, S. DeLaTour, and R. Alami (2006) Grasp planning for interactive object manipulation. In International Symposium on Robotics and Automation, Cited by: §III-B.
  • [133] T. Lorenz, A. Mörtl, and S. Hirche (2013) Movement synchronization fails during non-adaptive human-robot interaction. In ACM/IEEE International Conference on Human-Robot Interaction (HRI), Cited by: §II.
  • [134] J. Lukos, C. Ansuini, and M. Santello (2007) Choice of contact points during multidigit grasping: effect of predictability of object center of mass location. Journal of Neuroscience 27 (14), pp. 3894–3903. External Links: Document, ISSN 0270-6474 Cited by: §III-B.
  • [135] R. Luo, R. Hayne, and D. Berenson (2017-07) Unsupervised early prediction of human reaching for human–robot collaboration in shared workspaces. Autonomous Robots, pp. . External Links: Document Cited by: §III-C.
  • [136] C. L. MacKenzie and T. Iberall (1994) The grasping hand. Cited by: §II.
  • [137] G. Maeda, G. Neumann, M. Ewerton, R. Lioutikov, O. Kroemer, and J. Peters (2016-03) Probabilistic movement primitives for coordination of multiple human–robot collaborative tasks. Autonomous Robots 41. External Links: Document Cited by: §III-E3, §V-A, §V-A, §V-D, TABLE I.
  • [138] Y. Maeda, T. Hara, and T. Arai (2001) Human-robot cooperative manipulation with motion estimation. In Proceedings 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems. Expanding the Societal Role of Robotics in the the Next Millennium (Cat. No.01CH37180), Vol. 4, pp. 2240–2245 vol.4. Cited by: §III-C.
  • [139] E. Magrini, F. Flacco, and A. De Luca (2014) Estimation of contact forces using a virtual force sensor. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. , pp. 2126–2133. Cited by: §III-E3.
  • [140] E. Magrini, F. Flacco, and A. De Luca (2015) Control of generalized contact motion and force in physical human-robot interaction. In 2015 IEEE International Conference on Robotics and Automation (ICRA), Vol. , pp. 2298–2304. Cited by: §III-E3.
  • [141] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg (2017) Dex-net 2.0: deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. CoRR abs/1703.09312. Cited by: §III-B, §V-D.
  • [142] J. Mahler, R. Platt, A. Rodriguez, M. Ciocarlie, A. Dollar, R. Detry, M. A. Roa, H. Yanco, A. Norton, J. Falco, et al. (2018) Guest editorial open discussion of robot grasping benchmarks, protocols, and metrics. IEEE Transactions on Automation Science and Engineering 15 (4), pp. 1440–1442. Cited by: §V-D.
  • [143] J. Mahler, R. Platt, A. Rodriguez, M. Ciocarlie, A. Dollar, R. Detry, M. A. Roa, H. Yanco, A. Norton, J. Falco, K. van Wyk, E. Messina, J. ’. Leitner, D. Morrison, M. Mason, O. Brock, L. Odhner, A. Kurenkov, M. Matl, and K. Goldberg (2018) Guest editorial open discussion of robot grasping benchmarks, protocols, and metrics. IEEE Transactions on Automation Science and Engineering 15 (4), pp. 1440–1442. External Links: Document Cited by: §III-B.
  • [144] J. Mainprice, M. Gharbi, T. Simeon, and R. Alami (2012) Sharing effort in planning human-robot handover tasks. In 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 764–770. External Links: Document, ISBN 978-1-4673-4606-1 Cited by: §III-D, §V-A, TABLE I.
  • [145] L. Marin, J. Issartel, and T. Chaminade (2009) Interpersonal motor coordination: from human-human to human-robot interactions. Interaction Studies 10, pp. 479–504. External Links: Document Cited by: §II.
  • [146] L. F. Marin-Urias, E. A. Sisbot, and R. Alami (2008) Geometric tools for perspective taking for human–robot interaction. In Mexican International Conference on Artificial Intelligence, Vol. , pp. 243–249. Cited by: §III-E2.
  • [147] N. Marturi, M. Kopicki, A. Rastegarpanah, V. Rajasekaran, M. Adjigble, R. Stolkin, A. Leonardis, and Y. Bekiroglu (2019-06) Dynamic grasp and trajectory planning for moving objects. Autonomous Robots 43 (5), pp. 1241–1256. External Links: ISSN 0929-5593 Cited by: §III-E3.
  • [148] A. H. Mason and C. L. MacKenzie (2005) Grip forces when passing an object to a partner. Experimental Brain Research. External Links: Document, ISSN 0014-4819 Cited by: §II, §II, §IV-A.
  • [149] J. Masumoto and N. Inui (2014) Effects of speech on both complementary and synchronous strategies in joint action. Experimental Brain Research 232. External Links: Document Cited by: §III-A.
  • [150] J. R. Medina, F. Duvallet, M. Karnam, and A. Billard (2016) A human-inspired controller for fluid human-robot handovers. In 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Vol. , pp. 324–331. Cited by: §III-E3, §III-E, §V-A, §V-D, TABLE I.
  • [151] M. Meyer, R. P. van der Wel, and S. Hunnius (2013) Higher-order action planning for individual and joint object manipulations. Experimental Brain Research 225 (4), pp. 579–588. Cited by: §II, §III-B, §VI-E.
  • [152] V. Micelli, K. Strabala, and S. S. Srinivasa (2011) Perception and control challenges for effective human-robot handoffs. In Robotics: Science and Systems Workshop on RGB-D Cameras. Cited by: §III-E3, §V-A, §V-A, §V-C, §V-D, TABLE I.
  • [153] E. Mielke, E. Townsend, D. Wingate, and M. D. Killpack (2020) Human-robot co-manipulation of extended objects: data-driven models and control from analysis of human-human dyads. arXiv preprint arXiv:2001.00991. Cited by: §I.
  • [154] L. Montesano, M. Lopes, A. Bernardino, and J. Santos-Victor (2008) Learning object affordances: from sensory-motor coordination to imitation. IEEE Transactions on Robotics 24 (1), pp. 15–26. External Links: Document Cited by: §III-B.
  • [155] A. Moon, D. M. Troniak, B. Gleeson, M. K. X. J. Pan, M. Zeng, B. A. Blumer, K. MacLean, and E. A. Croft (2014) Meet me where I'm gazing: how shared attention gaze affects human-robot handover timing. In ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 334–341. Cited by: §III-A, §V-A, §V-C, §V-D, TABLE I.
  • [156] D. Morrison, P. Corke, and J. Leitner (2019) Learning robust, real-time, reactive robotic grasping. The International Journal of Robotics Research (IJRR). External Links: Document Cited by: §VI-E.
  • [157] D. Morrison, P. Corke, and J. Leitner (2020) EGAD! an evolved grasping analysis dataset for diversity and reproducibility in robotic manipulation. arXiv preprint arXiv:2003.01314. Cited by: §V-D.
  • [158] R. R. Murphy and D. Schreckenghost (2013) Survey of metrics for human-robot interaction. In ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 197–198. Cited by: §V.
  • [159] B. Mutlu (2009) Designing gaze behavior for humanlike robots. Ph.D. Thesis. Cited by: §III-A.
  • [160] J. R. Napier (1956) The prehensile movements of the human hand. The Journal of Bone and Joint Surgery. British volume 38 (4), pp. 902–913. External Links: Document Cited by: §III-B.
  • [161] F. Negrello, W. Friedl, G. Grioli, M. Garabini, O. Brock, A. Bicchi, M. A. Roa, and M. G. Catalano (2020) Benchmarking hand and grasp resilience to dynamic loads. IEEE Robotics and Automation Letters 5 (2), pp. 1780–1787. Cited by: §V-D.
  • [162] H. Nemlekar, D. Dutia, and Z. Li (2019) Object transfer point estimation for fluent human-robot handovers. In 2019 International Conference on Robotics and Automation (ICRA), pp. 2627–2633. Cited by: §III-D.
  • [163] P. Neranon (2018) Robot-to-human object handover using a behavioural control strategy. In 2018 IEEE 5th International Conference on Smart Instrumentation, Measurement and Application (ICSIMA), Vol. , pp. 1–6. Cited by: §IV-B.
  • [164] P. Neranon (2019) Human-to-robot object handover using a behavioural position-based force control approach. In 2019 First International Symposium on Instrumentation, Control, Artificial Intelligence, and Robotics (ICA-SYMP), Vol. , pp. 5–8. Cited by: §IV-B.
  • [165] R. Newbury, K. He, A. Cosgun, and T. Drummond (2020) Learning to place objects onto flat surfaces in human-preferred orientations. arXiv preprint arXiv:2004.00249. Cited by: §VI-D.
  • [166] A. Nguyen, D. Kanoulas, D. G. Caldwell, and N. G. Tsagarakis (2016) Detecting object affordances with convolutional neural networks. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2765–2770. Cited by: §III-B.
  • [167] C. W. Nielsen, D. J. Bruemmer, D. A. Few, and D. I. Gertman (2008) Framing and evaluating human-robot interactions. In proceedings of the Workshop on Metrics for Human-Robot Interaction, pp. 29–36. Cited by: §V.
  • [168] D. Norman (1988) The Psychology of Everyday Things. Basic Books. Cited by: §III-B.
  • [169] D. R. Olsen and M. A. Goodrich (2003) Metrics for evaluating human-robot interactions. In Proceedings of PERMIS, Vol. 2003, pp. 4. Cited by: §V-A.
  • [170] V. Ortenzi, M. Controzzi, F. Cini, J. Leitner, M. Bianchi, M. A. Roa, and P. Corke (2019) Robotic manipulation and the role of the task in the metric of success. Nature Machine Intelligence 1 (8), pp. 340–346. External Links: Document Cited by: §III-B, §VI-E.
  • [171] V. Ortenzi, N. Marturi, V. Rajasekaran, M. Adjigble, and R. Stolkin (2019) Singularity-robust inverse kinematics solver for tele-manipulation. In 2019 IEEE 15th International Conference on Automation Science and Engineering (CASE), Vol. , pp. 1821–1828. Cited by: §III-E1.
  • [172] V. Ortenzi, R. Stolkin, J. Kuo, and M. Mistry (2017) Hybrid motion/force control: a review. Advanced Robotics 31 (19-20), pp. 1102–1113. Cited by: §III-E3.
  • [173] F. Osiurak, Y. Rossetti, and A. Badets (2017) What is an affordance? 40 years later. Neuroscience & Biobehavioral Reviews 77, pp. 403–417. External Links: Document Cited by: §III-B.
  • [174] E. H. Østergaard (2017) The role of cobots in industry 4.0. Universal Robots. Note: https://info.universal-robots.com/hubfs/Enablers/Whitepapers/Theroleofcobotsinindustry.pdf (Accessed: May 25, 2020) Cited by: §I.
  • [175] C. Ott, R. Mukherjee, and Y. Nakamura (2015) A hybrid system framework for unified impedance and admittance control. Journal of Intelligent and Robotic Systems 78, pp. 359–375. Cited by: §III-E3.
  • [176] M. K. X. J. Pan, E. A. Croft, and G. Niemeyer (2018) Exploration of geometry and forces occurring within human-to-robot handovers. In 2018 IEEE Haptics Symposium (HAPTICS), Vol. , pp. 327–333. Cited by: §III-A, §IV-A, §V-A, §V-C, §V-D, TABLE I.
  • [177] M. K. Pan, V. Skjervøy, W. P. Chan, M. Inaba, and E. A. Croft (2017) Automated detection of handovers using kinematic features. The International Journal of Robotics Research 36 (5-7), pp. 721–738. Cited by: §III-A.
  • [178] A. K. Pandey and R. Alami (2014) Towards human-level semantics understanding of human-centered object manipulation tasks for HRI: reasoning about effect, ability, effort and perspective taking. International Journal of Social Robotics. Cited by: §I, §VI-E.
  • [179] S. Parastegari, B. Abbasi, E. Noohi, and M. Zefran (2017) Modeling human reaching phase in human-human object handover with application in robot-human handover. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, pp. 3597–3602. External Links: Document, ISBN 978-1-5386-2682-5 Cited by: §III-D.
  • [180] S. Parastegari, E. Noohi, B. Abbasi, and M. Zefran (2016) A fail-safe object handover controller. In 2016 IEEE International Conference on Robotics and Automation (ICRA), External Links: Document, ISBN 978-1-4673-8026-3 Cited by: §IV-B, §V-A, §V-C, §V-D, TABLE I.
  • [181] S. Parastegari, E. Noohi, B. Abbasi, and M. Zefran (2018-06) Failure recovery in robot–human object handover. IEEE Transactions on Robotics 34 (3), pp. 660–673. External Links: Document, ISSN 1552-3098 Cited by: §IV-B.
  • [182] L. Peternel, W. Kim, J. Babič, and A. Ajoudani (2017) Towards ergonomic control of human-robot co-manipulation and handover. In 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), Vol. , pp. 55–60. Cited by: §III-D, §V-B, TABLE I.
  • [183] L. Peternel, C. Fang, N. Tsagarakis, and A. Ajoudani (2019-08) A selective muscle fatigue management approach to ergonomic human-robot co-manipulation. Robotics and Computer-Integrated Manufacturing 58, pp. 69–79. External Links: Document Cited by: §III-D, §VI-A.
  • [184] G. Pezzulo, F. Donnarumma, and H. Dindo (2013) Human sensorimotor communication: a theory of signaling in online social interactions. PloS one. External Links: Document Cited by: §III-A, §III-A.
  • [185] P. Pina, M. Cummings, J. Crandall, and M. Della Penna (2008) Identifying generalizable metric classes to evaluate human-robot teams. In Workshop on Metrics for Human-Robot Interaction, 3rd Ann. Conf. Human-Robot Interaction, pp. 13–20. Cited by: §V.
  • [186] M. Prada, A. Remazeilles, A. Koene, and S. Endo (2014-09) Implementation and experimental validation of dynamic movement primitives for object handover. In IEEE International Conference on Intelligent Robots and Systems. External Links: Document Cited by: §III-E3, §V-A, §V-C, §V-D, TABLE I.
  • [187] S. Puhlmann, F. Heinemann, O. Brock, and M. Maertens A compact representation of human single-object grasping. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Cited by: §III-B.
  • [188] L. Punnett and D. Wegman (2004-03) Work-related musculoskeletal disorders: the epidemiologic evidence and the debate. Journal of Electromyography and Kinesiology 14, pp. 13–23. External Links: Document Cited by: §III-D.
  • [189] A. H. Quispe, H. Ben Amor, and M. Stilman (2014) Handover planning for every occasion. In IEEE-RAS International Conference on Humanoid Robots, Vol. , pp. 431–436. Cited by: §III-E3.
  • [190] J. R. Flanagan, M. C. Bowman, and R. S. Johansson (2006) Control strategies in object manipulation tasks. Current Opinion in Neurobiology 16, pp. 650–659. External Links: Document Cited by: §III-B.
  • [191] M. Ray and T. N. Welsh (2011) Response selection during a joint action task. Journal of Motor Behavior 43 (4), pp. 329–332. Cited by: §II.
  • [192] M. Ray and T. N. Welsh (2018) Multiple frames of reference are used during the selection and planning of a sequential joint action. Frontiers in Psychology. Cited by: §II.
  • [193] M. Richardson, K. Marsh, R. Isenhower, J. Goodman, and R. Schmidt (2008) Rocking together: dynamics of intentional and unintentional interpersonal coordination. Human Movement Science 26, pp. 867–891. External Links: Document Cited by: §II.
  • [194] G. Rizzolatti and L. Craighero (2004) The mirror-neuron system. Annual Review of Neuroscience 27, pp. 169–192. External Links: Document Cited by: §II.
  • [195] M. A. Roa and R. Suarez (2009) Computation of independent contact regions for grasping 3-d objects. IEEE Transactions on Robotics 25 (4), pp. 839–850. Cited by: §VI-E.
  • [196] D. A. Rosenbaum, K. M. Chapman, M. Weigelt, D. J. Weiss, and R. van der Wel (2012) Cognition, action, and object manipulation. Psychological Bulletin 138 (5), pp. 924 – 946. External Links: Document Cited by: §III-B, §VI-E.
  • [197] P. Rosenberger, A. Cosgun, R. Newbury, J. Kwan, V. Ortenzi, P. Corke, and M. Grafinger (2020) Object-independent human-to-robot handovers using real time robotic vision. arXiv preprint arXiv:2006.01797. Cited by: §III-C, §V-A, §V-D, TABLE I, §VI-D, §VI-G.
  • [198] L. M. Sacheli, E. Arcangeli, and E. Paulesu (2018) Evidence for a dyadic motor plan in joint action. Scientific Reports. External Links: ISSN 2045-2322 Cited by: §II, §II.
  • [199] A. D. Santis, B. Siciliano, A. D. Luca, and A. Bicchi (2008) An atlas of physical human–robot interaction. Mechanism and Machine Theory 43 (3), pp. 253 – 270. External Links: ISSN 0094-114X, Document Cited by: §III-E2.
  • [200] L. Sartori, E. Straulino, and U. Castiello (2011) How objects are grasped: the interplay between affordances and end-goals. PloS one 6, pp. e25203. External Links: Document Cited by: §III-B.
  • [201] S. Schaal, J. Peters, J. Nakanishi, and A. Ijspeert (2005) Learning movement primitives. In Robotics research. the eleventh international symposium, pp. 561–572. Cited by: §III-C, §III-E3.
  • [202] R. C. Schmidt (2007) Scaffolds for social meaning. Ecological Psychology 19 (2), pp. 137–151. External Links: Document Cited by: §III-B.
  • [203] N. Sebanz, H. Bekkering, and G. Knoblich (2006) Joint action: bodies and minds moving together. Trends in Cognitive Sciences. Cited by: §II.
  • [204] N. Sebanz and G. Knoblich (2009) Prediction in joint action: what, when, and where. Topics in Cognitive Science 1, pp. 353 – 367. External Links: Document Cited by: §II.
  • [205] C. Shi, M. Shiomi, C. Smith, T. Kanda, and H. Ishiguro (2013) A model of distributional handing interaction for a mobile robot. In Robotics: Science and Systems, Cited by: §I, §V-A, §V-D, TABLE I.
  • [206] E. A. Sisbot, L. F. Marin, and R. Alami (2007) Spatial reasoning for human robot interaction. In 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. , pp. 2281–2287. Cited by: §III-E2.
  • [207] E. A. Sisbot, L. F. Marin-Urias, X. Broquere, D. Sidobre, and R. Alami (2010) Synthesizing robot motions adapted to human presence. International Journal of Social Robotics 2 (3), pp. 329–343. Cited by: §III-E2, §V-D, TABLE I.
  • [208] H. O. Song, M. Fritz, D. Goehring, and T. Darrell (2016) Learning to detect visual grasp affordance. IEEE Transactions on Automation Science and Engineering 13 (2), pp. 798–809. Cited by: §III-B.
  • [209] M. Staudte and M. Crocker (2008) The utility of gaze in spoken human-robot interaction. In Proceedings of Metrics for Human-Robot Interaction: A Workshop at the Third ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 53–59. Cited by: §III-A.
  • [210] A. Steinfeld, T. Fong, D. Kaber, M. Lewis, J. Scholtz, A. Schultz, and M. Goodrich (2006) Common metrics for human-robot interaction. In ACM SIGCHI/SIGART Conference on Human-Robot Interaction, pp. 33–40. Cited by: §V.
  • [211] K. Strabala, M. K. Lee, A. Dragan, J. Forlizzi, and S. S. Srinivasa (2012) Learning the communication of intent prior to physical collaboration. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), External Links: ISBN 9781467346054, ISSN 1944-9445 Cited by: §III-A.
  • [212] K. Strabala, M. K. Lee, A. Dragan, J. Forlizzi, and S. S. Srinivasa (2013) Towards seamless human-robot handovers. Journal of Human-Robot Interaction 2 (1), pp. 112–132. External Links: Document Cited by: §III-A, §V-A, §V-A, §V-C.
  • [213] Strategic research agenda for robotics in Europe 2014–2020. Note: https://www.eu-robotics.net/sparc/about/roadmap/index.html (Accessed: May 25, 2020) Cited by: §I.
  • [214] H. B. Suay and E. A. Sisbot (2015) A position generation algorithm utilizing a biomechanical model for robot-human object handover. In 2015 IEEE International Conference on Robotics and Automation (ICRA), Vol. , pp. 3776–3781. Cited by: §III-D, §V-B, §V-D, TABLE I.
  • [215] M. Tavakoli, J. Carriere, and A. Torabi (2020) Robotics, smart wearable technologies, and autonomous intelligent systems for healthcare during the COVID-19 pandemic: an analysis of the state of the art and future vision. Advanced Intelligent Systems. Cited by: §I.
  • [216] S. Thill, D. Caligiore, A. M. Borghi, T. Ziemke, and G. Baldassarre (2013) Theories and computational models of affordance and mirror systems: an integrative review. Neuroscience and Biobehavioral Reviews 37 (3), pp. 491 – 521. External Links: ISSN 0149-7634, Document Cited by: §III-B.
  • [217] A. Thomaz, G. Hoffman, and M. Cakmak (2016) Computational human-robot interaction. Foundations and Trends in Robotics. Cited by: §I.
  • [218] E. Tomeo, P. Cesari, S. Aglioti, and C. Urgesi (2012) Fooling the kickers but not the goalkeepers: behavioral and neurophysiological correlates of fake action detection in soccer. Cerebral Cortex 23. External Links: Document Cited by: §III-A.
  • [219] UNESCO science report. Note: https://en.unesco.org/news/japan-pushing-ahead-society-50-overcome-chronic-social-challenges (Accessed: May 21, 2020) Cited by: §I.
  • [220] N. Vahrenkamp, H. Arnst, M. Wächter, D. Schiebener, P. Sotiropoulos, M. Kowalik, and T. Asfour (2016) Workspace analysis for planning human-robot interaction tasks. In 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Vol. , pp. 1298–1303. Cited by: §III-D, TABLE I.
  • [221] T. van Oosterhout and A. Visser (2008) A visual method for robot proxemics measurements. In Proceedings of Metrics for Human-Robot Interaction: A Workshop at the Third ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 61–68. Cited by: §V.
  • [222] C. Vesper, S. Butterfill, G. Knoblich, and N. Sebanz (2010) A minimal architecture for joint action. Neural Networks. Cited by: §II, §III-A.
  • [223] V. Villani, L. Sabattini, C. Secchi, and C. Fantuzzi (2018) A framework for affect-based natural human-robot interaction. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Vol. , pp. 1038–1044. Cited by: §V-B.
  • [224] J. Waldhart, M. Gharbi, and R. Alami (2015) Planning handovers involving humans and robots in constrained environment. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. , pp. 6473–6478. Cited by: §III-D.
  • [225] M. Walters, K. Dautenhahn, S. Woods, and K. Koay (2007) Robotic etiquette: results from user studies involving a fetch and carry task. pp. 317–324. External Links: Document Cited by: §III-E2.
  • [226] D. Widmann and Y. Karayiannidis (2018) Human motion prediction in human-robot handovers based on dynamic movement primitives. In 2018 European Control Conference (ECC), Vol. , pp. 2781–2787. Cited by: §III-C.
  • [227] R. Willems and P. Hagoort (2007) Neural evidence for the interplay between language, gesture, and action: a review. Brain and language 101, pp. 278–89. External Links: Document Cited by: footnote 1.
  • [228] R. Withagen, H. De Poel, D. Araujo, and G. Pepping (2012) Affordances can invite behavior: reconsidering the relationship between affordances and agency. New Ideas in Psychology 30, pp. 250–258. External Links: Document Cited by: §III-B.
  • [229] D. M. Wolpert and Z. Ghahramani (2000) Computational principles of movement neuroscience. Nature Neuroscience. External Links: Document, NIHMS150003, ISBN 10976256, ISSN 1097-6256 Cited by: §II.
  • [230] K. Yamane, M. Revfi, and T. Asfour (2013) Synthesizing object receiving motions of humanoid robots with human motion database. In 2013 IEEE International Conference on Robotics and Automation, Vol. , pp. 1629–1636. Cited by: §III-E, §V-D, TABLE I.
  • [231] G. Yang, B. J. Nelson, R. R. Murphy, H. Choset, H. Christensen, S. H. Collins, P. Dario, K. Goldberg, K. Ikuta, N. Jacobstein, D. Kragic, R. H. Taylor, and M. McNutt (2020) Combating COVID-19-the role of robotics in managing public health and infectious diseases. Science Robotics. Cited by: §I.
  • [232] W. Yang, C. Paxton, M. Cakmak, and D. Fox (2020) Human grasp classification for reactive human-to-robot handovers. arXiv preprint arXiv:2003.06000. Cited by: §III-C, §V-A, §V-A, §V-C, §V-D, TABLE I, §VI-D.
  • [233] Y. Gu, A. Thobbi, and W. Sheng (2011) Human-robot collaborative manipulation through imitation and reinforcement learning. In 2011 IEEE International Conference on Information and Automation, Cited by: §I.
  • [234] A. M. Zanchettin, N. M. Ceriani, P. Rocco, H. Ding, and B. Matthias (2016) Safety in human-robot collaborative manufacturing environments: metrics and control. IEEE Transactions on Automation Science and Engineering 13 (2), pp. 882–893. Cited by: §III-E2.
  • [235] C. Zimmermann, D. Ceylan, J. Yang, B. Russell, M. Argus, and T. Brox (2019) FreiHAND: a dataset for markerless capture of hand pose and shape from single rgb images. In IEEE International Conference on Computer Vision (ICCV), Cited by: §III-C.