On the manipulation of articulated objects in human-robot cooperation scenarios

01/05/2018
by   Alessio Capitanelli, et al.
Università di Genova

Articulated and flexible objects constitute a challenge for robot manipulation tasks but are present in different real-world settings, including home and industrial environments. Current approaches to the manipulation of articulated and flexible objects employ ad hoc strategies to sequence and perform actions on them depending on a number of physical or geometrical characteristics related to those objects, as well as on an a priori classification of target object configurations. In this paper, we propose an action planning and execution framework, which (i) considers abstract representations of articulated or flexible objects, (ii) integrates action planning to reason upon such configurations and to sequence an appropriate set of actions with the aim of obtaining a target configuration provided as a goal, and (iii) is able to cooperate with humans to collaboratively carry out the plan. On the one hand, we show that a trade-off exists between the way articulated or flexible objects are perceived and how the system represents them. Such a trade-off greatly impacts on the complexity of the planning process. On the other hand, we demonstrate the system's capabilities in allowing humans to interrupt robot action execution, and - in general - to contribute to the whole manipulation process. Results related to planning performance are discussed, and examples of a Baxter dual-arm manipulator performing actions collaboratively with humans are shown.


1 Introduction

The introduction of the Industry 4.0 paradigm is expected to redefine the nature of shop-floor environments along many directions, including the role played by robots in the manufacturing process Krugeretal2009 ; Heyer2010 . One of the main tenets considered in Industry 4.0 is the increased customer satisfaction via a high degree of product personalization and just-in-time delivery. On the one hand, a higher level of flexibility in manufacturing processes is needed to cope with such diversified demands, especially in low-automation tasks. On the other hand, skilful robots working alongside humans can be regarded as a valuable aid to shop-floor operators, who can supervise robots’ work and intervene when needed Tsarouchietal2016 , whereas robots can be tasked with difficult or otherwise stressful operations.

Human-robot cooperation (HRC) processes in shop-floor environments are a specific form of human-robot interaction (HRI) with at least two important specificities. The first is related to the fact that the cooperation is targeted towards a well-defined objective (e.g., an assemblage, a unit test, a cable harnessing operation), which must be typically achieved in a short amount of time. The second has to do with the fact that humans need to feel (at least partially) in control Baragliaetal2016 ; Munzeretal2017 : although grounded in the cooperation process, their behaviours could be unpredictable in specific cases, with obvious concerns about their safety Deneietal2015 ; HaddadinCroft2016 ; they may not fully understand robot goals Chakrabortietal2017 ; robot actions may not be considered appropriate for the peculiar cooperation objectives GoodrichSchultz2007 ; Munzeretal2017 .

As far as the cooperation process is concerned, two high-level directives must be taken into account:

  1. cooperation models (and robot action planning techniques) enforcing the prescribed objectives must be adopted JohannsmeierHaddadin2017 ; Darvishetal2017 ;

  2. the robot must be flexible enough to adapt to human operator actions avoiding a purely reactive approach Dautenhahn2007 ; Prewettetal2010 , and to make its intentions clear ClairMataric2015 ; Ronconeetal2017 .

These two directives lead to three functional requirements for a HRC architecture. The robot must be able to:

  1. (at least implicitly) recognize the effects of human operator actions LiuWang2017 ;

  2. adapt its behaviour on the basis of two elements: human operator actions themselves and the whole cooperation objectives;

  3. employ planning techniques allowing for a fast action re-planning when needed, e.g., when planned actions cannot be executed for sudden changes in the environment or inaccurate modelling assumptions Srivastavaetal2014 .

Figure 1: Two examples of a cable harnessing operation.

Among the various tasks typically carried out in the shop-floor, the manipulation of flexible or articulated objects, e.g., cable harnessing operations, is particularly challenging HenrichWorn2000 ; SaadatNan2002 ; Smithetal2012 ; Jimenez2012 , as can be seen in Figure 1: on the one hand, it is usually beneficial to accurately plan the expected cable configurations on the harnessing table in advance, thus confirming the planning requirement above; on the other hand, it is often necessary to keep a cable firm using more than two grasping points and to re-route the wiring pattern, which – when done collaboratively with a robot, for instance to place bundle retainers or junction fixtures – leads to the recognition and adaptation requirements above.

In the literature, the problem of determining the 2D or 3D configuration of flexible or articulated objects has received much attention in the past few years Wakamatsuetal2006 ; Nairetal2017 , whereas the problem of obtaining a target configuration via manipulation has been explored in motion planning Yamakawaetal2013 ; Bodenhagen2014 ; Schulmanetal2016 . However, in the context of HRC, perception and manipulation are only part of the challenges to address. Conceptually speaking, the outcome of such approaches is a continuous mapping in 2D or 3D space from an initial to a target object’s configuration Milleretal2011 ; Berenson2013 ; Bodenhagen2014 ; Ahmadzadehetal2015 , subject to a number of simplifying hypotheses as far as object models are concerned HowardBekey1997 ; Mastrogiovannietal2004 ; Franketal2010 ; Elbrechter2011 ; Elbrechter2012 . This observation leads to two further functional requirements. The robot must be able to:

  1. represent object configurations adopting suitable modelling assumptions, and then segment the whole manipulation problem into simpler actions to appropriately sequence and monitor, each action operating between two intermediate configurations;

  2. represent the actions to perform using a formalism allowing for plan executions that are robust with respect to unexpected events (e.g., the human operator suddenly intervenes), and modelling errors (e.g., not modelled objects to be removed from the workspace).

In this paper, we consider articulated objects as suitable models for flexible objects Yamakawaetal2013 , and we address the following challenges: (i) we provide two representation and planning models for the classification of articulated object configurations and the sequencing of manipulation actions, using an OWL-DL ontology-based formalism and the Planning Domain Definition Language (PDDL) FoxLong2003 , and we test them using two state-of-the-art PDDL planners, namely Probe LipovetzkyGeffner2011 and Madagascar Rintanen2014 , as well as with the VAL plan validator Foxetal2005 ; (ii) we embed such models in a reactive/deliberative architecture for HRC, referred to as planHRC, which takes human operator behaviours into account and is implemented on top of the ROSPlan Cashmoreetal2015 and MoveIt! moveit frameworks; and (iii) we discuss how perception assumptions and representation schemes impact on planning and execution in HRC scenarios. The planHRC architecture has been deployed on a dual-arm Baxter manipulator, which is used in all of our experiments.

The paper is organised as follows. Section 2 discusses relevant approaches for the work described here. Section 2.2 introduces more formally the problem we address, as well as the scenario we consider. The planHRC’s architecture is described in detail in Section 3, where the overall information flow, the representation and reasoning challenges, and the planning models are discussed. Experiments to validate the architecture are described in Section 4. Conclusions follow.

2 Background

2.1 Planning Techniques in Human-Robot Cooperation

A number of studies have been conducted to investigate the role and the acceptability of automated planning techniques in HRC scenarios. As highlighted in a field study by Gombolay and colleagues, two factors are important to maximise human satisfaction in HRC Gombolayetal2013 : on the one hand, humans must be allowed to choose their own tasks freely, i.e., without them being assigned by an algorithm, subject to the fact that the cooperation is successful; on the other hand, the overall system's (i.e., the human-robot team's) performance must be at high standards. It is noteworthy that these two factors may conflict in case of a lazy or unfocused human attitude. However, when required to trade-off between them, humans show a strong preference for the system's performance over their own freedom. This study fits well with the requirements outlined above, and opens up the idea of a collaborative robot as a device not only able to aid human workers, but also capable of keeping them focused and steering the cooperation towards its objectives if deviations occur.

As a follow-up of the work discussed in Gombolayetal2013 , a study about the actual amount of control a human worker would like to have when collaborating with a robot has been reported in Gombolayetal2014 . The main finding of this study is that human workers tend not to prefer a total control of the cooperation process, rather they opt for partial control. This is confirmed by the fact that the overall team’s performance seems higher when the robot determines what actions must be carried out by the human. As a consequence, a key factor for the acceptance of collaborative robots is finding a sensible – yet efficient – trade-off between performance and human control.

In order to determine such trade-off, which may depend on the peculiar emotional or physical status of the human worker, one possibility is to encode in the planning process her/his preferences as far as tasks and operations are concerned Gombolayetal2015 . In a first series of experiments, the use of human preferences in the planning algorithm led to an overall decrease in performance, correlated with human subjective perceptions of robots not in line with the main cooperation objectives. This suggests that a subjective assessment of the HRC process tends to attribute major inefficiencies to robots, and confirms that this is a crucial aspect for the applicability of collaborative robots in industrial scenarios.

Techniques for HRC available in the literature target these issues only to a partial extent, positioning themselves at different levels in the trade-off scale outlined above. It is possible to identify two relevant trends for our work.

Approaches in the first class aim at defining cooperation models, i.e., data structures modelling the task to be jointly carried out, while keeping flexibility and human preferences into account Cirilloetal2010 ; Wilcoxetal2012 ; Tsarouchietal2016 ; JohannsmeierHaddadin2017 ; Ronconeetal2017 ; Sebastianietal2017 ; Darvishetal2017 .

A probabilistic planner is used in Cirilloetal2010 to sequence already defined partial plans, which include preferences about human preferred actions. Once determined, the sequence of partial plans cannot be changed, therefore no flexibility for the human is allowed. Such a limitation is partially overcome by the approach described in Wilcoxetal2012 , where an algorithm to adapt on-line both the action sequence and a number of action parameters is described. This is achieved using a temporal formulation making use of preferences among actions, and using optimization techniques to identify the sequence best coping with preferences and constraints. Still, the algorithm weighs plan optimality (in terms of a reduced number of actions, or the time to complete the plan) more, and uses human preferences as soft constraints. The approach by Tsarouchi and colleagues Tsarouchietal2016 assumes that a human and a robot co-worker operate in different workspaces. The focus is on allocating tasks to the human or the robot depending on their preferences, suitability and availability, and the cooperation model is represented using an AND/OR graph. Although human preferences are taken into account, task allocation is a priori fixed and cannot be changed at run-time. A similar approach is considered in JohannsmeierHaddadin2017 , where the assumption about the separate workspaces is relaxed. Hierarchical Task Models (HTMs) are used in Ronconeetal2017 , where the robot is given control on task allocation and execution is modelled using Partially Observable Markov Decision Processes (POMDPs). However, the focus of this approach is on robot communication actions to enhance trust in the human counterpart and to share a mutual understanding about the cooperation objectives. A similar approach is adopted in Sebastianietal2017 , where HTMs are substituted by Hierarchical Agent-based Task Planners (HATPs) and POMDPs are replaced by Petri Network Plans (PNPs). However, differently from the approach in Ronconeetal2017 , the work by Sebastiani and colleagues supports on-line changes during plan execution. Finally, the work by Darvish and colleagues represents cooperation models using AND/OR graphs, and allows for a switch among different cooperation sequences at runtime Darvishetal2017 , therefore allowing humans to redefine the sequence of tasks among a predefined set of choices. The human does not have to explicitly signal the switch to the robot, whereas the robot adapts to the new cooperation context reactively.

The second class includes techniques focused on understanding, anticipating or learning human behaviours on-line Agostinietal2011 ; Koppulaetal2013 ; Karpasetal2015 ; LiuFisac2016 ; KwonSuh2014 ; CaccavaleFinzi2017 .

The work by Agostini and colleagues adopts classical planning approaches to determine an appropriate sequence of actions, given a model of the cooperation defined as a domain and a specific problem to solve Agostinietal2011 . At runtime, the system ranks a predefined series of cause-effect events, e.g., observing their frequency as outcomes of human activities, and updates the cooperation model accordingly. Markov Decision Processes (MDPs) are used in Koppulaetal2013 to model the cooperation. In particular, the human and the robot are part of a Markov decision game, and must cooperatively conclude the game, i.e., carrying out the cooperation process. Human actions are detected on-line, which influences the robot's behaviour at run-time. A similar approach, which takes into account temporal constraints among tasks, is discussed in Karpasetal2015 . Statistical techniques to recognize human actions and to adapt an already available plan accordingly are presented in LiuFisac2016 . Human deviations from the plan are detected. When this happens, re-planning (including task allocation) occurs to achieve the cooperation objectives. While the approaches discussed so far are quite conservative as far as robot's autonomy in the cooperation process is concerned, the work discussed in KwonSuh2014 exploits Bayesian networks to predict the occurrence and the timing of human actions. Such a prediction is used to perform preparatory actions before an event even occurs. While the overall system's performance is greatly improved, humans tend to be confused by the seemingly anticipative robot's behaviour. Hierarchical Task Networks (HTNs) are used in CaccavaleFinzi2017 to embed communication actions in the cooperation process. When specific deviations from the plan are detected, such communication actions enforce the adherence to the plan.

2.2 Rationale, Assumptions, Problem Statement, Reference Scenario

planHRC takes inspiration from the findings in Gombolayetal2013 ; Gombolayetal2014 ; Gombolayetal2015 to devise a cooperation model with the following characteristics:

  • similarly to the work in Agostinietal2011 , the robot plans an appropriate, optimal sequence of actions to determine relevant intermediate configurations for an articulated object (considered as a simplified model for a flexible object like a cable), in order to reach a final target configuration, therefore coping with the corresponding requirement above;

  • during plan execution, the robot always monitors the outcome of each action, and compares it with the target configuration to achieve, therefore limiting the burden on the human side Gombolayetal2014 ;

  • normally, the human can supervise robot actions: when a robot action is not successful, or a plan cannot be found, the human can intervene on the robot's behalf performing her/his preferred action sequence Gombolayetal2015 , therefore meeting the corresponding requirements;

  • at any time, the human can intervene (e.g., performing an action the robot was tasked with, or changing the articulated object's configuration), and the robot adapts to the new situation, in accordance with Gombolayetal2015 and the requirements above.

More specifically, the problem we consider in this paper is three-fold: (i) given a target object’s configuration, determining a plan as an ordered set of actions:

p = (a_1, a_2, ..., a_n)   (1)

where each action a_i involves one or more manipulation operations to be executed by a dual-arm robot, (ii) designing a planning and execution architecture for the manipulation of articulated objects, which is efficient and flexible in terms of perceptual features, their representation and action planning, and (iii) seamlessly integrating human actions in the loop, allowing the robot to adapt online to novel object configurations not planned beforehand.

In order to provide planHRC with such features, we pose a number of assumptions described as follows:

  • flexible objects are modelled as articulated objects with a number of links and joints, as is customary also for computational reasons Yamakawaetal2013 ; we assume an inertial behaviour, i.e., rotating one link causes the movement of all upstream or downstream links, depending on the rotation joint; while this assumption may not hold for soft flexible objects, it may reasonably hold for a wide range of bendable cables or objects;

  • the effects of gravity on the articulated object’s configurations are not considered, and the object is located on a table during all operations;

  • we do not assume in this paper any specific grasping or manipulation strategies to obtain a target object's configuration starting from another configuration; however, we do consider cases in which an action cannot be completed because of unexpected events or modelling omissions;

  • perception of articulated objects is affected by noise, but the symbol grounding problem, i.e., the association between perceptual features and the corresponding symbols in the robot’s knowledge representation system Harnad1990 , is assumed to be known.

On the basis of the first assumption above, we focus on articulated objects only. As anticipated, we need to represent object configurations. We define an articulated object as a 2-ple (L, J), where L is the ordered set of its links, i.e., L = (l_1, ..., l_n), and J is the ordered set of its joints, i.e., J = (j_1, ..., j_m). Each link l_i is characterized by two parameters, namely a length λ_i and an orientation θ_i. We allow only for a limited number of possible orientation values. This induces an ordered set of allowed orientation values, i.e., O = (o_1, ..., o_k), such that an orientation θ_i can assume values in O. We observe that in a cable harnessing operation this is only a minor simplification, since operators tend to distribute cables along predefined directions. Given a link l_i, we define two sets, namely U(l_i) and D(l_i), such that the former is made of upstream links, i.e., from l_1 to l_{i-1}, whereas the latter includes downstream links from l_{i+1} to l_n.

Orientations can be expressed with respect to an absolute, possibly robot-centred reference frame, or – less intuitively – relative to each other, for instance θ_i can represent the rotation of l_i with respect to l_{i-1}. At a first glance, the absolute representation seems preferable because it leads to the direct perception of links and their orientations with respect to a robot-centred reference frame, whereas the set of absolute orientations constitutes the overall object's configuration. When a sequence of manipulation actions is planned, changing one absolute orientation requires – in principle – the propagation of such a change upstream or downstream the object via joint connections, which is expected to increase the computational burden on the reasoner and may lead to suboptimal or redundant action sequences, because the propagation may jeopardise the effects of previous actions in the plan, or to sequences which cannot be fully understood by the human. On the contrary, the less intuitive relative approach assumes the direct perception of the relative orientations between pairwise links, and thus the overall object's configuration is made up of incremental rotations. In this case, computation is expected to be less demanding, since there is no need to propagate one change in orientation to upstream or downstream links, and therefore actions on different links tend to be planned sequentially. This has obvious advantages since it leads to shorter plans (on average), which could be further shortened by combining together action sub-sequences (e.g., two subsequent reorientations of the same link consolidated as one single action), and to easy-to-understand plans.
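To make the trade-off concrete, the following sketch (plain Python with illustrative function names, assuming orientations discretized in degrees) converts between the two representations and shows how a single rotation touches one value in the relative encoding but propagates to every downstream value in the absolute one.

def absolute_to_relative(abs_orients):
    """theta_rel[i] = theta_abs[i] - theta_abs[i-1]; the first link is
    measured against a virtual reference link at 0 degrees."""
    rel, prev = [], 0.0
    for theta in abs_orients:
        rel.append((theta - prev) % 360.0)
        prev = theta
    return rel

def relative_to_absolute(rel_orients):
    """Cumulative sum: each absolute orientation depends on all upstream
    relative orientations, which is why a single absolute change
    propagates along the chain."""
    abs_orients, total = [], 0.0
    for theta in rel_orients:
        total = (total + theta) % 360.0
        abs_orients.append(total)
    return abs_orients

# Rotating the third link by +90 degrees touches one value only in the
# relative model; in the absolute model every downstream value changes.
abs_conf = [0.0, 90.0, 90.0, 180.0]
rel_conf = absolute_to_relative(abs_conf)     # [0.0, 90.0, 0.0, 90.0]
rel_conf[2] = (rel_conf[2] + 90.0) % 360.0
print(relative_to_absolute(rel_conf))         # [0.0, 90.0, 180.0, 270.0]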

Figure 2: Two possible representations: absolute (top) and relative (bottom).

If the object is represented using absolute orientations (Figure 2 on the top), then its configuration is an n-ple:

C_abs = (θ_1, θ_2, ..., θ_n)   (2)

where it is intended that the generic element θ_i is expressed with respect to an absolute reference frame. Otherwise, if relative angles are used (Figure 2 on the bottom), then the configuration must be augmented with an initial virtual link l_0 in order to define a reference frame, and therefore:

C_rel = (θ_1, θ_2, ..., θ_n)   (3)

where each θ_i is expressed relative to the orientation of the preceding link l_{i-1}.

In principle, while the relative representation could model an object's configuration with one joint less compared to the absolute representation, the resulting configuration would not be unique (indeed, there would be infinitely many), since the object would maintain pairwise relative orientations between its links even when rotated as a whole. Therefore, an initial virtual reference link l_0 is added to the chain.

Figure 3: The experimental scenario: a Baxter dual-arm manipulator operating on an articulated object.

In order to comply with the assumptions above, we set up an experimental scenario where a Baxter dual-arm manipulator operates on articulated objects located on a table in front of it (Figure 3). Rotation operations occur only around axes centred on the object's joints and perpendicular to the table where the object is located. We have crafted a wooden articulated object made up of long, thick links connected by joints. The object can be easily manipulated by the Baxter's standard grippers, in line with the assumptions above. To this aim, we adopt the MoveIt! framework. The Baxter is equipped with an RGB-D device located on top of its head pointing downward to the table. Only RGB information is used. QR tags are fixed to each of the object's links, which is aimed at meeting the symbol grounding assumption above. Each QR code provides a 6D link pose, which directly maps to an absolute link orientation θ_i. Finally, if relative orientations are employed, we compute them from the absolute orientations of two consecutive links, e.g., θ_i − θ_{i-1}. A human can supervise robot operations and intervene when necessary from the other side of the table.
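A hedged sketch of how relative orientations could be derived from the absolute orientations read from the QR tags and snapped to the allowed orientation set; the function names and the chosen resolution are illustrative, not the actual Perception module code.

def relative_orientation(theta_abs_i, theta_abs_prev):
    """Relative orientation as the signed difference of two consecutive
    absolute link orientations, in degrees."""
    return (theta_abs_i - theta_abs_prev) % 360.0

def snap_to_allowed(theta, resolution=45.0):
    """Quantise a noisy orientation to the allowed orientation set O,
    here assumed to contain multiples of the chosen resolution."""
    return (round(theta / resolution) * resolution) % 360.0

# Example: two consecutive links whose QR tags report 3 and 91 degrees.
rel = relative_orientation(91.0, 3.0)   # 88.0
print(snap_to_allowed(rel))             # 90.0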

3 planHRC’s Architecture

3.1 Information Flow

Figure 4: The information flow in planHRC.

planHRC allows for interleaved sensing, planning, execution and cooperation with a human using a number of parallel loops orchestrating the behaviour of different modules (Figure 4). Assuming that an articulated object is located on the table in front of the robot, we want to modify its current configuration to obtain a goal configuration, which can be expressed as in (2) or (3), i.e., as a set of absolute or relative link orientations, respectively.

The goal configuration is encoded as assertional knowledge in an OWL-based Ontology module Krotzsch2013 . When this happens, the Perception module is activated, and the Baxter's camera acquires an image of the workspace. It is noteworthy that the Perception module acquires images continuously, but for the sake of simplicity we treat each acquisition as if it were synchronous with action execution. The acquired image is filtered and registered, and appropriate artefact removal procedures are applied to produce an image suitable for feature extraction, which is fed to the Scene Analysis module. A perceived configuration (i.e., the current configuration) is extracted from the image, and a representation of it is stored in the Ontology module. Both the current and the goal configurations are represented using conjunctions of class instances, which model such predicates as Connected, to indicate whether two links are connected by a joint, or HasOrientation, to define angle orientations. If the two configurations are not compatible then a planning process occurs. In order to determine compatibility, we assume the presence of an operator that, given a terminological or assertional element in the ontology, returns its description in OWL formalism. Therefore, if the description of the current configuration is not subsumed by the description of the goal configuration, the planner is invoked. Specifically, a Planner module is activated, which requires the definition of relevant predicates and possible action types in the form:

a = ⟨pre(a), eff-(a), eff+(a)⟩   (4)

where pre(a) is the set of preconditions (in the form of predicates) for the action to be executable, eff-(a) is the set of negative effects, i.e., predicates becoming false after action execution, and eff+(a) is the set of positive effects, i.e., predicates becoming true after execution. For certain domains, it is useful to extend (4) to allow for additional positive or negative effects, i.e., predicates becoming true or false in case certain additional conditions hold. A conditional action can be modelled as:

a = ⟨pre(a), eff-(a), eff+(a), cond(a), ceff-(a), ceff+(a)⟩   (5)

where pre(a), eff-(a) and eff+(a) are defined as before, cond(a) is the set of additional preconditions, whereas ceff-(a) and ceff+(a) are the sets of additional effects subject to the validity of the predicates in cond(a). Furthermore, the Planner requires a suitable description of the current state (including a description of the current configuration) and the goal state (including the goal configuration), described using an appropriate set of ground predicates. This information, encoded partly in the terminological section and partly in the assertional section of the Ontology module, is translated in a format the Planner module can use, namely the Planning Domain Definition Language (PDDL) PDDL .

A plan, as formally described in (1), is an ordered sequence of actions whose execution changes the current state into the goal state through a set of intermediate states. In a plan, each action may correspond to one or more scripted robot behaviours. For example, rotating a link may require the robot to (i) keep the upstream link steady with its left gripper, and (ii) rotate the link by a certain amount with the right gripper. Such a sequence is not encoded in the planning process, thereby reducing planning cost, but is delegated to an action execution module. If a plan is found, each action is encoded in the Ontology, along with all the expected intermediate states resulting from the actions. The Execution module executes the plan action by action, activating the proper modules in the architecture, e.g., such behaviours as motion planning, motion execution, obstacle avoidance or grasping.

Figure 5: The planning and execution pipeline.

Each action a_i in a plan is assumed to transform a state s_i into a state s_{i+1}, such that:

s_{i+1} = (s_i \ eff-(a_i)) ∪ eff+(a_i)   (6)

If a_i has additional conditions, then (6) is modified as:

s_{i+1} = (s_i \ (eff-(a_i) ∪ ceff-(a_i, s_i))) ∪ eff+(a_i) ∪ ceff+(a_i, s_i)   (7)

where ceff-(a_i, s_i) and ceff+(a_i, s_i) return the sets ceff-(a_i) and ceff+(a_i), respectively, if the conditions in cond(a_i) hold in s_i, and the empty set otherwise. Before the action is executed, the Ontology module activates Perception to acquire a new image. Again, this induces a new perceived, current configuration. Every time this happens, two situations can arise: if the perceived configuration corresponds to a current state whose description is subsumed by the description of an expected state, possibly generated by applying an action or as a consequence of human intervention, then the execution continues with the next action until a state is reached which is subsumed by the goal state; otherwise, a new planning process occurs, considering the current state as a new initial state and keeping the previous goal state.
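The transition semantics in (6) and (7) can be summarised with a small sketch over states represented as sets of ground predicates; the dictionary layout and the predicate tuples are illustrative, not the internal representation used by the architecture.

def apply_action(state, action):
    """STRIPS-like transition: remove negative effects, add positive
    effects; conditional effects fire only when their additional
    preconditions hold in the current state (cf. (6) and (7))."""
    new_state = (state - action["eff_neg"]) | action["eff_pos"]
    for cond in action.get("conditional", []):
        if cond["when"] <= state:
            new_state = (new_state - cond["eff_neg"]) | cond["eff_pos"]
    return new_state

rotate = {
    "pre":     {("HasOrientation", "45", "j1")},
    "eff_neg": {("HasOrientation", "45", "j1")},
    "eff_pos": {("HasOrientation", "90", "j1")},
    "conditional": [],
}
s0 = {("HasOrientation", "45", "j1"), ("Connected", "j1", "l1")}
assert rotate["pre"] <= s0              # preconditions must hold in s0
print(apply_action(s0, rotate))         # the expected intermediate state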

A few remarks can be made. When an action is executed, the expected intermediate state is treated as a set of normative ground predicates, i.e., it defines the normal, expected state for the action to be feasible. Whether such a state is obtained as a result of a previous action, or with the help of the human operator, is not relevant for execution. On the contrary, deviations from it are treated as violations and therefore the system tries to re-plan in order to reach a state compatible with the goal, starting from the current state. As discussed above, violations can be of two kinds: on the one hand, human interventions (i.e., object manipulations on the robot's behalf) may lead to a current state not compatible with the expected intermediate state, and therefore the robot should adapt by re-planning; on the other hand, the robot may not be able to complete an action, e.g., due to a cluttered workspace Srivastavaetal2014 or the obstructing presence of the human operator HaddadinCroft2016 . In the second case, if such an event were detected, the robot would re-plan starting from the current state, and possibly ask for the human operator's help to achieve a workable object's configuration. As a consequence, planHRC implements a policy according to which the overall system's performance is ensured by the use of state-of-the-art planning techniques, but it allows at any time the human operator to intervene and forces the robot to adapt its plan accordingly.

Figure 5 shows a graphical model of the information flow from the perspective of the planning process.

3.2 Representation Models in the Ontology

An ontology is a 2-ple ⟨TBox, ABox⟩, where the TBox is a terminological taxonomy of axioms storing definitions of relevant classes and relationships within a domain, and the ABox is an assertional taxonomy representing factual knowledge for such a domain. Both the TBox and the ABox are described using the Description Logic formalism Baaderetal2003 through its computational variant referred to as the Web Ontology Language (OWL), and in particular OWL-DL OWL , plus the addition of SWRL rules for deductive reasoning Horrocksetal2005 .

In planHRC, the ontology is used both off-line and on-line for different purposes (the OWL ontology used in this work is available for further evaluation and improvements at: https://github.com/EmaroLab/OWL-ROSPlan/tree/master/rosplan_knowledge_base/). The off-line use is related to modelling the domain of articulated objects manipulation, in terms of types, predicates, operators, states, problems and plans. The on-line use serves two purposes: on the one hand, to represent relevant object configurations, such as the current and the goal configurations, as well as specific actions to perform, using classes and relationships defined in the TBox; on the other hand, to apply such reasoning techniques as instance checking to the representation, e.g., to determine whether an action assumes an expected planning state which is compatible with the perceived current state, as described in Figure 5.

To this aim, the taxonomy in the TBox models types, which are used by the Planner module when processing PDDL syntax, as primitive classes such as Link and Joint, as well as Orientation (represented using degrees). Relevant predicates are modelled as classes derived from a common Predicate class. For instance, Connected is used to relate a Joint to a Link, as:

(8)

In the description, Connected has two arguments, which relate exactly one Joint with one Link. While this perfectly fits the first and last links composing the articulated object, intermediate links will be modelled using two Connected predicates, for the downstream and upstream links, respectively. In order to specify the Orientation associated with a Joint, the HasOrientation predicate is used:

(9)

where the semantics associated with the Orientation depends on whether we adopt absolute or relative angles, as described in Section 2.2. As discussed above, in the planning process the value an orientation can take is restricted to the set O of allowed orientation values. This is done to reduce the state space involved in the planning process, and the sensitivity of planHRC's computational performance with respect to the cardinality of O is analysed as part of the experimental validation. Irrespective of whether orientations are absolute or relative, the ordering over O is represented as a collection of OrientationOrd predicates relating pairwise values:

(10)

For instance, if only two possible orientations are allowed, the ordering can be modelled using a single OrientationOrd predicate such that:

(11)

where it is intended that the two orientation individuals are associated with the corresponding angular values. Other predicates are described in a similar way. As will become evident in Section 3.4, the absolute representation of orientations requires the use of conditional operators in PDDL. These are modelled in the TBox using conditional predicates to be mapped to PDDL operators:

(12)

where the intuitive meaning is that, for all individuals specified in the relationship, when specific conditions hold, the additional effects must be considered.

It is possible to define an Action as:

(13)

In (13), we do not assume the presence of a conditional-effect relationship, with the aim of modelling both actions and conditional actions using the same definition. In our TBox, two actions are defined, namely RotateClockwise, which increases a link orientation by a certain amount, and RotateAntiClockwise, which does the opposite.

One notable predicate used as part of conditional effects is Affected, which models how changing a link orientation propagates via connected upstream or downstream joints:

(14)

which states that the orientation of the joint given as the first argument is affected by rotations around the joint given as the third argument, as obtained when operating on the link given as the second argument.

Likewise, any state (perceived, current, predicted or expected) is represented as a set of predicates:

(15)

through a dedicated relationship, which must include at least one Predicate instance for the state to be formally expressed. As a consequence, a planning problem is modelled as having an initial State and a goal State:

(16)

Finally, a Plan is made up of Action instances:

(17)

On-line, the ABox is updated each time a new image is acquired by the Perception module, and maintains descriptions in the form of assertions, possibly classified as being instances of the classes defined in the TBox terminology. Let us describe what happens at each iteration with an example. If the robot perceived an object configuration like the one in Figure 2 on the top, four Link instances:

(18)

and four Joint instances:

(19)

would be necessary to represent it. Then, the object’s structure would be modelled as a description including the set of predicate instances:

(20)

where, focusing on one such instance:

(21)

as specified in (8). Other Predicate instances can be generated in a similar way. Furthermore, assuming given values for the four link orientations, these would be represented as:

(22)

where, focusing on one of them only:

(23)

All such Connected and HasOrientation instances would contribute to the definition of the current state by means of a set of assertions like:

(24)

as foreseen by (15). Similar descriptions for problems and, after the planning process occurs, for plans, can be introduced as well.

When a new goal state is encoded in the ontology, a new Problem instance is created, such that, according to (16):

(25)

and the Planner module is activated. A translation process occurs, which generates the proper PDDL formulation by querying the TBox (to generate the PDDL domain) and the ABox (to generate the PDDL problem). Each class in the TBox roughly corresponds to a section of the domain, whereas the initial and goal states in the ABox define the initialisation and goal sections of a problem.

After a plan has been found and validated (see Section 3.4), each action is encoded back in the ontology as an instance of Action, and therefore all its precondition and effect relationships are specified in terms of Predicate instances. If an action has conditional effects, the corresponding conditional relationships are determined as well. As a consequence, a set of intermediate expected states is created as:

(26)

as described in Section 3.1. In particular, the intermediate expected states are generated using (6) and (7). When these individuals are generated, the Execution module is activated.

3.3 Reasoning in the Ontology, or the Cooperation Model

As anticipated in Section 2.2, and in accordance with the findings in Gombolayetal2013 ; Gombolayetal2014 ; Gombolayetal2015 , the human-robot cooperation model implemented in planHRC foresees that: (i) the robot determines a plan maximizing some performance indicator in terms of number of actions and/or time-to-completion; (ii) the robot monitors the execution of each action in the plan; (iii) during normal work flow, the human supervises robot actions; and (iv) the human can intervene to cope with robot's failures in action planning or execution, or to perform tasks asynchronously and in parallel to robot activities. The model unfolding is based on monitoring the state transitions in (6) and (7) and their failures. Independently of the presence of conditional effects in an action, two cases are possible after the action is submitted to the Execution module: it cannot be executed (or it is executed only in part), or it is carried out successfully.

The first case originates from motion planning or execution issues, e.g., because of a cluttered workspace Srivastavaetal2014 or to prevent any harm to the human operator Deneietal2015 ; HaddadinCroft2016 ; Darvishetal2017 . If motion issues occur, planHRC does not obtain a state compatible with the expected one. However, this does not necessarily mean that the current state is still compatible with the previously assessed state, because the robot may have completed only part of the action. In this case, a new current state is acquired. If there is any intermediate expected state comparable with it, i.e., there exists in the ontology an individual whose description subsumes that of the current state, then execution resumes from the corresponding action; otherwise, it is necessary to invoke the Planner module again, using the current state and the goal state, to obtain a new plan.

In the second case, the action is considered to be successful from the point of view of motion execution. Still, the outcome may or may not be compatible with the expected state, e.g., due to effects that have not been modelled. This outcome is observable as the current state. However, although the expected state has not been reached, it may happen that the current state is still appropriate for the next action to occur. In particular, for the next action to be executable, its preconditions must hold in the current state. We treat this set of preconditions as normative conditions for the action, regardless of whether the expected state is generated as the outcome of the previous action. If they do not hold, we must check whether there is any intermediate expected state comparable with the current one: if that is the case, execution resumes from the corresponding action; otherwise, re-planning is necessary.

In summary, human intervention is strictly necessary when a plan cannot be found. However, any human action is implicitly considered every time the current state does not comply with normative predicates.
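The resume-or-replan policy just described can be condensed as follows; subsumption is approximated here by set inclusion over ground predicates, and every name is illustrative rather than taken from the actual implementation.

def resume_index(current_state, expected_states):
    """Return the index of the action to resume from, or None when
    re-planning is needed.  expected_states[i] holds the normative
    ground predicates required before executing the i-th action."""
    for i in reversed(range(len(expected_states))):
        if expected_states[i] <= current_state:
            return i      # resume from the most advanced compatible state
    return None           # no compatible state: invoke the Planner again

For instance, if the human operator completes a rotation on the robot's behalf, the perceived state may already satisfy the normative predicates of a later action, and execution simply skips ahead instead of triggering a new planning process.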

3.4 Planning Models

As anticipated in Section 2.2, orientations can be expressed using an absolute or relative reference frame, meaning that in the first case each link’s orientation is expressed with respect to an externally-defined reference frame, whereas in the second case it is expressed with respect to another link’s orientation, e.g., the upstream one. These two possibilities lead to two planning models, with different semantics, which are characterized by different properties as far as (i) obtained plan, (ii) computational load of the planning process, and (iii) ease of execution for the robot, are concerned.

For the sake of simplicity, we present the relative formulation first, and then the absolute one. The relative formulation employs the :STRIPS subset of PDDL, extended with :equality and :negative-preconditions, whereas the absolute version also requires the use of :conditional-effects. Notably, the problem we are interested in assumes a granularity-based discretization of angular orientations, hence there is no practical necessity for continuous or hybrid planning models FoxLong2006 . Therefore, PDDL constitutes an appropriate level of abstraction (examples of planning domains and problems can be found at https://github.com/EmaroLab/paco_actions).

As discussed when introducing the assumptions, our model assumes a sort of inertial behaviour, i.e., rotating one link affects the orientation of upstream or downstream links as well. In fact, given a link l_i to rotate (clockwise or anticlockwise), two rotation actions are possible: on the one hand, if the upstream neighbour of l_i is kept still and l_i is rotated (clockwise or anticlockwise), then all links in D(l_i) rotate (clockwise or anticlockwise) and are displaced as well; on the other hand, if the downstream neighbour is kept still, all links in U(l_i) are rotated (clockwise or anticlockwise) and displaced.

From a planning perspective, each rotation action (either clockwise or anticlockwise) changing an angle referring to a relative orientation does not affect any other orientations of links in U(l_i) or D(l_i), since all of them are relative to each other, and therefore the planning process is computationally less demanding. However, since actions are expected to be based on link orientations grounded with respect to a robot-centred reference frame, i.e., absolute in terms of pairwise link orientations, a conversion must be performed, which may be greatly affected by perceptual noise, therefore leading to inaccurate or even inconsistent representations. In the absolute formulation, each orientation θ_i is considered absolute, and therefore it can be associated directly with robot actions. Unfortunately, this means that each action changing θ_i does affect numerically all other orientations of links in U(l_i) or D(l_i) in the representation, which must be kept track of using conditional effects in the planning domain. It is noteworthy that the use of advanced PDDL features, such as conditional effects, may allow for a more accurate representation of the domain but, at the same time, it may reduce the number of planners able to reason on the model, and their efficiency.

(:action RotateClockwise
:parameters (?l1 ?l2 - Link
?j1 - Joint ?o1 ?o2 - Orientation)
:precondition (and
(Connected ?j1 ?l1)
(Connected ?j1 ?l2)
(not (= ?l1 ?l2))
(HasOrientation ?o1 ?j1)
(OrientationOrd ?o1 ?o2))
:effect (and
(not (HasOrientation ?o1 ?j1))
(HasOrientation ?o2 ?j1))
)
Figure 6: The relative version of RotateClockwise in PDDL.

Relative formulation. As described in Section 2.2, an articulated object is represented using two ordered sets of links and joints. We use a Connected predicate modelled as described in (8) to describe the sequence of links in terms of binary relationships, each one involving a link and a joint, which induces a pairwise connection between two links, since they share the same joint. The orientation of a link is associated with the corresponding joint and corresponds to an angle expressed using the predicate HasOrientation, as specified in (9). As anticipated above, this formulation assumes that link orientations are expressed incrementally relative to each other. This means that the robot's perception system is expected to provide the Ontology module with the set of relative link orientations as primitive information. If absolute link orientations are not available, the object's configuration can be computed applying forward kinematics formulas using relative orientations and link lengths. If noise affects the perception of link orientations, as it typically does, the reconstruction of the object's configuration may differ from the real one, and this effect worsens with link length. However, this model significantly reduces the planning model's complexity: from a planner's perspective, the modification of any link orientation does not impact on other relative joint angles, and therefore rotation actions can be sequenced in any order the planner deems fit.
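When a Cartesian reconstruction of the object is needed under the relative formulation (e.g., to select grasping points), it can be obtained by planar forward kinematics from relative orientations and link lengths, as in the sketch below (illustrative names, purely planar model); it also shows how an error on a single relative angle displaces every downstream joint.

import math

def forward_kinematics(lengths, rel_orientations_deg):
    """Planar chain: return the (x, y) position of each joint, given
    link lengths and relative orientations in degrees."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for length, theta in zip(lengths, rel_orientations_deg):
        heading += math.radians(theta)   # relative angles accumulate
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        points.append((x, y))
    return points

# A 5 degree error on the first relative angle shifts every downstream
# joint, and the displacement grows with the distance along the chain.
clean = forward_kinematics([0.2, 0.2, 0.2], [0.0, 90.0, 0.0])
noisy = forward_kinematics([0.2, 0.2, 0.2], [5.0, 90.0, 0.0])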

Angles are specified using constants, and are ordered using the predicate OrientationOrd as described by (10). The difference between consecutive constant values is the granularity of the resolution associated with modelled orientations. For example, if 30 and 45 are used as constants representing, respectively, a 30° and a 45° angle, then a predicate (OrientationOrd 30 45) is used to encode the fact that 30 precedes 45 in the orientation granularity, and corresponds to the description in (11).
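A hedged sketch of how the orientation constants and the OrientationOrd facts could be enumerated for a given resolution; the generator mirrors the (OrientationOrd 30 45) example above and is illustrative, not part of the released domain files.

def orientation_facts(resolution_deg):
    """Enumerate orientation constants and OrientationOrd facts for a
    given angular resolution.  Constant names mirror the example in the
    text; strict PDDL parsers may require a letter prefix (e.g. o30)."""
    values = list(range(0, 360, resolution_deg))
    constants = " ".join(str(v) for v in values) + " - Orientation"
    facts = [f"(OrientationOrd {a} {b})"
             for a, b in zip(values, values[1:])]
    return constants, facts

constants, facts = orientation_facts(90)
# constants -> "0 90 180 270 - Orientation"
# facts     -> ["(OrientationOrd 0 90)", "(OrientationOrd 90 180)",
#               "(OrientationOrd 180 270)"]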

Independently of what part of the articulated object is rotated, the domain model includes two actions, namely RotateClockwise (Figure 6) and RotateAntiClockwise. Intuitively, the former can be used to increase the orientation of a given link by a certain granularity step (e.g., from 30° to 45°), whereas the latter is used to decrease the orientation, by operating on two connected links. In the definition of RotateClockwise, ?l1 and ?l2 represent any two links, ?j1 is the joint connecting them, whereas ?o1 and ?o2 are the current and the obtained link orientations, respectively. If ?j1 connects two different links ?l1 and ?l2, the angle ?o1 associated with ?j1 is increased by a certain step (depending on the next orientation value), therefore leading to ?o2. A similar description can be provided for RotateAntiClockwise.

A problem is defined by specifying the initial and final states. The former includes the topology of the articulated object in terms of Connected predicates, and its initial configuration using HasOrientation predicates; the latter describes its goal configuration using relevant HasOrientation predicates.
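A minimal sketch of how such a problem file could be assembled from the current and goal configurations under the relative formulation; the domain name, object names and helper function are illustrative, and the orientation constants and OrientationOrd facts from the previous sketch would be appended to the :objects and :init sections, respectively.

def make_problem(name, links, joints, init_orient, goal_orient):
    """Build a PDDL problem: topology via Connected, initial and goal
    configurations via HasOrientation (one orientation per joint)."""
    objects = " ".join(links) + " - Link " + " ".join(joints) + " - Joint"
    connected = [f"(Connected {j} {l})"
                 for j, pair in zip(joints, zip(links, links[1:]))
                 for l in pair]
    init = connected + [f"(HasOrientation {o} {j})"
                        for j, o in init_orient.items()]
    goal = [f"(HasOrientation {o} {j})" for j, o in goal_orient.items()]
    return (f"(define (problem {name}) (:domain articulated-objects)\n"
            f"  (:objects {objects})\n"
            f"  (:init {' '.join(init)})\n"
            f"  (:goal (and {' '.join(goal)})))")

print(make_problem("demo", ["l1", "l2", "l3"], ["j1", "j2"],
                   init_orient={"j1": "90", "j2": "45"},
                   goal_orient={"j1": "45", "j2": "45"}))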

(:action RotateClockwise
:parameters (?l1 ?l2 - Link
?j1 - Joint ?o1 ?o2 - Orientation)
:precondition (and
(Connected ?j1 ?l1)
(Connected ?j1 ?l2)
(not (= ?l1 ?l2))
(HasOrientation ?o1 ?j1)
(OrientationOrd ?o1 ?o2))
:effect
(and
(not (HasOrientation ?o1 ?j1))
(HasOrientation ?o2 ?j1)
(forall (?j2 - Joint ?o3 ?o4 - Orientation)
(when (and
(Affected ?j2 ?l1 ?j1)
(not (= ?j2 ?j1))
(HasOrientation ?o3 ?j2)
(OrientationOrd ?o3 ?o4))
(and
(not (HasOrientation ?o3 ?j2))
(HasOrientation ?o4 ?j2)))
)
)
Figure 7: The conditional version of RotateClockwise in PDDL.

Absolute formulation. The absolute formulation differs from the relative one in that link orientations are expressed with respect to a unique, typically robot-centred, reference frame. Therefore, the set of link orientations is assumed to be directly observable by the robot perception system. However, if a rotation action modifies a given link orientation θ_i, all orientations of links in U(l_i) or D(l_i) must be consistently updated as well, i.e., it is necessary to propagate such a change upstream or downstream. As a consequence, such a representation increases the complexity of the planning task but it is more robust to errors: in fact, perceiving independent link orientations induces an upper bound on the error associated with their inner angle. The Connected, HasOrientation and OrientationOrd predicates are the same as in the relative formulation, subject to the different semantics associated with link orientations. Also in the absolute formulation two actions are used, namely RotateClockwise (Figure 7) and RotateAntiClockwise. However, with respect to the relative formulation, the effects of the action differ. In particular, the model assumes that we can represent which joints are affected when a link is rotated around one of the corresponding joints. This is done using the Affected predicate, i.e., a ternary predicate (Affected ?j2 ?l1 ?j1), where ?l1 is the rotated link, ?j1 is the joint around which ?l1 rotates, and ?j2 is a joint affected by this rotation. Therefore, if ?j2 were affected, the angle of the corresponding link would be modified as well in the conditional statement and, as such, it would affect other joints via the corresponding links. For each pair ?l1, ?j1, the list of joints affected by the corresponding movement must be provided in the form of multiple Affected predicates. With reference to the action described in Figure 7, as in the previous case, the orientation of the joint ?j1, located between ?l1 and ?l2, is increased by a quantity defined by a specific granularity, according to the OrientationOrd predicate. If rotating ?l1 around ?j1 affects ?j2, the latter is updated, as well as all other affected joints upstream or downstream. This is encoded by the forall part of the PDDL encoding. Following the semantics of the language, the forall statement requires the planning engine to update the state of all joints ?j2 that are affected by the performed action – checked conditions are specified via the when statement. The HasOrientation predicate of the identified affected joints is then updated accordingly. A similar definition for RotateAntiClockwise can easily be given.

In terms of problem definition, beside Connected and HasOrientation predicates, it is necessary to include the list of appropriately defined Affected predicates.
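The Affected facts grow quickly with the number of links; a hedged sketch of how they could be enumerated for a serial chain is shown below. It assumes, as an illustration of the kinematic convention above, that rotating link l(i+1) around joint j(i) drags every downstream joint, while rotating l(i) around j(i) drags every upstream one.

def affected_facts(n_links):
    """Enumerate (Affected ?j2 ?l1 ?j1) facts for a chain with links
    l1..ln and joints j1..j(n-1), where ji connects li and l(i+1)."""
    facts = []
    for i in range(1, n_links):            # joint ji
        for k in range(i + 1, n_links):    # downstream joints dragged by l(i+1)
            facts.append(f"(Affected j{k} l{i + 1} j{i})")
        for k in range(1, i):              # upstream joints dragged by li
            facts.append(f"(Affected j{k} l{i} j{i})")
    return facts

# A 4-link object (3 joints) yields, e.g., (Affected j2 l2 j1),
# (Affected j3 l2 j1), (Affected j3 l3 j2), (Affected j1 l2 j2), ...
for fact in affected_facts(4):
    print(fact)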

It is noteworthy that the two action definitions, namely RotateClockwise and RotateAntiClockwise, are functionally equivalent. Furthermore, any problem we target here could be solved – in principle – with just one action, as long as discretized angles were ring-connected. We decided to introduce two different actions for two reasons: on the one hand, it is rare that joints can rotate freely for 360° or more; on the other hand, this model leads to shorter plans (on average) in terms of number of actions and to cleaner, more natural executions, at the expense of a slightly longer planning time.

4 Experimental Validation and Discussion

4.1 System Implementation

planHRC has been implemented using both off-the-shelf components and ad hoc solutions. All experiments have been carried out using a dual-arm Baxter manipulator. The Perception and Scene Analysis modules are custom nodes developed using the Robot Operating System (ROS) framework. They employ the Alvar tracker library to read QR codes (webpage: http://wiki.ros.org/ar_track_alvar). Images are collected using the standard RGB camera of a Kinect device, which is mounted on the Baxter's head and points downward to capture what happens on a table in front of the robot. Ontology and Planning have been implemented on top of ROSPlan Cashmoreetal2015 , which has been extended using the ARMOR framework for ontology management. Two planners have been used, namely Probe LipovetzkyGeffner2011 and Madagascar Rintanen2014 . The two planners have been selected on the basis of their performance in the agile track of the 2014 International Planning Competition, as well as following a computational assessment of their performance with respect to other planners with similar features (the interested reader can find relevant information in Capitanelli2017 ). The Execution module and the various activated behaviours have been implemented using MoveIt!.

On-line, the architecture runs on a workstation with an 8-core Intel Core i7-4790 CPU and 8 GB of RAM, running the Linux Ubuntu 14.04.5 LTS operating system. Off-line performance tests on the planning process have been carried out on a workstation equipped with a 2.5 GHz Intel Core 2 Quad processor, running a Linux-based operating system.

Problem formulations, as well as all generated instances, including domain, problems and plans, are freely available (webpage: https://github.com/EmaroLab/paco_synthetic_test).

4.2 Planning Performance

Tests with synthetic problem instances have been performed to stress the two planning formulations. For the tests, we varied the number of links from to and the number of allowed orientations a link can take from (i.e., with a resolution of ) to (i.e., with a resolution of ). As outlined above, such a resolution has a different meaning depending on whether we employ the absolute or relative formulations.

Figure 8: Means and variances of solution times for different problem instances using the absolute formulation and Probe: on the x-axis, the first value indicates the number of links, the second the number of allowed orientations. Runtime is reported in .
Figure 9: Means and variances of solution times for different problem instances using the absolute formulation and Madagascar: on the x-axis, the first value indicates the number of links, the second the number of allowed orientations. Runtime is reported in .
Figure 10: Means and variances of solution times for different problem instances using the relative formulation and Probe: on the x-axis, the first value indicates the number of links, the second the number of allowed orientations. Runtime is reported in .
Figure 11: Means and variances of solution times for different problem instances using the relative formulation and Madagascar: on the x-axis, the first value indicates the number of links, the second the number of allowed orientations. Runtime is reported in .

Figures 8 to 11 show box plots representing means and variances of solution times for different problem instances, for all the combinations of formulation and planner. Problem instances are labelled so that the first value defines the number of links and the second specifies the orientation resolution. For each instance, planners have been executed several times to take into account the randomness associated with the employed heuristics. An upper bound on the solution time has been set: if a planner is unable to find a solution before such time limit is reached, it is stopped. Figures only contain data related to problems solved within the time limit.

As can be seen in Figure 8, when we use the absolute formulation and Probe, most of the instances are solved. It is possible to observe that smaller problem instances are solved rather quickly, with a relatively small variance. When the number of links increases, planning time significantly increases as well, and so does its variance. In the same situation, as depicted in Figure 9, Madagascar shows a more unpredictable behaviour: for small problem instances, it can quickly find a solution, and with a small temporal variance; however, the employed heuristics may cause large variances in specific instances. It is worth noting that larger instances are rarely solved and, in general, the number of solved instances is lower when compared to Probe. As will also be shown in the next Section, these results seem to confirm the hypothesis that the more intuitive absolute formulation leads to more complex reasoning processes. This is due to the fact that planners need to propagate the effects of each action to upstream or downstream links, which can be done only by employing a complex formulation involving conditional effects.

If we consider the relative formulation, then both Probe (Figure 10) and Madagascar (Figure 11) are very efficient, with Madagascar outperforming Probe to a small extent. Both planners are capable of solving all the instances quickly, and exhibit very good scalability, as well as a very limited variance. These results support the hypothesis that the simpler formulation translates into a reduced planning effort.

Figure 12: A sequence of configurations for a problem, using the absolute formulation with Probe (first two rows) and Madagascar (second two rows).
Figure 13: A sequence of configurations for a problem, using the relative formulation with Probe (first row) and Madagascar (second row).

4.3 Examples

In this Section, we provide examples of plans generated by Probe and Madagascar using the two formulations introduced above. Furthermore, we show and discuss what happens in a number of human-robot cooperation use cases.

In order to discuss how the different planners deal with the absolute and the relative formulations, we focus the discussion on a specific instance. Figure 12 shows two possible solutions, obtained respectively using Probe and Madagascar, when the absolute formulation is adopted. The figure shows the sequences of configurations assumed by the object, from left to right and from top to bottom. It can be observed that the plans are characterized by a number of seemingly unnecessary actions, since the planners must continuously maintain the consistency of the representation. The plan obtained using Madagascar (bottom two rows) also loops over two configurations, which is probably due to the employed heuristics. This example seems to confirm the hypothesis that the absolute approach leads to suboptimal plans, or to plans which may not be easily understood by human co-workers.

Figure 13 shows how Probe and Madagascar solve the same problem when the relative formulation is adopted. Both planners generate solutions that are shorter than those obtained using the absolute formulation, and no seemingly unnecessary actions are planned. In the plan generated by Madagascar, it is possible to observe that actions involving the same link tend to be performed sequentially, which seems to verify the corresponding hypothesis. This holds for other solutions as well.

Figure 14: A sequence of configurations for a problem, from top to bottom and left to right, as seen from the robot’s perspective.
Figure 15: The sequence of Figure 14 as executed by Baxter without human intervention.

As anticipated above, planHRC has been deployed on a dual-arm Baxter manipulator to enable the robot to autonomously manipulate articulated objects. The Baxter operates on an articulated object with a given number of links and a fixed angle resolution, corresponding to one of the problem classes defined above. Figure 14 shows a sequence of configurations, including the initial and the goal ones, from top to bottom and left to right, whereas Figure 15 shows the corresponding relevant instants during the execution of the plan by the robot. It is worth noting that, each time a rotation action is executed, the actual robot behaviour is made up of three steps: first, firmly grasping the link associated with the joint that must be kept still; second, grasping the link that must be rotated; third, performing the actual rotation by the proper amount. In planHRC, this can be done by either the left or the right robot arm, according to a simple heuristic based on which arm is closer to the link to operate on. Grasping actions in Figure 14 are indicated with grasping signs close to the relevant link, plus a sign indicating whether the action is performed with the right or the left arm. We decided not to model grasping actions at the planning level for two reasons: on the one hand, they would have increased the burden of the planning process; on the other hand, each rotation must be preceded by a grasping operation, and therefore this sequence can be easily serialized in the execution phase.

Figure 16: A series of manipulation actions executed with the help of a human operator.
Figure 17: Another series of manipulation actions executed with the help of a human operator.

Figure 16 and Figure 17 show two examples of plans where human intervention occurs to successfully accomplish the whole cooperation process. In the figures, the two sequences must be analysed from top to bottom and left to right.

In Figure 16, it is possible to see that the human operator performs an action while the robot is executing a rotation action on other links. The action performed by the human operator leads to a situation compatible with the object's target configuration and, as a consequence, the final configuration is reached. Afterwards, the operator modifies the state of the first link again, thereby leading to a configuration that is no longer compatible with the goal one. As a consequence, the robot intervenes to restore it. This sequence demonstrates two important features of planHRC: first, the freedom human operators have in performing actions asynchronously with respect to robot actions; second, the robot's capability of keeping the cooperation on track while coping with possible human mistakes.

Figure 17 shows an example where a human operator helps the robot complete an action that was not performed in its entirety. The robot starts executing a plan; however, a rotation action is not completed, leaving the object's configuration in a state that is not compatible with the expected one. The human operator then intervenes with an action aimed at completing the intended rotation and, at the same time, performing an additional rotation on the last link in the chain. From that moment on, the robot autonomously completes the plan. This sequence shows how a plan can be successfully recovered through human intervention, and how the robot can seamlessly continue plan execution afterwards.

4.4 Discussion

On the basis of the requirements outlined above and the experimental analysis carried out to evaluate the whole planHRC architecture, it is possible to discuss a few interesting arguments, draw some conclusions, and indicate promising research directions. In particular, the discussion that follows is focused on three aspects, namely planning speed, the generation of natural sequences of manipulation actions and the resulting cooperation process when interacting with the robot.

Planning speed. The absolute and the relative formulations are characterized by different performance results.

When using the absolute formulation, both Probe and Madagascar are capable of solving problem instances with a limited number of links and orientations within a time that can be considered a reasonable upper bound for the reasoning time of a collaborative robot interacting with a human co-worker, with Probe outperforming Madagascar on bigger problem instances. As the number of links grows, the time required to obtain a plan (if one exists) increases exponentially with the number of possible orientations. When using Probe, solution times for the same problem instance exhibit a certain variance, which is almost uniform across different numbers of links and possible orientations. If Madagascar is used, such variance generally decreases, but it may occasionally become significantly large, as shown for one of the problem instances in Figure 9. As far as human-robot cooperation processes are concerned, if the absolute formulation were used, then Probe would represent the best trade-off between complexity and solution times. In principle, Madagascar would be a better choice for problems with a reduced number of links and possible orientations, but the occasional presence of large variances in solution times would seriously jeopardize the human-robot cooperation process.

The two planners behave differently when using the relative formulation. Both Probe and Madagascar prove capable of solving even the largest problem instances considered in a very short time. Solution times grow exponentially in this case as well, but the very low time scale makes such a trend relevant only to a limited extent. Unlike the case of the absolute formulation, Probe behaves quite deterministically, and the same holds for Madagascar. When dealing with human-robot cooperation, both planners are suitable if the relative formulation is adopted, with a slight preference for Probe.

The relative formulation proves to be essential when the robot must comply with the directives discussed in the Introduction, and in particular to allow for fast action re-planning when needed.

Natural action sequences. In general, the two formulations lead to qualitatively different plans, i.e., plans with different actions.

Independently of the employed planner, the absolute formulation leads to plans that are longer than those obtained using the relative formulation. In the former case, the solution may contain apparently unnecessary actions, as well as repeated sequences of actions. This is due to the fact that, when working on the orientations of links located downstream in the chain, such orientations may later be modified as a side effect of operating on upstream links, therefore requiring further work on the downstream links. Such plans are the result of the employed planner heuristics; however, they are often unnatural for humans to understand, which is of the utmost importance in human-robot cooperation processes.

Plans obtained starting from the relative formulation are shorter and – in a generic sense – more understandable by humans. Since orientations are represented relatively for pairs of adjacent links, the planner does not need to modify the orientations of downstream links multiple times, and solutions tend to include sequences of actions operating on the same link. This makes plans easy to follow, irrespective of whether they are generated using Probe or Madagascar.

Thus, as far as naturalness is concerned, the relative formulation seems preferable to the absolute one. Shorter and easier-to-understand plans are expected to strengthen a co-worker's ability to supervise robot actions and to intervene when required, as prescribed by the requirements discussed in the Introduction. However, it is noteworthy that planHRC has not been tested with human subjects yet. As a consequence, these remain hypotheses to be validated through a specifically designed user study, also related to the role of context-aware planning in human-robot cooperation Mastrogiovannietal2013.

The cooperation process. On the one hand, in the absence of errors related to action execution, once a plan is available planHRC should be able to carry it out in its entirety. This is in agreement with the directives discussed in the Introduction. On the other hand, when an action either is not executed successfully or is carried out only partially, a human co-worker can intervene to obtain an object configuration that the robot can operate upon. As a whole, these two facts support the corresponding requirement.

As described above, before any action is executed, the robot checks whether a number of expected normative predicates hold in the current planning state. Implicitly, this means that any error in action execution or human intervention is synchronously assessed before the next planned action can start. Obviously enough, this represents a limiting factor for planHRC, and originates from the focus on planning states rather than actions. A more flexible reactive system may make use of human actions to determine causes of faults on the fly, instead of being limited in assessing their outcomes at discrete intervals. However, it also enforces the fact that humans are in control at any time: the robot simply waits for human intervention to finish and then plans a course of action from that moment on.

Also in this case, such an approach should be validated with human subjects. Current work is devoted to investigating these matters.

5 Conclusions

The paper introduces a hybrid reactive/deliberative architecture for collaborative robots in industrial scenarios.

We focus on the collaborative manipulation of flexible or articulated objects. We consider articulated objects as suitable models for flexible objects, and a number of challenges are considered: (i) the representation of articulated objects using a standard OWL-DL formalism, and the definition of suitable planning models using PDDL; (ii) the implicit handling of human operators' actions; and (iii) an assessment of how perception assumptions and the related representation schemes impact on planning and execution.

The developed architecture is evaluated on the basis of a number of functional and non-functional requirements: the possibility for the system to implicitly recognize the effects of human actions, the robot's capability of adapting to those actions, and a fast (re-)planning process when needed, just to name the most important ones.

Current work is focused on two aspects: on the one hand, a more detailed yet computationally efficient representation of articulated objects and the corresponding planning models; on the other hand, a systematic evaluation of the architecture with human volunteers.

References

  • (1) J. Krüger, T. Lien, A. Verl, Cooperation of humans and machines in the assembly lines, CIRP Annals - Manufacturing Technology 58 (2) (2009) 628–646.
  • (2) C. Heyer, Human-robot interaction and future industrial robotics applications, in: Proceedings of the 2010 IEEE-RSJ International Conference on Intelligent Robots and Systems (IROS 2010), Taipei, Taiwan, 2010.
  • (3) P. Tsarouchi, A. Matthaiakis, S. Makris, G. Chryssolouris, Human-robot interaction review and challenges on task planning and programming, International Journal of Computer Integrated Manufacturing 29 (8) (2016) 916–931.
  • (4) J. Baraglia, M. Cakmak, Y. Nagai, R. Rao, M. Asada, Initiative in robot assistance during collaborative task execution, in: Proceedings of the 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016), Christchurch, New Zealand, 2016.
  • (5) T. Munzer, Y. Mollard, M. Lopes, Impact of robot initiative on human-robot collaboration, in: Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2017), Vienna, Austria, 2017.
  • (6) S. Denei, F. Mastrogiovanni, G. Cannata, Towards the creation of tactile maps for robots and their use in robot contact motion control, Robotics and Autonomous Systems 63 (3) (2015) 293–308.
  • (7) S. Haddadin, E. Croft, Physical human-robot interaction, in: B. Siciliano and O. Khatib (Eds.) Springer Handbook of Robotics, Springer International Publishing, 2016.
  • (8) T. Chakraborti, S. Kambhampati, M. Scheutz, Y. Zhang, AI challenges in human-robot cognitive teaming, arXiv preprint arXiv:1707.04775.
  • (9) M. Goodrich, A. Schultz, Human-robot interaction: a survey, Foundations and Trends in Human-Computer Interaction 1 (3) (2007) 203–275.
  • (10) L. Johannsmeier, S. Haddadin, A hierarchical human-robot interaction-planning framework for task allocation in collaborative industrial assembly processes, IEEE Robotics and Automation Letters 2 (1) (2017) 21–48.
  • (11) K. Darvish, F. Wanderlingh, B. Bruno, E. Simetti, F. Mastrogiovanni, G. Casalino, Flexible human-robot cooperation models for assisted shop-floor tasks, arXiv preprint arXiv:1707.02591.
  • (12) K. Dautenhahn, Socially intelligent robots: dimensions of human-robot interaction, Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences 362 (1480) (2007) 679–704.
  • (13) M. Prewett, R. Johnson, K. Saboe, L. Elliott, M. Coovert, Managing workload in human-robot interaction: a review of empirical studies, Computers in Human Behaviour 26 (5) (2010) 840–856.
  • (14) A. St. Clair, M. Matarić, How robot verbal feedback can improve team performance in human-robot task collaborations, in: Proceedings of the 2015 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2015), Portland, USA, 2015.
  • (15) A. Roncone, O. Mangin, B. Scassellati, Transparent role assignment and task allocation in human robot collaboration, in: Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA 2017), Singapore, 2017.
  • (16) H. Liu, L. Wang, Gesture recognition for human-robot collaboration: a review, International Journal of Industrial Ergonomics - in press.
  • (17) S. Srivastava, E. Fang, L. Riano, R. Chitnis, S. Russell, P. Abbeel, Combined task and motion planning through an extensible planner-independent interface layer, in: Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA 2014), Hong Kong, China, 2014.
  • (18) D. Henrich, H. Worn, Robot manipulation of deformable objects, Advanced Manufacturing, Springer-Verlag, London, Berlin, Heidelberg, 2000.
  • (19) M. Saadat, P. Nan, Industrial applications of automatic manipulation of flexible materials, Industrial Robot: an International Journal 29 (5) (2002) 434–442.
  • (20) C. Smith, Y. Karayiannidis, L. Nalpantidis, X. Gratal, P. Qi, D. Dimarogonas, D. Kragic, Dual arm manipulation: a survey, Robotics and Autonomous Systems 60 (10) (2012) 1340–1353.
  • (21) P. Jimenez, Survey on model-based manipulation planning of deformable objects, Robotics and Computer-Integrated Manufacturing 28 (2) (2012) 154–163.
  • (22) H. Wakamatsu, E. Arai, S. Hirai, Knotting and unknotting manipulation of deformable linear objects, International Journal of Robotic Research 25 (4) (2006) 371–395.
  • (23) A. Nair, D. Chen, P. Agrawal, P. Isola, P. Abbeel, J. Malik, S. Levine, Combining self-supervised learning and imitation for vision-based rope manipulation, in: Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA 2017), Singapore, 2017.

  • (24) Y. Yamakawa, A. Namiki, M. Ishikawa, Dynamic high-speed knotting of a rope by a manipulator, International Journal of Advanced Robotic Systems 10 (2013) 1–12.
  • (25) L. Bodenhagen, A. Fugl, A. Jordt, M. Willatzen, K. Andersen, M. Olsen, R. Koch, H. Petersen, N. Kruger, An adaptable robot vision system performing manipulation actions with flexible objects, IEEE Transactions on Automation Science and Engineering 11 (3) (2014) 749–765.
  • (26) J. Schulman, J. Ho, C. Lee, P. Abbeel, Learning from demonstrations through the use of non-rigid registration, in: M. Inaba and P. Corke (Eds.) Robotics Research, Vol. 114 of Springer Tracts in Advanced Robotics, Springer International Publishing, Lausanne, Switzerland, 2016.
  • (27) S. Miller, J. van den Berg, M. Fritz, T. Darrell, K. Goldberg, P. Abbeel, A geometric approach to robotic laundry folding, International Journal of Robotic Research 31 (2) (2011) 249–267.
  • (28) D. Berenson, Manipulation of deformable objects without modelling and simulating deformation, in: Proceedings of the 2013 IEEE-RSJ International Conference on Intelligent Robots and Systems (IROS 2013), Tokyo, Japan, 2013.
  • (29) S. Ahmadzadeh, A. Paikan, F. Mastrogiovanni, L. Natale, P. Kormushev, D. Caldwell, Learning symbolic representations of actions from human demonstrations, in: Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA 2015), Seattle, WA, USA, 2015.
  • (30) A. Howard, G. Bekey, Recursive learning for deformable object manipulation, in: Proceedings of the 1997 International Conference on Advanced Robotics (ICAR 1997), Monterey, CA, USA, 1997.
  • (31) F. Mastrogiovanni, A. Sgorbissa, R. Zaccaria, A system for hierarchical planning in service mobile robotics, in: Proceedings of the 8th Conference on Intelligent Autonomous Systems (IAS-8), Amsterdam, The Netherlands, 2004.
  • (32) B. Frank, R. Schmedding, C. Stachniss, M. Teschner, W. Burgard, Learning the elasticity parameters of deformable objects with a manipulation robot, in: Proceedings of the 2010 IEEE-RSJ International Conference on Intelligent Robots and Systems (IROS 2010), Taipei, Taiwan, 2010.
  • (33) C. Elbrechter, R. Haschke, H. Ritter, Bi-manual robotic paper manipulation based on real-time marker tracking and physical modelling, in: Proceedings of the 2011 IEEE-RSJ International Conference on Intelligent Robots and Systems (IROS 2011), San Francisco, CA, USA, 2011.
  • (34) C. Elbrechter, R. Haschke, H. Ritter, Folding paper with anthropomorphic robot hands using real-time physics-based modeling, in: Proceedings of the 2012 IEEE-RAS International Conference on Humanoid Robotics (HUMANOIDS 2012), Osaka, Japan, 2012.
  • (35) M. Fox, D. Long, PDDL 2.1: an extension to PDDL for expressing temporal planning domains, Journal of Artificial Intelligence Research 20 (2003) 61–124.

  • (36) N. Lipovetzky, H. Geffner, Searching for plans with carefully designed probes, in: Proceedings of the 2011 International Conference on Automated Planning and Scheduling (ICAPS 2011), Freiburg, Germany, 2011.
  • (37) J. Rintanen, Madagascar: scalable planning with SAT, in: Proceedings of the 2014 International Planning Competition (IPC 2014), Portsmouth, NH, USA, 2014.
  • (38) M. Fox, R. Howey, D. Long, Validating plans in the context of processes and exogenous events, in: Proceedings of the 20th National Conference on Artificial Intelligence (AAAI 2005), Pittsburgh, Pennsylvania, USA, 2005.
  • (39) M. Cashmore, M. Fox, D. Long, D. Magazzeni, B. Ridder, A. Carrera, N. Palomeras, N. Hurtos, M. Carreras, ROSPlan: planning in the Robot Operating System, in: Proceedings of the 2015 International Conference on Automated Planning and Scheduling (ICAPS 2015), Jerusalem, Israel, 2015.
  • (40) I. Sucan, S. Chitta, MoveIt!, http://moveit.ros.org.
  • (41) M. Gombolay, R. Wilcox, J. Shah, Fast scheduling of multi-robot teams with temporospatial constraints, in: Proceedings of Robotics: Science and Systems IX (RSS 2013), Berlin, Germany, 2013.
  • (42) M. Gombolay, R. Gutierrez, G. Starla, J. Shah, Decision making, authority, team efficiency and human worker satisfaction in mixed human-robot teams, in: Proceedings of Robotics: Science and Systems X (RSS 2014), Berkeley, USA, 2014.
  • (43) M. Gombolay, C. Huang, J. Shah, Coordination of human-robot teaming with human task preferences, in: Proceedings of the 2015 AAAI Fall Symposium Series, Palo Alto, CA, USA, 2015.
  • (44) M. Cirillo, L. Karlsson, A. Saffiotti, Human-aware task planning, ACM Transactions on Intelligent Systems and Technology 1 (2) (2010) 1–26.
  • (45) R. Wilcox, S. Nikolaidis, J. Shah, Optimisation of temporal dynamics for adaptive human-robot interaction in assembly manufacturing, in: Proceedings of Robotics: Science and Systems VIII (RSS 2012), Sydney, Australia, 2012.
  • (46) E. Sebastiani, R. Lallement, R. Alami, L. Iocchi, Dealing with on-line human-robot negotiations in hierarchical agent-based task planner, in: Proceedings of the 2017 International Conference on Automated Planning and Scheduling (ICAPS 2017), Pittsburgh, USA, 2017.
  • (47) A. Agostini, C. Torras, F. Wörgötter, Integrating task planning and interactive learning for robots to work in human environments, in: Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI-11), Barcelona, Spain, 2011.
  • (48) H. Koppula, A. Jain, A. Saxena, Anticipatory planning for human-robot teams, in: Proceedings of Robotics: Science and Systems IX (RSS 2013), Berlin, Germany, 2013.
  • (49) E. Karpas, S. Levine, P. Yu, B. Williams, Robust execution of plans for human-robot teams, in: Proceedings of the 2015 International Conference on Automated Planning and Scheduling (ICAPS 2015), Jerusalem, Israel, 2015.
  • (50) C. Liu, J. Fisac, Goal inference improves objective and perceived performance in human-robot collaboration, in: Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2015), Istanbul, Turkey, 2015.
  • (51) W. Kwon, I. Suh, Planning of proactive behaviours for human-robot cooperative tasks under uncertainty, Knowledge-based Systems 72 (2014) 81–95.
  • (52) R. Caccavale, A. Finzi, Flexible task execution and attentional regulations in human-robot interaction, IEEE Transactions on Cognitive and Developmental Systems 9 (1) (2017) 68–79.
  • (53) S. Harnad, The symbol grounding problem, Physica D 42 (1990) 335–346.
  • (54) M. Krotzsch, F. Simancik, I. Horrocks, A description logic primer, arXiv:1201.4089v3.
  • (55) D. McDermott, PDDL – the Planning Domain Definition Language, Tech. rep., Yale (1998).
  • (56) F. Baader, D. Calvanese, D. McGuinness, D. Nardi, P. Patel-Schneider, The Description Logic handbook: theory, implementation, and applications, Cambridge University Press, New York, NY, USA, 2003.
  • (57) M. Smith, C. Welty, D. McGuinness, OWL web ontology language guide, https://www.w3.org/TR/owl-guide/.
  • (58) I. Horrocks, P. Patel-Schneider, S. Bechhofer, D. Tsarkov, OWL rules: a proposal and prototype implementation, Journal of Web Semantics 3 (1) (2005) 23–40.
  • (59) M. Fox, D. Long, Modelling mixed discrete-continuous domains for planning, Journal of Artificial Intelligence Research 27 (2006) 235–297.
  • (60) A. Capitanelli, M. Maratea, F. Mastrogiovanni, M. Vallati, Automated planning techniques for robot manipulation tasks involving articulated objects, in: Proceedings of the 2017 Conference of the Italian Association for Artificial Intelligence (AIxIA 2017), Bari, Italy, 2017.
  • (61) F. Mastrogiovanni, A. Paikan, A. Sgorbissa, Semantic-aware real-time scheduling in robotics, IEEE Transactions on Robotics 29 (1) (2013) 118–135.