Representing Robot Task Plans as Robust Logical-Dynamical Systems

08/05/2019 ∙ by Chris Paxton, et al. ∙ Nvidia

It is difficult to create robust, reusable, and reactive behaviors for robots that can be easily extended and combined. Frameworks such as Behavior Trees are flexible but difficult to characterize, especially when designing reactions and recovery behaviors to consistently converge to a desired goal condition. We propose a framework which we call Robust Logical-Dynamical Systems (RLDS), which combines the advantages of task representations like behavior trees with theoretical guarantees on performance. RLDS can also be constructed automatically from simple sequential task plans and will still achieve robust, reactive behavior in dynamic real-world environments. In this work, we describe both our proposed framework and a case study on a simple household manipulation task, with examples for how specific pieces can be implemented to achieve robust behavior. Finally, we show how in the context of these manipulation tasks, a combination of an RLDS with planning can achieve better results under adversarial conditions.


I Introduction

For robots to solve real problems in unstructured dynamic settings, they must be able to intelligently execute tasks that consist of many interdependent steps. In addition, they must be able to react to changing circumstances, falling back or retrying steps when needed, to ensure that they consistently arrive at the correct goal state despite perturbations, environmental changes, or uncertainty resolution. This means that real-world systems will have a complex interacting set of skills that must be used at the appropriate time to achieve a goal.

The most common way to build complex behavior is via either manual scripting or hierarchical finite state machines [1]. These systems, unfortunately, quickly grow in complexity and become difficult to expand and maintain. Fallback or recovery behaviors must be programmed manually, requiring substantial engineering work to create truly robust and reactive behavior. Behavior Trees (BTs) address many of these issues around reactivity and ease of use [21, 22, 7]. However, BTs have complex internal structure that may produce unexpected effects, and the resulting behavior is often not transparent or easily verifiable. Program flow in a BT is determined by a number of complex operations such as sequence, selector, and decorator nodes [21, 7].

We propose an alternative system that innately enables fallback behaviors, resulting in quick reactions to changing perceptual inputs without the typical explosion of interconnections that one would observe when designing hierarchical FSMs. Our Robust Logical-Dynamical Systems (RLDS) abstract out the internal structure of these models. More broadly, RLDS are a type of reactive program which can be automatically constructed from a list of operators with specified preconditions and effects. Logical constraints are propagated through the RLDS after construction to ensure eventual convergence to a goal state.

Fig. 1: A Robust Logical-Dynamical System (RLDS) can represent reactive and robust behaviors. Here, our system successfully opens a drawer and places a can inside it despite various unforeseen challenges including sensor noise and interference from a human.

Take, for example, the simple task of putting away a can of spam in a drawer (Fig. 1). Our robot must open the drawer, pick up the can, place it in the drawer, and close the drawer. In an FSM, we would see that nearly every one of these steps can connect to every other step under the right conditions. If the can falls out of the robot’s hand, it must attempt to re-grasp it. If someone walks by and slams the drawer shut, the robot must put the can down and re-open it. If that same person walks by and puts the can away, the robot is finished, and doesn’t need to open the drawer. All of these conditions make manual task creation fairly complicated, and also make it much harder to compose different tasks and sub-tasks in an intelligent way. We see many of the same issues in a Behavior Tree; while composition is easier, specifying internal structure and preconditions on each state is still fairly difficult.

Robust Logical-Dynamical Systems (RLDS) are often substantially simpler for a large class of commonplace reactive recovery behaviors. Rather than specifying the internal structure of a task, a plan simply lists operators – high-level task states like “open the drawer” – in order of importance (usually as measured by either sequential proximity to the goal condition or priority as an evasive reaction). Each operator is associated with sets of preconditions, run conditions, and effects, which govern when it is allowed to be executed. RLDS work well with smooth, reactive real-time motion generation tools such as RMPs [5]; the underlying continuous behavior of each operator is designed to drive the system toward state transitions, which enables us to concretely prove guarantees on convergence of the combined system dynamics to the goal.

Our contributions are:

  • We present RLDS as a system for reliable task execution which implicitly creates robust recovery behaviors.

  • We derive an algorithm to automatically compose multiple RLDS, and show how to use RLDS as a part of a simple task planning algorithm.

  • We prove theoretical convergence and convergence rate results showing that, under mild covering conditions on the enterability of the operators, RLDS’s are guaranteed to converge to the goal condition despite perturbations of bounded probability.

We show our system in action on both a simulated and a real-world household object manipulation task, where the robot needs to construct a plan that will pick up one or more objects and place them in a drawer, as shown in Fig. 1.

II Related Work

McGann et al. [18] divide approaches into three categories: reactive systems, three-layer approaches, and planning-centric approaches. Reactive approaches involve encoding all behavior as a part of the underlying Finite State Machine (FSM) or Behavior Tree (BT). Examples include controller sequencing [3], hybrid automata [8, 9], SMACH [1], and many BT implementations [21, 22, 15, 7]. Three-layer approaches divide between high-level deliberative planning, a sequencer in the middle, and low-level behavior, e.g. [12]. Planning-centric approaches include [20, 11, 2, 19]. Often, for real-world systems, these still rely heavily on manually defined behaviors and hierarchies to achieve robust, reactive behavior, which raises the question of how best to define such behaviors and combine them.

Behavior Trees (BTs) have proven to be a powerful framework for specifying complex behaviors [7]. They have been used in medical [15] and industrial applications [21]. They are user-friendly [22], with strong analogies to programming languages in their structure [6]. One of the chief advantages of BTs is that all program behavior is determined by an internal logical state, which means that trees can be easily combined with one another to get robust behavior without a large amount of manual tuning. This is an important characteristic we retain in our system as well: conditions are continuously evaluated to determine which actions should be executed [6]. However, the RLDS completely abstracts out internal details of the task plan, allowing us to specify problems purely in terms of goals and sets of operators as in PDDL [14].

Linear Temporal Logic (LTL) is similarly a way of specifying complex task constraints [24]. Creating behaviors which satisfy these constraints can be difficult, however.

Another common way of specifying planning problems is via the Planning Domain Definition Language [14], which has a very similar structure to our own problem definitions. BTs commonly use preconditions similar to those placed on our operators to achieve complex behavior [21, 7], and have been extended in the past to add PDDL-style preconditions and effects for the purposes of planning [25].

We also see an analogy to Hierarchical Task Networks [13], which are a framework with the same representative power as PDDL/STRIPS but with hierarchical decomposition used to decrease planning complexity. These are one of the most commonly used systems in practice, and we also use this sort of hierarchical decomposition to enable code reuse and to simplify the specification of correct sequencing constraints.

Task and Motion Planning (TAMP), e.g. [27], is a field that concerns itself with reasoning about both motion and logical constraints. TAMP approaches can solve very complex problems, but aren’t designed to generate reactive behavior and aren’t concerned with plan composition or plan extensibility. They generally require online re-planning for reactivity, but their computational complexity can make that intractable. We aim to avoid replanning in all but the most egregious cases, and instead rely on activating a succession of reactive low-level controllers at the correct times that could in principle include some planning. In the future, each RLDS could be constructed from a plan generated by a more computationally intensive TAMP solver; in that context it can be viewed as a way to robustly execute task plans leveraging reactions and systematic plan-operator re-entry with provable goal convergence guarantees.

Also of note, this form of hierarchical, reactive behavior has recently shown up in machine learning methods as well. Neural Task Programming, for example, hierarchically evaluates policies in order to reproduce a task performance in a new environment from a single video demonstration [29]. In the future, conditions and policies for an RLDS could be learned.

III Overview of Approach

Our goal is to describe a sequence of tasks in such a way that our system can automatically generate a robust reactive behavior to handle its execution. Intuitively, each task should have an associated logical condition describing whether it can be run (its runnable condition), and its goal should be to push the system toward the runnable condition of the next state. Moreover, these logical conditions should be Markov in the sense that we can classify whether a given state can be run independent of whether we know the history of states that have been run before, and they should (ideally) have a covering property meaning that we can always enter and run at least one of the tasks.

The Markov property enables the system to run any task whose runnable condition is met independent of whether it enters into that logical state via a controlled transition from successful execution of the preceding task or through some random perturbation from either external factors or the execution of reactions. Additionally, it enables us to implement a form of priority on the tasks. Multiple tasks may be runnable simultaneously, so we establish a convention that the most downstream of those takes precedence. If the entire sequence drives the system toward the goal, this convention implements a form of shortcutting since the most downstream task is closest to the goal. However, we may also define additional tasks downstream even from the goal state which act as reactions or evasive maneuvers. Those are always run with highest priority if needed.

RLDS’s are agnostic to the specifics of the underlying behavior generation technique. In many cases, we can choose from a collection of low-level policies (e.g. implemented as RMPs), use specialized local feedback policies trained by Reinforcement Learning (RL) or Imitation Learning (IL), or even use guidance from motion plans computed by a TAMP solver used to generate the plan on which the RLDS is constructed. We just need to be able to characterize the behavior of the underlying policy to bound the probability of the policy resulting in a logical state transition, as described in the next section.

Unlike some prior work [27], we don’t worry about the “motion planning” part of the problem. The idea is that we have a lot of specific policies that can do different things, and need to intelligently switch between them to get strong behavior. In theory, each of these policies can be trained on a very narrow set of conditions, e.g. opening or closing a drawer, opening a can, turning a knob, etc. We also assume that our low-level control policies have their own convergence guarantees.

In general, plans are most easily described as a chain of very general logical operators, which doesn’t innately describe the full Markov logical state summarizing the result of having executed part of the chain. Sec. V details how to transform a plan into an RLDS using a simple algorithm that propagates conditions backward through each operator in the plan, starting from the goal, to generate a least-constraining set of logical conditions that fully describe the required Markov logical states.

IV Formulation

Let x ∈ 𝒳 denote the continuous state of the robot and world observable by the perception system, and denote the logical state by ℓ ∈ ℒ. We can represent ℓ as a vector of binary values giving the truth values of the set of all groundings of logical predicates p₁, …, p_m. We denote the logical predicates generically as p_i(t₁, …, t_k), where t₁, …, t_k are their associated terms. I.e., given a continuous state x, the predicate takes on a truth value for each valid combination of terms (the predicate grounding); collecting those truth values up across all grounded predicates gives the logical state ℓ.

For example, one predicate used in our manipulation case study from Sec. VI is is_attached_to(robot_part, object). The terms robot_part and object might be grounded by end_effector and sugar_box, respectively, giving the grounded predicate is_attached_to(end_effector, sugar_box) a particular binary truth value. This grounded predicate evaluates to True when the end-effector is holding the sugar box and to False otherwise.

We denote a logical condition by L. Each logical condition can be represented as a logical function of the available predicates. These logical conditions in practice are often implemented simply as a conjunction of a list of predicates. For each logical condition, there is a specific set of continuous states and associated logical states which make it true. Denoting the full joint continuous-logical state space (referred to typically as simply the logical state space) as 𝒮 = 𝒳 × ℒ, we often also overload the notation L to denote the set of all logical state vectors that render the logical condition true: L = {ℓ ∈ ℒ : L(ℓ) = True}. For instance, if we have two logical conditions L₁ and L₂ such that L₁ ⇒ L₂, we can say L₁ ⊆ L₂.
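To make the grounding concrete, consider a minimal sketch in Python; the predicate, the continuous-state layout, and all names here are illustrative assumptions for this example, not the implementation used in this work:

```python
# Illustrative sketch: grounding predicates into a binary logical state.
# The predicate, state layout, and names are assumptions for this example.
from itertools import product

def is_attached_to(x, robot_part, obj):
    # True when the named robot part is currently holding the object.
    return x["attached"].get(robot_part) == obj

def ground_logical_state(x, robot_parts, objects):
    """Evaluate every grounding of every predicate over the continuous
    state x, collecting truth values into the logical state vector."""
    state = {}
    for part, obj in product(robot_parts, objects):
        state[("is_attached_to", part, obj)] = is_attached_to(x, part, obj)
    return state

x = {"attached": {"end_effector": "sugar_box"}}
l = ground_logical_state(x, ["end_effector"], ["sugar_box", "soup_can"])
print(l[("is_attached_to", "end_effector", "sugar_box")])  # True
```

Each key of the returned dictionary is one predicate grounding, and the values together form the binary logical state vector ℓ.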

IV-A Robust Logical-Dynamical Chains

The simplest form of an RLDS is a Robust Logical-Dynamical Chain (RLDC); all RLDS’s discussed in this paper (including those produced by hierarchical combination or automatically generated by a planner) can be reduced to a chain. This section presents a formal mathematical construction of the RLDC which is used in the analysis of Section IV-B.

The fundamental element of an RLDC is termed an operator o. We can view each operator as a tuple o = (L_e, L_r, L_eff, π) of logical conditions and an associated policy π. L_e is the entry condition defining whether the operator can be entered (if L_e is true, we say the operator is “enterable”), L_r is the run condition defining whether the operator can continue to be run if it has already been entered (if true, it is said to be “runnable”), and L_eff defines the expected logical condition that results from running the operator. The distinction between L_e and L_r can be used to implement robust entry into a state to prevent oscillations resulting from stochastic dynamics, allowing the system, for instance, to optionally attempt a maneuver before resetting. In many practical cases L_e = L_r.

The policy is defined as π : 𝒳 → 𝒰, where 𝒰 is the set of control actions. Each π pushes the system from L_r toward satisfying the operator’s effects L_eff. In conjunction with the system’s logical state dynamics, which we require to be Markov in the standard sense (note that the underlying continuous state space typically already has Markov dynamics, but this requirement dictates that the logical state be expressive enough to also maintain the Markov property), the policy creates a natural system flow through the logical state space which we can characterize concretely in terms of L_eff.

We say a state is feasible under operator o if it satisfies the runnable condition L_r, and we call a sequence of feasible states generated by the underlying policy π a feasible sequence. Likewise, we say a state sequence terminates if either an infeasible state is reached (resulting in an infeasible sequence) or if L_eff becomes satisfied. Note that there are potentially many policies that can implement the same logical behavior; in many ways the logical behavior is agnostic to policy choice aside from differences in overall convergence properties characterized below.

We are now equipped to define the RLDC. An RLDC is a sequence of operators o₁, …, o_n for which the following local chaining properties hold between pairs of operators:

L_eff^i ⇒ L_e^(i+1)  and  L_e^(i+1) ⇒ L_r^(i+1)     (1)

These properties state that the effect of operator o_i implies enterability into the next operator o_(i+1), which in turn implies runnability of that operator. Additionally, we call an RLDC complete if L_eff^n ⇒ L_g, where L_g denotes the goal condition. Note that an operator with an associated policy can be viewed as a task, so an RLDC can be seen as a sequence of tasks.
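When conditions are conjunctions of predicates modeled as sets, the local chaining check reduces to subset tests. The following sketch illustrates this; the Operator tuple and the example predicates are assumptions for illustration, not the paper’s data structures:

```python
# Illustrative sketch: operators as (entry, run, effects) predicate sets,
# with the local chaining property of Eq. 1 reduced to subset checks.
from collections import namedtuple

Operator = namedtuple("Operator", ["name", "entry", "run", "effects"])

def implies(c1, c2):
    # For conjunctions modeled as sets, c1 => c2 iff every conjunct
    # required by c2 also appears in c1.
    return c2 <= c1

def is_chain(ops):
    """Check L_eff^i => L_e^(i+1) and L_e^(i+1) => L_r^(i+1) for all
    consecutive operator pairs."""
    return all(
        implies(a.effects, b.entry) and implies(b.entry, b.run)
        for a, b in zip(ops, ops[1:])
    )

approach = Operator("approach", {"obj_visible"}, {"obj_visible"},
                    {"obj_visible", "in_approach_region"})
cage = Operator("cage", {"obj_visible", "in_approach_region"},
                {"in_approach_region"}, {"around_obj"})
print(is_chain([approach, cage]))  # True
```

A pair of operators whose effects do not cover the next operator’s entry set would fail this check, flagging a broken chain before execution.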

The system always enters a downstream operator if possible; additionally, if the logical state becomes infeasible under the current operator for any reason, the system transitions into the most downstream enterable operator (which usually ends up being an upstream operator). This gives an implicit priority to the operators, with downstream operators (e.g. those closer to the goal) taking precedence.

If each π_i generates a feasible sequence until achieving L_eff^i with probability 1, then the chaining condition of Equation 1 implies trivially that the RLDC converges to the goal in n transitions. In general, however, system perturbations, perceptual uncertainty resolution, or stochastic dynamics commonly produce infeasible sequences resulting in spurious uncontrolled transitions. Section IV-B analyzes the convergence and convergence rate of these chains despite uncontrolled transitions.

IV-B Theoretical Analysis: Convergence of Chains to L_g

Intuitively, the underlying dynamics of each π_i drives the system toward L_eff^i, resulting in successful forward transitions toward the goal. If the likelihood of backward transitions can be bounded, we should be able to prove convergence to the goal. This section makes that observation concrete.

We say π_i induces a controlled transition with probability p_i if with that probability it generates a feasible sequence terminating in L_eff^i satisfied. Similarly, sequences terminating in infeasibility are said to generate uncontrolled transitions.

Theorem 1.

(Convergence) Robust logical-dynamical chains achieve L_g with probability 1 if each logical state induces a controlled transition with probability p_i ≥ p > 0, converging exponentially in the number of uncontrolled transitions k. Moreover, the system takes an expected number of transitions upper bounded by n/pⁿ.

Proof.

For simplicity, we analyze the case where all p_i = p. The probability of never experiencing an uncontrolled transition and reaching the goal in a single run through the chain is bounded below by pⁿ. Similarly, the probability of experiencing exactly k uncontrolled transitions (i.e. not reaching the goal k times) is bounded by qᵏ pⁿ with q = 1 − pⁿ (wherein each uncontrolled transition is all the way back to the start). Therefore, the probability of successfully reaching the goal with at most k uncontrolled transitions is

Σ_{j=0}^{k} qʲ pⁿ = pⁿ (1 − q^{k+1}) / (1 − q) = 1 − q^{k+1}

for q = 1 − pⁿ. This probability converges exponentially to 1 as k → ∞.

Moreover, if the system experiences exactly k uncontrolled transitions before succeeding, the largest number of steps it can take is (k+1)n. Therefore, the expected number of transitions is bounded as

E[transitions] ≤ Σ_{k=0}^{∞} (k+1) n qᵏ (1 − q),

where again q = 1 − pⁿ. Noting that

Σ_{k=0}^{∞} (k+1) qᵏ = 1 / (1 − q)²,

the expectation reduces to

E[transitions] ≤ n(1 − q) / (1 − q)² = n / (1 − q) = n / pⁿ.

As an example, if each transition is successful with probability p = 0.99 and there are n = 10 states, we would expect the number of transitions to be at most n/pⁿ ≈ 11.1. The expected (upper bound) inflation factor is given by 1/pⁿ. For n = 10, we get ≈ 1.11; for n = 20, we get ≈ 1.22.
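The bound is easy to check numerically; the following short sketch is illustrative and not part of the original analysis:

```python
# Numeric check of the convergence-rate bound: with per-operator success
# probability p and chain length n, expected transitions are bounded by
# n / p**n, an inflation factor of 1 / p**n over the ideal n steps.

def inflation_factor(n, p):
    return 1 / p**n

def expected_transitions_bound(n, p):
    return n * inflation_factor(n, p)

print(round(inflation_factor(10, 0.99), 2))            # 1.11
print(round(inflation_factor(20, 0.99), 2))            # 1.22
print(round(expected_transitions_bound(10, 0.99), 1))  # 11.1
```

Even for fairly long chains with reliable operators, the overhead of uncontrolled transitions stays modest, which is the practical content of the theorem.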

IV-C Execution

Alg. 1 describes execution of a Robust Logical-Dynamical Chain. We follow the paradigm used in Behavior Trees [21, 7] and in hierarchical learned models (e.g. [23, 17, 29]), whereby hierarchical decisions are made constantly and in nearly real time. While plan construction takes place offline, we assume that inferring the specific policy to execute occurs at close to real time (e.g. at 10-30 Hz). At every time step, the core of the algorithm checks the conditions of each operator to determine the highest-priority operator whose conditions are met, and assigns this to be the current operator o.

function Execute(Operators o₁, …, o_n, Goal L_g)
     o ← ∅
     while not L_g(ℓ) do
          x ← GetState() // get latest continuous world state
          ℓ ← ComputePredicates(x) // compute predicates on current state
          for i = n, …, 1 do
               if o ≠ o_i and L_e^i(ℓ) then
                    o ← o_i; break
               else if o = o_i and L_r^i(ℓ) then
                    o ← o_i; break
               end if
          end for
          Step(o, x)
     end while
end function
Algorithm 1 Execution of a Robust Logical-Dynamical Chain that achieves goal L_g
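A minimal runnable sketch of this execution loop is shown below, with a toy two-step domain standing in for the real perception and control stack; the Op class, the phase-counter logical state, and all names are illustrative assumptions:

```python
# Runnable sketch of the Alg. 1 execution loop: at every tick, pick the
# most downstream operator that is enterable (or, if current, still
# runnable), then step its policy once. Toy domain, not the real system.

class Op:
    """Toy operator: entry/run conditions over the logical state, plus a
    one-tick policy step (stand-ins for L_e, L_r, and pi)."""
    def __init__(self, entry, run, step):
        self.entry, self.run, self.step = entry, run, step

def execute(ops, goal, get_state, predicates, max_ticks=100):
    current = None
    for _ in range(max_ticks):
        x = get_state()            # latest continuous world state
        l = predicates(x)          # logical state from predicates
        if goal(l):
            return True
        # scan most-downstream first: highest-priority operator wins
        for op in reversed(ops):
            if op is not current and op.entry(l):
                current = op
                break
            if op is current and op.run(l):
                break
        else:
            current = None         # nothing enterable or runnable
            continue
        current.step(x)            # one control step of the policy
    return False

# Toy two-step domain: the logical state is just a phase counter.
world = {"phase": 0}
op_a = Op(lambda l: l == 0, lambda l: l == 0, lambda x: x.update(phase=1))
op_b = Op(lambda l: l == 1, lambda l: l == 1, lambda x: x.update(phase=2))
done = execute([op_a, op_b], lambda l: l == 2,
               lambda: world, lambda x: x["phase"])
print(done)  # True
```

Because the scan starts from the most downstream operator, a perturbation that knocks the logical state backward simply causes an earlier operator to be re-entered on the next tick.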

IV-D Equivalence to Behavior Trees

Fig. 2: Any given RLDS can be modeled as an equivalent Behavior Tree (BT). Likewise, RLDS can be composed in the same manner as BTs. The Behavior Tree implementation here uses some internal nodes with memory (Sequence*) to denote when a node has passed its preconditions. The RLDS checks entry or run conditions as necessary. The differentiation between run conditions and preconditions is very important for robust execution without oscillations between states.

This formulation of the RLDS, while very convenient for concisely programming reactive behaviors for robust execution, can be easily shown to be as expressive as Behavior Trees, a common framework for representing hierarchical tasks that has been increasingly popular both in robotics and in other fields such as video games [21, 7]. In a Behavior Tree, flow of operations is determined by internal logical nodes, computed based on the output of child nodes. We could equivalently express any of these BTs as an RLDS by adding logical predicates to describe internal operations of the tree. Similarly, program flow in an RLDS could be easily described as a BT. Fig. 2 shows a BT (top) and an equivalent RLDS (bottom).

The RLDS has a couple of additional features, including run conditions that automate the state transitions implementing reactions and recovery behaviors; while these are possible in a BT, they are often cumbersome to program and error-prone. Relative to Robust Logical-Dynamical Systems as they have been defined so far, Behavior Trees do have some advantages in that they can be easily composed to produce more complicated behavior. Section V introduces an analogous form of composability for RLDS’s which greatly improves reusability of subcomponents.

V Plan Composition

While Alg. 1 allows us to easily and robustly execute an arbitrary RLDS, it leaves us with one major problem: the conditions must be exhaustively specified, and the solutions are not easy to combine or re-use in new contexts. Fortunately, we have solutions to these problems that show how multiple RLDS can be combined hierarchically or sequentially through different algorithms.

The point of including the effects is that they give us two specific advantages. First, we can detect failures, i.e. when a particular policy was unable to reach its goal condition after some amount of time. Second, we can use them to compose plans and compute implicit conditions guarding when these policies can be entered and executed in order to guarantee that we will eventually arrive at our logical goal L_g.

V-A Hierarchical Composition

Hierarchical composition is a useful capability for any task representation, as it allows for code re-use, increased generalization, and simplified debugging, making programming substantially simpler. One of the chief advantages of BTs over systems like FSMs is how easy it is to compose two BTs [7]: composition is determined by the logical nodes and structure surrounding the tree, which does not itself need to change or add any new connections. The general idea remains the same in an RLDS, though for an RLDS the additional logical structure that governs execution is instead given through the condition sets L_e, L_r, and L_eff.

Given a plan P = o₁, …, o_n, we can create a new operator o_P = (L_e, L_r, L_eff, π_P), where L_eff is the expected cumulative effects of P, and L_e and L_r are both empty. To create specialized behaviors, we can simply add predicates to these two sets. For example, imagine a task where we want to attempt to open a cabinet until that has been accomplished, then pick up an object, as explored in Sec. VI. We can define a re-usable pickup RLDS P_pickup, and then add the cabinet_is_open predicate to L_e and L_r to ensure it is executed at the correct time in the task plan. Then we can specify the full plan as the cabinet-opening operator followed by P_pickup.
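A minimal sketch of this composition is given below, assuming conditions are modeled as predicate sets and ignoring negated effects; the compose helper, the dict layout, and the predicate names are illustrative assumptions, not the paper’s implementation:

```python
# Illustrative sketch of hierarchical composition: collapse a sub-plan
# into a single operator whose effects are the cumulative effects of its
# steps and whose entry/run sets start empty, ready for specialization.

def compose(plan):
    effects = set()
    for op in plan:
        effects |= op["effects"]   # accumulate expected effects
    return {"entry": set(), "run": set(), "effects": effects, "plan": plan}

pickup = compose([
    {"effects": {"in_approach_region"}},
    {"effects": {"around_obj"}},
    {"effects": {"is_attached_to"}},
])
# Specialize: require the cabinet to be open before (and while) picking up.
pickup["entry"].add("cabinet_is_open")
pickup["run"].add("cabinet_is_open")
```

The composed operator can then be dropped into a larger plan exactly like a primitive one, with the added predicates gating when it runs.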

V-B Sequential Composition

The logical sets corresponding to each of our different operators are not guaranteed to match up to enable them to be arbitrarily sequenced to execute tasks – in fact, they most likely will not. General-purpose operators have conditions that will almost always be met, leading to oscillations in task execution and a failure to reach a goal state.

In practice, however, there are certain implicit conditions added to our operators by the task plan and the goal. In order to reach that goal, we need to enforce that each operator actually takes us into the precondition space of the remaining sequence of actions. The implicit conditions are the set of predicates that are set by preceding actions then used by future actions or are a part of the goal. This is enough to restrict the set of logical state trajectories to just those that allow for the task to be accomplished.

For example, take a subtask , such as “pick up the soup can.” This includes reactive steps to move to a pre-grasp, cage the object, close the gripper, and lift the object up. In order to combine it with the full plan we need to add the necessary boundary conditions that differentiate it from the other steps in the plan, such as the fact that the drawer should be open before we pick up an object.

These conditions can be computed by a simple recursive algorithm that works back from L_g to compute implicit conditions defining the necessary execution order. The core idea is that we want to find predicates that are required later in the plan, either by the goal L_g or by some subsequent operator. We propagate the set of implicitly required conditions back from the end of the plan, removing entries from the set when they are expected results of an operator, i.e. when they appear in L_eff^i, and adding more entries as we see new required preconditions of subsequent operators. In addition, we can create the most general set of implicit conditions by only adding implicit conditions when they are effects of a prior operator in the plan.

function GetImplicitConditions(P, L_g)
     given: Plan P = o₁, …, o_n, goal L_g
     // backwards pass: compute implicit conditions
     C ← L_g
     for i = n, …, 1 do
          I_i ← C ∖ L_eff^i,  C ← I_i ∪ L_e^i
     end for
     return {I₁, …, I_n}
end function
Algorithm 2 Computation of Implicit Conditions

The algorithm for computing the set of implicit conditions {I_i} on plan execution is given in Alg. 2. Once {I_i} has been computed, we can simply state that for all i, L_e^i ← L_e^i ∧ I_i and L_r^i ← L_r^i ∧ I_i.
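The backwards pass can be sketched over set-valued conditions as follows; this version omits the refinement that restricts added conditions to effects of prior operators, and all structures and predicate names are illustrative assumptions:

```python
# Illustrative sketch of the backwards pass: walking back from the goal,
# drop a needed predicate once the operator that produces it is reached,
# and add that operator's own entry predicates to the needed set.

def get_implicit_conditions(plan, goal):
    n = len(plan)
    implicit = [set()] * n
    needed = set(goal)                      # predicates required later
    for i in range(n - 1, -1, -1):
        implicit[i] = needed - plan[i]["effects"]
        needed = implicit[i] | plan[i]["entry"]
    return implicit

plan = [
    {"entry": set(),           "effects": {"drawer_open"}},   # open drawer
    {"entry": {"drawer_open"}, "effects": {"holding_can"}},   # pick can
    {"entry": {"holding_can"}, "effects": {"can_in_drawer"}}, # place can
]
goal = {"can_in_drawer", "drawer_open"}
print(get_implicit_conditions(plan, goal))
# [set(), {'drawer_open'}, {'drawer_open'}]
```

In this toy plan, the pass correctly infers that the drawer must remain open during the pick and place steps, without anyone having written that condition on those operators by hand.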

V-C Planning

Finally, we discuss how these two components can be combined to automatically generate task plans for execution. Since a set of preconditions, effects, and optionally a hierarchical decomposition is given for any particular RLDS domain, we could use any of a variety of STRIPS-style planners [10] to solve the problem. In addition, an RLDS can easily be phrased as a PDDL planning domain with a goal condition; the run-condition set can generally be ignored when planning.

For our purposes, we use a simple greedy algorithm that is effective in generating solutions in our domain. Given a node in a search tree defined by the logical state ℓ with depth d, we apply all possible operators and search according to a simple heuristic h(ℓ, d), where:

h(ℓ, d) = d + |{p ∈ L_g : p unsatisfied in ℓ}|

to encourage the planner to quickly find short paths. The complete planning algorithm is given in Alg. 3. Here, the Update function looks up the search node corresponding to logical state ℓ and updates the back-pointer to the best parent node, as per A* search, and the Backup operator takes a logical state and finds its parents from the tree search.

function PlanAndExecute(ℓ₀, L_g, O)
     Given: initial state ℓ₀, goal L_g, operators O
     q ← PriorityQueue()
     Add ℓ₀ to q
     while q ≠ ∅ do
          ℓ ← pop(q) // get lowest cost logical state
          if L_g(ℓ) then
               break
          end if
          for o ∈ O do
               if L_e(ℓ) then
                    ℓ′ ← Apply(ℓ, o) // apply effects from operator
                    if ℓ′ already in search tree then
                         UpdateNode(ℓ′, parent=ℓ)
                    else
                         Push(q, ℓ′) // add to queue
                    end if
               end if
          end for
     end while
     P ← Backup(ℓ)
     {I_i} ← GetImplicitConditions(P, L_g)
     Execute(P, L_g)
end function
Algorithm 3 Simple Task Planning and Execution with Robust Logical-Dynamical Systems

Future work will look at using more advanced planning approaches in place of the simple tree search used here, including the integration of full-scale PDDL planners like Fast Downward [16] for planning of more complex, longer-horizon tasks.

VI Case Study

We explore our proposed task representation in the form of a kitchen manipulation system. This robot is expected to pick up objects and move them around in human environments, which necessitates some amount of reactivity.

VI-A Example Operators

Fig. 3: An example of grasping behavior from our case study. This corresponds to three low-level operators, each with their own preconditions, run conditions, and effects: approach, cage, and grasp. This sub-sequence can be re-used in many different parts of our task plan.

We will discuss three specific operators implemented in our kitchen manipulation case study: approach, cage, and grasp. Each of these operators can be applied to any object defined in the kitchen manipulation domain. Together they constitute an RLDS whose purpose is to grasp an arbitrary object. The grasping behavior is shown in Fig. 3. There are correspondingly three crucial predicates in this RLDS:

  • in_approach_region(robot, obj),

  • around_obj(robot, obj), and

  • is_attached_to(robot, obj).

The in_approach_region(robot, obj) predicate defines whether or not the robot is on track to complete a grasp. It defines a cylindrical volume along a line between a standoff position and a known good grasp. around_obj(robot, obj) defines an error margin around this grasp position, and is_attached_to(robot, obj) is true after we have closed our gripper around an object.
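A cylindrical-volume test of this kind can be sketched geometrically as follows; the radius, the example poses, and the function name are illustrative assumptions, not the parameters used in our system:

```python
# Geometric sketch of an in_approach_region-style predicate: test
# whether the end-effector lies inside a cylinder of radius r around
# the segment from the standoff position to the grasp position.
import numpy as np

def in_approach_region(ee_pos, standoff, grasp, radius=0.03):
    axis = grasp - standoff
    # Parameter of the closest point on the (infinite) line, clamped
    # implicitly by rejecting points beyond the segment's endpoints.
    t = np.dot(ee_pos - standoff, axis) / np.dot(axis, axis)
    if not 0.0 <= t <= 1.0:
        return False
    closest = standoff + t * axis
    return np.linalg.norm(ee_pos - closest) <= radius

standoff = np.array([0.4, 0.0, 0.3])
grasp = np.array([0.5, 0.0, 0.2])
print(in_approach_region(np.array([0.45, 0.01, 0.25]), standoff, grasp))  # True
```

A point slightly off the standoff-to-grasp line still satisfies the predicate, which is what lets the approach policy tolerate small perturbations without an uncontrolled transition.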

The three policies move the system between the regions of continuous state space corresponding to each of these predicates. Ideally, for an RLDS, all policies will themselves be reactive and able to respond very quickly when objects move or change positions. Our policies are implemented as Riemannian Motion Policies [5].

Policies need to be somewhat flexible, so that they can support multiple grasps and multiple object orientations. We implement a lookup table which retrieves a 6-DOF goal pose from a list of poses based on a user-provided cost function. Let the current end-effector pose be T = (R, t) and a candidate goal pose be T_g = (R_g, t_g), with R as the rotation matrix and t as the translation component. Given the set of candidate poses 𝒯, we choose:

T_g* = argmin_{T_g ∈ 𝒯} w_t ‖t − t_g‖ + w_R d(R, R_g)

where the weights w_t and w_R are defined by the problem domain and d(·, ·) is a distance between rotations. We also use this same function to determine if the arm is currently in the approach volume, substituting for t_g the closest point along a line between the standoff position and the grasp position. We define motion policies that can robustly move us to any of the pose offsets specified in the domain.

Vi-B Simulation Experiments

We perform robotic manipulation experiments in a simulated kitchen environment. The goal is to take a random set of objects and place them inside a randomly chosen drawer in the kitchen domain described above. Policies were manually defined to move to any of a number of user-demonstrated grasp locations on the objects.

We use three objects from the YCB dataset [4]: a can of spam, a sugar box, and a tomato soup can. All of these have to be grasped in a different way and placed into one of two drawers in a kitchen cabinet. Initial object positions are randomized within the reachable workspace of the robot.

To show the benefits of reactivity we compare three different execution strategies: linear execution with and without replanning, and our proposed reactive execution algorithm.

Linear execution w/o replanning

We call our planning algorithm once to generate a plan as a sequence of operators, and then compute implicit conditions on the plan as per Alg. 3. This plan can only be executed in order: we check the conditions for the current operator and for the subsequent operator. If the subsequent operator's precondition is met, we move on to the next operator and execute it. If it is not met but the current operator's run condition is, we continue executing the current operator. This does not allow the system to repeat sequences of actions, or to adapt to noisy interactions with the world.

Linear execution w/ replanning

If neither of these conditions is true at the current timestep, we replan and recompute implicit conditions, then execute again, essentially re-running all of Alg. 3. This case allows the robot to adapt, but is more computationally intensive. To make planning more efficient, we limited ourselves to a subset of all available operators.
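The replanning fallback can be sketched as follows. The operator interface used here (precondition, run_condition, effect, policy) is a hypothetical rendering of the conditions discussed above, not the paper's Alg. 3:

```python
def run_with_replanning(plan, state, planner, max_steps=100):
    """Linear execution with replanning. Each operator is assumed to expose
    precondition(state), run_condition(state), effect(state), and
    policy(state) -> new_state (a hypothetical interface for illustration)."""
    i = 0
    for _ in range(max_steps):
        op = plan[i]
        if i == len(plan) - 1 and op.effect(state):
            return True  # final operator achieved its effect: done
        nxt = plan[i + 1] if i + 1 < len(plan) else None
        if nxt is not None and nxt.precondition(state):
            i += 1                       # advance to the next operator
        elif op.run_condition(state):
            state = op.policy(state)     # keep executing the current operator
        else:
            plan, i = planner(state), 0  # neither condition holds: replan
    return False
```

Dropping the final `else` branch (and instead returning failure) yields the no-replanning baseline, which is exactly why that variant cannot recover from noisy interactions.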

Reactive execution

Finally, we demonstrate our results on a reactive plan as per Alg. 3, executed via Alg. 1.
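One way to sketch a single step of reactive execution: run the operator furthest along the plan whose conditions currently hold, so the system can both fall back to earlier steps and jump ahead when the world permits. This is an illustrative simplification of Alg. 1, again with a hypothetical operator interface:

```python
def reactive_step(plan, state, current=None):
    """Return the operator to run this timestep: the one furthest along the
    plan whose conditions hold. A running operator may continue under its
    run condition; switching to a new operator requires its precondition.
    Operators expose precondition(state) and run_condition(state)
    (hypothetical interface, for illustration only)."""
    for op in reversed(plan):
        if (op is current and op.run_condition(state)) or op.precondition(state):
            return op
    return None  # no operator applicable in the current state
```

Scanning the plan back-to-front at every timestep is what lets the system instantly backtrack after a failed grasp or skip steps that have already been satisfied.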

We ran 10 trials, recording duration, success or failure, and number of replanning attempts. We demonstrate reactivity through adversarial interference: 5 seconds after the robot opens a drawer the first time, the trial pauses and the drawer is closed a random amount. Then execution resumes, and the robot must determine how to handle its new environment.

Vi-C Results

Table I shows results from our simulations. We see that the replanning and reactive strategies achieve similar performance on randomly-generated tasks, though replanning may take slightly longer, and that without either replanning or reactivity the system could not handle stochastic interactions between the robot and objects in its environment.

The effect of stochastic simulation is particularly apparent in the poor performance of the non-reactive task model: 60% of the trials fail, even without adversarial interference. As seen in Fig. 4, this often occurs when a grasp ends up in a slightly different pose than expected, meaning that the conditions were not met precisely. In an RLDS, the system automatically transitions back to a known good state (such as movement to a standoff position) and retries the grasp. Another issue is that linear execution does not recognize when execution can jump ahead. The reactive models will instantly adapt if, for example, the can is accidentally knocked into the drawer.

                            Success Rate (%)    Completion Time (s)
Baseline
   Linear Execution               60%
   Linear with Replanning        100%
   Reactive Plan                 100%
With Interference
   Linear Execution                0%                  n/a
   Linear with Replanning        100%
   Reactive Plan                 100%
TABLE I: Success rates and execution times of plans with random goals under different conditions. Reactive plans are more efficient than replanning and are robust to environmental variations.
Fig. 4: Without reactivity, the robot cannot recover from bad approaches or handle the stochasticity of a realistic environment. In this case, the robot attempted to grab a can of spam, but came at a slightly bad angle. In the RLDS, it retries this grasp until it succeeds. In a more traditional system, this results in a failure.

In our planning tests, we replanned 7 times out of 10 trials before adding any interference, and an average of 1.8 times per trial in the second set of experiments. In our simple case, this did not translate to a significant difference in execution time, but in more complex environments replanning would quickly become infeasible. Replanning also results in less natural, less responsive behavior. In the future, we could imagine much more complex tasks that use a mixture of both planning and RLDS execution to achieve superior performance on difficult real-world problems.

Vi-D Real-World Experiments

Fig. 5: In the real world experiments, a human pushed on the robot, shoved its end effector, interfered with the drawer, and moved the object around. The RLDS was able to smoothly adapt to changing circumstances and complete the task regardless.

We also tested our RLDS on a similar task in the real-world version of the kitchen from Fig. 3, shown in Fig. 5. Here, we tested opening a drawer and placing a single YCB object (a can of spam) inside it. This was made more challenging by the use of stochastic perception: the robot needed both to estimate its current position relative to a camera, and to detect and track the object in order to pick it up and place it. This necessitates a more complex task plan.

We executed the task on a Franka Emika Panda 7-DoF manipulator. The world state, including the position of the robot and the state of the cabinet, was estimated using DART [26]. Objects were detected with PoseCNN [28]. We added new operators to retract the arm and to detect and begin tracking the objects. When the arm nears objects, it will “unfix” its position in DART and start jointly optimizing the robot’s base position with the rest of the scene. This meant that robust execution was extremely important, and that the robot sometimes needed to back off from an attempt to grasp the object or drawer and retry the grasp before succeeding.

We also introduced an adversary during execution. A user pushed the drawer closed after the robot opened it, shoved the robot’s hand as it attempted to grasp either the handle or the spam, or pushed the spam out of the way. The RLDS was able to recognize all of these failure cases and backtrack accordingly, retrying the necessary steps in order to complete its task. Experiments are shown in Fig. 1, Fig. 5, and the accompanying video: https://youtu.be/l_8pzcRGztk

Vii Discussion and Conclusions

We proposed a powerful, versatile framework that allows for the creation and composition of reactive robot task plans. An RLDS automatically captures fallback behaviors and the connections between the different robot policies that must be executed, allowing the system to adapt to changes in its environment. RLDS can be constructed automatically, and they are a useful, composable way to build realistic robot behaviors by capturing them as a series of preconditions, run conditions, and effects. In the future, we will use RLDS to describe a wider range of behaviors and look into integration with task and motion planning.

References

  • [1] J. Bohren and S. Cousins (2010) The SMACH high-level executive [ROS news]. IEEE Robotics & Automation Magazine 17 (4), pp. 18–20. Cited by: §I, §II.
  • [2] J. L. Bresina, A. K. Jónsson, P. H. Morris, and K. Rajan (2005) Activity planning for the mars exploration rovers.. In ICAPS, pp. 40–49. Cited by: §II.
  • [3] R. R. Burridge, A. A. Rizzi, and D. E. Koditschek (1999) Sequential composition of dynamically dexterous robot behaviors. The International Journal of Robotics Research 18 (6), pp. 534–555. Cited by: §II.
  • [4] B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar (2017) Yale-CMU-Berkeley dataset for robotic manipulation research. The International Journal of Robotics Research 36 (3), pp. 261–268. Cited by: §VI-B.
  • [5] C. Cheng, M. Mukadam, J. Issac, S. Birchfield, D. Fox, B. Boots, and N. Ratliff (2018) RMPflow: a computational graph for automatic motion policy generation. arXiv preprint arXiv:1811.07049. Cited by: §I, §VI-A.
  • [6] M. Colledanchise and P. Ögren (2016) How behavior trees generalize the teleo-reactive paradigm and and-or-trees. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 424–429. Cited by: §II.
  • [7] M. Colledanchise and P. Ögren (2018) Behavior trees in robotics and AI: an introduction. Cited by: §I, §II, §II, §II, §IV-C, §IV-D, §V-A.
  • [8] M. Egerstedt (2000) Behavior based robotics using hybrid automata. In International Workshop on Hybrid Systems: Computation and Control, pp. 103–116. Cited by: §II.
  • [9] C. Eppner, S. Höfer, R. Jonschkowski, R. Martín-Martín, A. Sieverling, V. Wall, and O. Brock (2016) Lessons from the amazon picking challenge: four aspects of building robotic systems.. In Robotics: Science and Systems, Cited by: §II.
  • [10] R. E. Fikes and N. J. Nilsson (1971) STRIPS: a new approach to the application of theorem proving to problem solving. Artificial intelligence 2 (3-4), pp. 189–208. Cited by: §V-C.
  • [11] J. Frank and A. Jónsson (2003) Constraint-based attribute and interval planning. Constraints 8 (4), pp. 339–364. Cited by: §II.
  • [12] E. Gat, R. P. Bonnasso, R. Murphy, et al. (1998) On three-layer architectures. Artificial intelligence and mobile robots 195, pp. 210. Cited by: §II.
  • [13] I. Georgievski and M. Aiello (2014) An overview of hierarchical task network planning. arXiv preprint arXiv:1403.7426. Cited by: §II.
  • [14] M. Ghallab, A. Howe, C. Knoblock, D. McDermott, A. Ram, M. Veloso, D. Weld, and D. Wilkins (1998) PDDL—the planning domain definition language. AIPS-98 planning committee 3, pp. 14. Cited by: §II, §II.
  • [15] B. Hannaford, R. Bly, I. Humphreys, and M. Whipple (2018) Behavior trees as a representation for medical procedures. arXiv preprint arXiv:1808.08954. Cited by: §II, §II.
  • [16] M. Helmert (2006) The fast downward planning system. Journal of Artificial Intelligence Research 26, pp. 191–246. Cited by: §V-C.
  • [17] D. Huang, S. Nair, D. Xu, Y. Zhu, A. Garg, L. Fei-Fei, S. Savarese, and J. C. Niebles (2018) Neural task graphs: generalizing to unseen tasks from a single video demonstration. arXiv preprint arXiv:1807.03480. Cited by: §IV-C.
  • [18] C. McGann, E. Berger, J. Bohren, S. Chitta, B. Gerkey, S. Glaser, B. Marthi, W. Meeussen, T. Pratkanis, E. Marder-Eppstein, et al. (2009) Model-based, hierarchical control of a mobile manipulation platform. In ICAPS Workshop on Planning and Plan Execution for Real-World Systems, Thessaloniki, Greece, Cited by: §II.
  • [19] C. McGann, F. Py, K. Rajan, H. Thomas, R. Henthorn, and R. McEwen (2008) A deliberative architecture for auv control. In 2008 IEEE International Conference on Robotics and Automation, pp. 1049–1054. Cited by: §II.
  • [20] N. Muscettola, G. A. Dorais, C. Fry, R. Levinson, and C. Plaunt (2002) Idea: planning at the core of autonomous reactive agents. Cited by: §II.
  • [21] C. Paxton, A. Hundt, F. Jonathan, K. Guerin, and G. D. Hager (2017) CoSTAR: instructing collaborative robots with behavior trees and vision. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 564–571. Cited by: §I, §II, §II, §II, §IV-C, §IV-D.
  • [22] C. Paxton, F. Jonathan, A. Hundt, B. Mutlu, and G. D. Hager (2018) Evaluating methods for end-user creation of robot task plans. Intelligent Robots and Systems (IROS), 2018 IEEE International Conference on. Cited by: §I, §II, §II.
  • [23] C. Paxton, V. Raman, G. D. Hager, and M. Kobilarov (2017) Combining neural networks and tree search for task and motion planning in challenging environments. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6059–6066. Cited by: §IV-C.
  • [24] E. Plaku and S. Karaman (2016) Motion planning with temporal-logic specifications: progress and challenges. AI communications 29 (1), pp. 151–162. Cited by: §II.
  • [25] F. Rovida, B. Grossmann, and V. Krüger (2017) Extended behavior trees for quick definition of flexible robotic tasks. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6793–6800. Cited by: §II.
  • [26] T. Schmidt, R. A. Newcombe, and D. Fox (2014) DART: Dense Articulated Real-Time Tracking. In Robotics: Science and Systems, Vol. 2. Cited by: §VI-D.
  • [27] M. Toussaint (2015) Logic-geometric programming: an optimization-based approach to combined task and motion planning. In Twenty-Fourth International Joint Conference on Artificial Intelligence, Cited by: §II, §III.
  • [28] Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox (2017) PoseCNN: a convolutional neural network for 6d object pose estimation in cluttered scenes. arXiv preprint arXiv:1711.00199. Cited by: §VI-D.
  • [29] D. Xu, S. Nair, Y. Zhu, J. Gao, A. Garg, L. Fei-Fei, and S. Savarese (2018) Neural task programming: learning to generalize across hierarchical tasks. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–8. Cited by: §II, §IV-C.