## 1 Introduction

With robots stepping out of industrial cages and into mixed workspaces shared with human beings, human-robot collaboration is becoming increasingly important [Fong2005, Dias2008, Shah2010].
There is unfortunately a history of serious and sometimes tragic failures in human-automation systems due to inadequate interaction between machines and their operators [McRuer1995, Sarter1997, Saffarian2012a]. The most common reasons for this are mode confusion and “automation surprises”, i.e. *misalignments* between what the automated agent is planning to do and what the human believes it is planning to do.

Our goal is to eliminate such misalignments: we want humans to be able to infer what a robot is planning to do during a collaborative task. This is important even beyond safety reasons [Alami2006], because it enables the human to adapt their own actions to the robot and more effectively achieve common goals [tomasello2005understanding, vesper2010minimal, pezzulo2013human].

Traditionally, human-robot collaboration work has focused on inferring human plans and adapting the robot’s plan in response [Nikolaidis2013, Liu2016].
In contrast, here we are interested in the *opposite*—making sure that humans can make these inferences about the robot.
We envision that ultimately these two approaches will need to work in conjunction to achieve fluent collaboration.
Though it is possible for robots to explicitly communicate their plans via language or text, we focus here on what the beginning of the plan itself implies because 1) people make inferences based on actions [Baker2009], and 2) certain scenarios such as the outdoors or busy factories might render explicit channels undesirable.

We introduce a property of a robot plan that we call *$t$-predictability*: a plan is $t$-predictable if a human can infer the robot’s remaining actions in a task from having observed only the first $t$ actions and from knowing the robot’s overall goal in the task.
We make the following contributions based on this property:

#### 1.0.1 An algorithm for generating $t$-predictable plans.

To generate robot plans that are $t$-predictable, we introduce a model for how humans might infer future plans, building on models for action interpretation and plan recognition [Charniak1993, Baker2009]. We then propose a planning algorithm that optimizes for $t$-predictability, i.e. it optimizes plans for how easy it will be for a human to infer the remaining actions from the initial $t$ ones.

Prior work has focused on legibility: generating trajectories that communicate a *set goal* via the initial robot *motion* [Takayama2011, Dragan2014, Szafir2014, Gielniak2011]. This is less useful in task planning situations, where the human already knows the task goal. Instead, $t$-predictability is about communicating the *sequence of future actions* that the robot will take given a *known goal*. The important difference is that these actions are not set *a priori*: optimizing for $t$-predictability means *changing not just the initial, but also the final part* of a plan. It is about finding a final part that is easy to communicate, along with an initial part that communicates it.

Our insight is that initial actions can be used to clarify what future actions will be. We find that in many situations, the robot can select initial actions that might seem somewhat surprising at first, but that make the remaining sequence of actions trivial to anticipate (or “auto-complete”). Fig. 1 shows an example. If the robot acts optimally for the shortest path through all targets, it will go to target 5 first, making target 4 an obvious candidate as a next step, but leaving the future ordering of targets 1, 2, 3 somewhat ambiguous (high probability on multiple sequences). On the other hand, if the robot instead chooses target 1, users can with high probability predict that the remaining sequence will be 2-3-4-5.

#### 1.0.2 An online user study testing that we can generate $t$-predictable plans.

We conduct an online user study in which participants observe a robot’s partial plan, and anticipate the order of the remaining actions. We find that participants are significantly better at anticipating the correct order when the robot is planning for $t$-predictability. We also find a correlation between our model’s prediction of the probability of success, and the participants’ actual success rate.

#### 1.0.3 An in-person user study testing the implications on human-robot collaboration.

Armed with evidence that we can make plans $t$-predictable, we move on to an in-person study that puts participants in a collaborative task with the robot, and we study the advantages of $t$-predictability on objective and subjective collaboration metrics. We find that participants were more effective at the task with a $t$-predictable robot, and preferred working with it over an optimal robot.

## 2 Defining and Optimizing for $t$-Predictability

#### 2.0.1 $t$-Predictability.

We consider a task planning problem from a starting state $s$ with an overall goal $g$ that can be achieved through a series of actions, called a plan, within a finite horizon $n$. Let $\mathcal{A}$ denote the space of all feasible plans of length (up to) $n$ that achieve the goal.

###### Definition 1

The $t$-predictability of a feasible plan $a_{1:n}$ that achieves an overall goal $g$ is the probability of an observer correctly inferring $a_{t+1:n}$ after observing $a_{1:t}$, and knowing the overall goal $g$. Specifically, this is given by $P(a_{t+1:n} \mid s, g, a_{1:t})$.

###### Definition 2

A $t$-predictable planner generates the plan that maximizes $t$-predictability out of all those that achieve the overall goal $g$. That is, a $t$-predictable planner generates the action series $a^*_{1:n}$ such that $a^*_{1:n} = \arg\max_{a_{1:n} \in \mathcal{A}} P(a_{t+1:n} \mid s, g, a_{1:t})$.

This is equivalent, by the general product rule, to:

$$a^*_{1:n} = \arg\max_{a_{1:n} \in \mathcal{A}} \frac{P(a_{1:n} \mid s, g)}{P(a_{1:t} \mid s, g)} \qquad (1)$$

#### 2.0.2 Illustrative Example.

Fig. 2 shows the outcome of optimizing for $t$-predictability in a Traveling Salesman context, with $t = 0$, $1$, and $2$ observed targets, and considers the theoretical $t$-predictability for each plan, with $\tilde{t}$ the number of *actually* observed targets (which may be different from the $t$ assumed by the planner). The $0$-predictable plan (gray, left) is the best when the observer sees no actions, since it is the optimal plan. However, it is no longer the best plan when the observer gets to see the first action: whereas there are multiple low-cost remaining sequences after the first action in the $0$-predictable plan, there is only one low-cost remaining sequence after the first action in the $1$-predictable plan (blue, center). The first action in the $2$-predictable plan (orange, right) seems irrational, but this plan is optimized for when the observer gets to see the first two actions: indeed, after the first two actions, the remaining plan is maximally clear.

#### 2.0.3 Relation to Predictability.

$t$-predictability generalizes predictability [Dragan2013]. For $t = 0$, the $t$-predictability of a plan simply becomes its predictability, that is, the ease with which the entire sequence of actions can be inferred with knowledge of the overall goal $g$, i.e. $P(a_{1:n} \mid s, g)$.

#### 2.0.4 Relation to Legibility.

Legibility [Dragan2014] as applied to task planning would maximize the probability of the goal given the beginning of the plan, i.e. $P(g \mid s, a_{1:t})$. In contrast, with $t$-predictability the robot is given a high-level goal describing some state that the world needs to be brought into (for example, clearing all objects from a table), and the observer is *already* aware of this goal. Instead of communicating the goal, the robot conveys the remainder of the plan using the first few elements, maximizing $P(a_{t+1:n} \mid s, g, a_{1:t})$.

One important implication is that for $t$-predictability, unlike for legibility, *there is no a-priori set entity to be conveyed.* The algorithm searches for *both* a beginning and a remainder of a plan such that, by observing the former, the observer can correctly guess the latter.

Furthermore, legibility and $t$-predictability entail a different kind of information encoding: in legibility, the robot uses a partial trajectory or action sequence to indicate a single goal state, whereas in $t$-predictability the robot uses a partial action sequence to indicate *another* action sequence. Therefore, one entails a mapping from a large space to a small set of possibilities (the finite candidate goal states), whereas the other entails a mapping between spaces of equivalent size.

The distinction between task-level legibility and $t$-predictability is crucially important, particularly in collaborative settings. If you are cooking dinner with your household robot, it is important for the robot to act legibly so you can infer *what* goal it has when it grabs a knife (e.g., to slice vegetables).
But it is equally important for the robot to act in a $t$-predictable manner so that you can predict *how* it will accomplish that goal (e.g., the order in which it will cut the vegetables).

#### 2.0.5 Relation to Explicability.

Explicability [zhang2016explicability] has recently been introduced to measure whether an observer could assign labels to a robot’s plan. In this context, explicability would measure the existence of some remainder of a plan that achieves the goal, as opposed to optimizing the probability that the observer will infer the robot’s actual plan.

#### 2.0.6 Boltzmann Noisy Rationality.

Computing $t$-predictability entails computing the conditional probability from (1). We model the human as expecting the robot to be noisily optimal, taking approximately the optimal sequence of actions to achieve $g$. Boltzmann probabilistic models of such noisy optimality (also known as the Luce-Shepard choice rule in cognitive science) have been used in the context of *goal* inference through inverse action planning [Baker2009]. We adopt an analogous approach for modeling the inference of *task plans*.

We define optimality via some cost function $c$, mapping each feasible plan, from a starting state and for a particular goal, to a scalar cost. In our experiment, for instance, we use path length (travel distance) for $c$. Applying a Boltzmann policy [Baker2009] based on $c$, we get:

$$P(a_{1:n} \mid s, g) = \frac{e^{-\beta c(a_{1:n})}}{\sum_{\tilde{a}_{1:n} \in \mathcal{A}} e^{-\beta c(\tilde{a}_{1:n})}} \qquad (2)$$
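As a concrete illustration of this model, the Boltzmann distribution over candidate plans can be sketched in a few lines of Python (a minimal sketch assuming Euclidean travel cost; the helper names `path_length` and `boltzmann` are ours, not the paper’s):

```python
import itertools
import math

def path_length(order, points, start=(0.0, 0.0)):
    """Travel distance for visiting `points` in `order`, starting at `start`."""
    dist, prev = 0.0, start
    for i in order:
        dist += math.dist(prev, points[i])
        prev = points[i]
    return dist

def boltzmann(plans, cost, beta=1.0):
    """P(plan | s, g) proportional to exp(-beta * cost(plan)), per Eq. (2)."""
    weights = {p: math.exp(-beta * cost(p)) for p in plans}
    z = sum(weights.values())
    return {p: w / z for p, w in weights.items()}

# Toy scene: three collinear targets; feasible plans are all visit orders.
targets = [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
plans = list(itertools.permutations(range(3)))
probs = boltzmann(plans, lambda p: path_length(p, targets), beta=2.0)

# Probabilities form a distribution, and the shortest tour is the mode.
assert abs(sum(probs.values()) - 1.0) < 1e-9
assert max(probs, key=probs.get) == (0, 1, 2)
```

Raising `beta` concentrates the distribution on the optimal tour; lowering it toward zero flattens the distribution, matching the limiting behaviors described below.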

Here $\beta$ is termed the *rationality coefficient*. As $\beta \to \infty$, the probability distribution converges to one for the optimal sequence and zero elsewhere; that is, the human models the agent as rational. As $\beta \to 0$, the probability distribution becomes uniform over all possible sequences and the human models the agent as indifferent.

#### 2.0.7 $t$-Predictability Optimization.
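Substituting the Boltzmann model (2) into the objective (1) gives the $t$-predictable planning problem, referred to as (3) in what follows (a reconstruction consistent with the surrounding definitions):

$$a^*_{1:n} = \arg\max_{a_{1:n} \in \mathcal{A}} \frac{e^{-\beta c(a_{1:n})}}{\sum_{\tilde{a}_{t+1:n}} e^{-\beta c([a_{1:t},\, \tilde{a}_{t+1:n}])}} \qquad (3)$$

where the sum in the denominator ranges over all feasible remainders $\tilde{a}_{t+1:n}$ that complete the observed prefix $a_{1:t}$ into a plan achieving $g$.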

#### 2.0.8 Approximate Algorithm for Large-Scale Optimization.

The challenge with the optimization in (3) is the denominator: it requires summing over all possible plan remainders. Motivated by the fact that plans with higher costs contribute exponentially less to the sum, we propose to approximate the denominator by summing over only the $k$ lowest-cost plan remainders.

Many tasks have the structure of Traveling Salesman Problems (TSP), where there are a number of subgoals whose order is not constrained but influences total cost. Van der Poort et al. [van1999solving] showed how to efficiently compute the $k$ best solutions to the standard (cyclic tour) TSP using a branch-and-bound algorithm.

The key mechanism is successively dividing the set of feasible plans into smaller subsets for which a lower bound on the cost can be computed by some heuristic. When a subset of solutions has a lower bound higher than the $k$ smallest costs evaluated so far, it is discarded entirely, while the remaining subsets continue to be broken up. The process continues until only $k$ feasible plans remain. This method is guaranteed to produce the $k$ solutions with the least cost and can significantly reduce time complexity over exhaustive enumeration of all feasible plans [van1999solving]. While the lower-bound heuristics are domain-specific, we expect this method to be widely applicable to robot task planning problems. Further, we expect $t$ to be a small number in realistic applications, limited by people’s ability to reason about long sequences of actions.

To empirically evaluate the consequences of this approximation of $t$-predictability, we computed the exact and approximate $t$-predictability for all possible plans in 270 randomly generated unique scenes (Fig. 3).
If we choose the maximally $t$-predictable sequences for each scene using both the exact and approximate calculations of $t$-predictability, we find that these sequences agree in 242 (out of 270) scenes for 1-predictability and in 263 for 2-predictability.^1

^1 For 0-predictability, the denominator is the same for all plans, so all 270 scenes agree trivially.
For the sequences that disagree, the exact $t$-predictability of the sequence chosen using the approximate method is 89.5% of the optimal $t$-predictability in the worst case, and 99% of the optimal $t$-predictability on average. This shows that the proposed approximation is highly effective at producing $t$-predictable plans.
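To make the truncated-denominator idea concrete, here is a minimal brute-force sketch (ours, not the paper’s implementation): exhaustive enumeration stands in for branch-and-bound, and the denominator of the $t$-predictability objective keeps only the $k$ lowest-cost remainders:

```python
import itertools
import math

def plan_cost(order, points, start=(0.0, 0.0)):
    """Travel distance for visiting `points` in `order`, starting at `start`."""
    d, prev = 0.0, start
    for i in order:
        d += math.dist(prev, points[i])
        prev = points[i]
    return d

def t_predictability(plan, t, points, beta=1.0, k=None):
    """P(a_{t+1:n} | s, g, a_{1:t}); exact if k is None, otherwise the
    denominator is truncated to the k lowest-cost remainders."""
    prefix = plan[:t]
    rest = [i for i in range(len(points)) if i not in prefix]
    weights = sorted(
        math.exp(-beta * plan_cost(prefix + r, points))
        for r in itertools.permutations(rest))
    if k is not None:
        weights = weights[-k:]  # k largest weights = k lowest-cost remainders
    return math.exp(-beta * plan_cost(plan, points)) / sum(weights)

# 4-target toy scene: the optimal plan's 1-predictability, exact vs. truncated.
targets = [(0.0, 1.0), (1.0, 1.0), (2.0, 0.0), (0.0, 2.0)]
best = min(itertools.permutations(range(4)),
           key=lambda o: plan_cost(o, targets))
exact = t_predictability(best, 1, targets)
approx = t_predictability(best, 1, targets, k=3)

# Dropping high-cost remainders can only shrink the denominator, and the
# optimal plan's own remainder is always among the k cheapest completions.
assert 0.0 < exact <= approx <= 1.0
```

In practice the sorted enumeration would be replaced by the branch-and-bound search described above, which produces the same $k$ cheapest completions without enumerating all of them.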

## 3 Online Experiment

We set up an experiment to test that our $t$-predictable planner is in fact $t$-predictable. We designed a web-based virtual human-robot collaboration experiment in a TSP setting, where the human had to predict the behavior of three robot avatars using different planners. Participants watched the robots move to a number of targets (either zero, one, or two) and had to predict the sequence of remaining targets the robot would complete.

### 3.1 Independent Variables

We manipulated two variables: the $t$-predictable planner (for $t \in \{0, 1, 2\}$) and the number of observed targets $\tilde{t}$ (for $\tilde{t} \in \{0, 1, 2\}$).

#### 3.1.1 Planner.

We used three planners that differed in their optimization criterion: the number of initial targets, $t$, assumed to be observed by the human. Each participant interacted with three robot avatars, each using one of the following planners:

*Optimal (0-predictable):* This robot chooses the shortest path from the initial location visiting all target locations once; that is, the “traditional” solution to the open TSP. This robot solves (3) for $t = 0$.

*1-predictable:* This robot solves (3) for $t = 1$; the sequence might make an inefficient choice for the first target in order to make the sequence of remaining targets clear.

*2-predictable:* This robot solves (3) for $t = 2$; the sequence might make an inefficient choice for the first *two* targets in order to make the sequence of remaining targets clear.

#### 3.1.2 Number of observed targets.

Each subject was shown the first $\tilde{t} \in \{0, 1, 2\}$ targets of the robot’s chosen sequence in each trial and was asked to predict the remainder of the sequence. This variable was manipulated between participants; thus, a given participant always saw the same number of initial targets on all trials.

### 3.2 Procedure

The experiment was divided into two phases: a training phase to familiarize participants with TSPs and how to solve them, and an experimental phase. We additionally asked participants to fill out a survey at the end of the experiment.

In the training phase, subjects controlled a human avatar. They were instructed to click on targets in the order that they believed would result in the quickest path for the human avatar to visit all of them. The human avatar moved in a straight line after each click and “captured” the selected target, which was then removed from the display.

For the second phase of the experiment, participants saw a robot avatar move to either 0, 1, or 2 targets. After moving to these targets, the robot paused so that participants could predict the remaining sequence of targets by clicking on the targets in the order in which they believed the robot would complete them. Afterwards, participants were presented with an animation showing the robot moving to the rest of the targets in the sequence determined by the corresponding planner.

#### 3.2.1 Stimuli.

Each target layout displayed a square domain with five or six targets. There were a total of 60 trials, consisting of four repetitions of 15 unique target layouts in random order: one repetition for the training phase, plus one for each of the three experimental conditions. The trials were grouped so that each participant observed the same robot for three trials in a row before switching to a different robot. In the training trials, the avatar was a gender-neutral cartoon of a person on a scooter, and the robot avatars were images of the same robot in different poses and colors (either red, blue, or yellow).

#### 3.2.2 Layout Generation.

The layouts for the 15 trials were drawn from an initial database of 270 randomly generated layouts. This pool was reduced to the 176 layouts in which the chosen sequence differed between all three planners, so that the stimuli were distinguishable. We also discarded some scenarios in which the robot’s trajectory approached a target without capturing it, to avoid confounds. Out of these valid layouts, we chose the ones with the highest theoretical gain in 1- and 2-predictability, to avoid scenarios where the information gain was marginal.

#### 3.2.3 Attention Checks.

After reading the instructions, participants were given an attention check in the form of two questions asking them the color of the targets and the color of the robot that they would not be evaluating.

#### 3.2.4 Controlling for Confounds.

We controlled for confounds by counterbalancing the colors of the robots for each planner; by using a human avatar in the practice trials; by randomizing the trial order; and by including attention checks.

### 3.3 Dependent Measures

#### 3.3.1 Objective measures.

We recorded the proportion of correct predictions of the robot’s sequence of targets out of all 15 trials for each planner, resulting in a measure of error rate. We additionally computed the Levenshtein distance between predicted and actual sequences of targets. This is a more precise measure of how similar participants’ predictions were to the actual sequences produced by the planner.
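For illustration, the Levenshtein distance between a predicted and an actual target ordering can be computed with the standard dynamic program (a minimal sketch; the helper name is ours):

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn sequence `a` into sequence `b`."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # delete x
                           cur[j - 1] + 1,           # insert y
                           prev[j - 1] + (x != y)))  # substitute x -> y
        prev = cur
    return prev[-1]

# A participant predicting 2-3-5-4 against a true remainder of 2-3-4-5
# is two substitutions away; an exact prediction has distance 0.
assert levenshtein([2, 3, 5, 4], [2, 3, 4, 5]) == 2
assert levenshtein([1, 2, 3], [1, 2, 3]) == 0
```

Unlike the binary error rate, this measure gives partial credit to predictions that are close to, but not exactly, the planner’s sequence.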

#### 3.3.2 Subjective measures.

After every ninth trial of the experiment, we asked participants to indicate which robot they preferred working with. At the end of the experiment, each participant was also asked to complete a questionnaire to evaluate the perceived performance of the three robots. An informal analysis of this questionnaire suggested results similar to those obtained from our other measures (see Section 3.6). Thus, because of space constraints, we omit the specifics of the survey in this paper.

### 3.4 Hypotheses

H1 - Comparison with Optimal. *When showing 1 target, the 1-predictable robot will result in lower error than the optimal baseline. When showing 2 targets, the 2-predictable robot will result in lower error than the optimal baseline.*

H2 - Generalization. The error rate will be lowest when $t = \tilde{t}$: the number of targets shown, $\tilde{t}$, equals the number of targets assumed by the $t$-predictable planner, $t$.

H3 - Preference. The perceived performance of the robots will be highest when $t = \tilde{t}$.

### 3.5 Participants

We recruited a total of 242 participants from Amazon’s Mechanical Turk using the psiTurk experimental framework [Gureckis15]. We excluded 42 participants from analysis for failing the attention checks, leaving a net total of 200 participants. All participants were treated in accordance with local IRB standards and were paid $1.80 USD for an average of 22 minutes of work, plus an average performance-based bonus of $0.47.

### 3.6 Results

#### 3.6.1 Model validity.

We first looked at the validity of our model of $t$-predictability with respect to people’s performance in the experiment. We computed the theoretical $t$-predictability (probability of correctly predicting the robot’s sequence from the targets the user saw) for each task layout under each planner and number of observed targets. We also computed people’s actual prediction accuracy on each of these layouts under each condition, averaged across participants.

We computed the Pearson correlation between theoretical $t$-predictability and participant accuracy, and found it to be high; the confidence interval around the median was computed using 10,000 bootstrap samples (with replacement).

*This high correlation suggests that our model of how people predict action sequences of other agents is a good predictor of their actual behavior.*

#### 3.6.2 Accuracy.

To determine how similar people’s predictions of the robots’ sequences were to the actual sequences, we used two objective measures of accuracy: the overall error rate (whether they predicted the correct sequence or not) and the Levenshtein distance between the predicted and correct sequences (Fig. 4).

As the two measures show qualitatively similar patterns of results, and the Levenshtein distance is the more fine-grained measure of accuracy, we performed quantitative analysis only on the Levenshtein distance. We constructed a linear mixed-effects model with the number of observed targets ($\tilde{t}$ from 0 to 2) and the planner’s $t$ (from 0 to 2) as fixed effects, and trial layout as random effects.

This model revealed significant main effects of the number of observed targets () and planner () as well as an interaction between the two (). We ran post-hoc comparisons using the multivariate adjustment. Comparing the planners across the same number of targets, we found that in the 0-targets condition the optimal (or 0-predictable) robot was better than the other two robots; in the 1-target condition, the 1-predictable robot was better than the other two; and in the 2-target condition, the 2-predictable robot was better than the optimal and 1-predictable robots.
All differences were significant except the difference between the 2-predictable robot and the 1-predictable robot in the 2-target condition (). Comparing the performance of a planner across numbers of targets, we found significant differences in all contrasts, with one exception: accuracy with the optimal planner was not significantly different when seeing 1 target vs. 2 targets ().
Overall, these results support our hypotheses H1 and H2, that *accuracy is highest when the $t$ used in the planner equals $\tilde{t}$, the number of observed targets.*

#### 3.6.3 Preferences over time.

Fig. 5 shows the proportion of participants choosing each robot planner at each trial. We constructed a logistic mixed-effects model for binary preferences (where 1 meant the robot was chosen) with planner, number of observed targets, and trial as fixed effects and participants as random effects. The planner and number of observed targets were categorical variables, while trial was a numeric variable.

Using Wald’s tests, we found a significant main effect of the planner () and trial (). We detected only a marginal effect of the number of targets (). However, there was a significant interaction between planner and number of targets (). We also found interactions between planner and trial () and between number of targets and trial (), as well as a three-way interaction (). Post-hoc comparisons with the multivariate adjustment for $p$-values indicated that for the 0-targets condition, the optimal robot was preferred over the 1-predictable robot () and the 2-predictable robot (). For the 1-target condition, the 1-predictable robot was preferred over the optimal robot () and the 2-predictable robot (). In the 2-target condition, we did not detect a difference between the 1-predictable and 2-predictable robots (), though both were preferred over the optimal robot ( for the 1-predictable robot and for the 2-predictable robot).

Overall, these results are in line with our hypothesis H3 that *the perceived performance is highest when the $t$ used in the planner equals $\tilde{t}$, the number of observed targets.* This is the case for $\tilde{t} = 0$ and $\tilde{t} = 1$, but not $\tilde{t} = 2$: even though users tended to perform better with the 2-predictable robot, its suboptimal actions in the beginning seemed to confuse and frustrate users (see the qualitative feedback results for details).

#### 3.6.4 Final rankings.

The final rankings of “best robot” and “worst robot” are shown in Fig. 6. For each participant, we assigned each robot a score based on their final rankings. The best robot received a score of 1; the worst robot received a score of 2; and the remaining robot received a score of 1.5. We constructed a logistic mixed-effects model for these scores, with planner and number of observed targets as fixed effects, and participants as random effects; we then used Wald’s tests to check for effects.

We found significant main effects of planner () and number of targets (), as well as an interaction between them (). We again performed post-hoc comparisons using the multivariate adjustment. These comparisons indicated that in the 0-target condition, people preferred the optimal robot over the 1-predictable robot () and the 2-predictable robot (). In the 1-target condition, there was a preference for the 1-predictable robot over the optimal robot, though this difference was not significant (); the 1-predictable robot was preferred to the 2-predictable robot (). In the 2-target condition, both the 1-predictable and 2-predictable robots were preferred over the optimal robot ( for the 1-predictable robot, and for the 2-predictable robot), though we did not detect a difference between the 1-predictable and 2-predictable robots themselves (). Overall, these rankings are in line with the preferences over time.

#### 3.6.5 Qualitative feedback.

At the end of the experiment, we asked participants to briefly comment on each robot. For $\tilde{t} = 0$, responses typically favored the optimal robot, often described as “efficient” and “logical”, although they also showed some reservations:
*“close to what I would do but just a little bit of weird choices tossed in”*.
Conversely, for $\tilde{t} \geq 1$, the optimal robot was likened to “a dysfunctional computer”,
and described as “ineffective” or “very robotic”: *“I feel like maybe I’m a dumb human and the [optimal] robot might be the most efficient, because I have no idea. It frustrated me.”*

The 2-predictable robot had mixed reviews for $\tilde{t} = 2$: for some it was “easy to predict”, while others found it “misleading” or noted its “weird starting points”. For $\tilde{t} < 2$, it was reported as “useless”, “all over the place”, and *“terribly unintuitive with an abysmal sense of planning”*; one participant wrote that it *“almost seemed like it was trying to trip me up on purpose”* and another declared *“I want to beat this robot against a wall.”*

The 1-predictable robot seemed to receive the best evaluations overall: though for $\tilde{t} = 0$ many users found it “random”, “frustrating” and “confusing”, for $\tilde{t} \geq 1$ it almost invariably had positive reviews (“sensible”, “reasonable”, “dependable”, “smart”, “on top of it”), being likened to “a logical and rational human” and even eliciting positive emotions: *“You’re my boy, Blue!”*, *“I like the little guy, he thinks like me”*, or *“It was my favorite. I started thinking ‘Don’t betray me, Yellow!’ as it completed its sequence.”*

#### 3.6.6 Summary.

Our $t$-predictability planner worked as expected, with the $t$-predictable robots leading to the highest user prediction accuracy given the first $t$ observed targets. However, focusing on 2-predictability at the expense of 0-predictability frustrated our users. Overall, we believe $t$-predictability will be important in a task for all values of $t$, and hypothesize that optimizing for a weighted combination will perform best in practice.

We note that $\beta$ is problem-specific and can be expected to decay as the difficulty of the task increases; in each setting, it can be estimated from participant data. Although $\beta$ was chosen ahead of time in our experiment, our results are validated by the correlation between expected and observed human error rates. The optimal choice of $t$ is also a subject for further investigation and is likely context-specific. Depending on the particular task, there should be a different trade-off between predictability of later actions and that of earlier actions.

## 4 User Study

Having tested our ability to produce $t$-predictable sequences, we next ran an in-person study to test their implications. Participants used a smartphone to operate a remote-controlled Sphero BB-8 robot, and had to predict and adapt to the actions of an autonomous Pioneer P3DX robot in a collaboration scenario (Fig. 1).

### 4.1 Independent Variables

We manipulated a single variable, *planner*, as a within-subjects factor. Having confirmed the expected effects of the different planners in the previous experiment, and given the good overall performance of the 1-predictable planner across conditions, we decided to omit the 2-predictable agent and focus on testing the implications of the 1-predictable planner relative to the optimal one in a more immersive collaborative context.

### 4.2 Procedure

At the beginning of the experiment, participants were told that they and their two robot friends were on a secret mission to deactivate an artifact. In each of 4 trials, the autonomous P3DX navigated to the 5 power sources and deactivated them in sequence; however, security sensors activated at each power source after 3 or more had been powered down. The subject’s mission was to use BB-8 to jam the sensors at the third, fourth and fifth power sources before the P3DX arrived at them, by steering BB-8 into the corresponding sensor for a short period of time.

After an initial practice phase in which participants had a chance to familiarize themselves with the objective and rules of the task, as well as with the BB-8 teleoperation interface, there were two blocks of 4 trials whose order was counterbalanced across participants. In each block, the subject collaborated with the P3DX under a different task planner which we referred to as different robot “personalities”.

#### 4.2.1 Stimuli.

Each of the 5 power sources (targets) in each trial was projected onto the floor as a yellow circle, using an overhead projector (Fig. 1). Each circle was initially surrounded by a projected blue ring representing a dormant sensor. When the P3DX reached a target, both the circle and the ring were eliminated, except when the P3DX reached the third target, in which case the remaining blue rings turned red, symbolizing their switch into the active state. Whenever BB-8 entered a ring, the ring turned green for 2 seconds and then disappeared, indicating successful jamming. If the P3DX moved over a red ring, a large red rectangle was projected, symbolizing capture, and the trial ended in failure. Conversely, if the P3DX completed all 5 targets without entering a red ring, a green rectangle indicated successful completion of the trial.

#### 4.2.2 Layout Generation.

The 4 layouts used were taken from the larger pool of 15 layouts in the online experiment. There was a balance between layouts where online participants had been more accurate with the optimal planner, more accurate with the 1-predictable planner, or similarly accurate.

#### 4.2.3 Controlling for Confounds.

We controlled for confounds by counterbalancing the order of the planners; by using a practice layout; and by randomizing the trial order.

### 4.3 Dependent Measures

#### 4.3.1 Objective measures.

We recorded the number of successful trials for each subject and robot planner, as well as the number of trials where participants jammed targets in the correct sequence.

#### 4.3.2 Subjective measures.

After each block of the experiment, participants were asked to complete a questionnaire (adapted from [Dragan2015]) evaluating the perceived performance of the P3DX robot. At the end of the experiment, we asked participants to indicate which robot (planner) they preferred working with.

### 4.4 Hypotheses

H4 - Comparison with Optimal. *The 1-predictable robot will result in more successful trials than the optimal baseline.*

H5 - Preference. Users will prefer working with the 1-predictable robot.

### 4.5 Participants

We recruited 14 participants from the UC Berkeley community, who were treated in accordance with local IRB standards and paid $10 USD. The study took about 30 min.

### 4.6 Results

#### 4.6.1 Successful completions.

We first looked at how often participants were able to complete the task with each robot. We constructed a logistic mixed-effects model for completion success with planner type as a fixed effect and participant and task layout as random effects. We found a significant effect of planner type (), with the 1-predictable robot yielding more successful completions than the optimal robot (). This supports H4.

#### 4.6.2 Prediction accuracy.

We also looked at how accurate participants were at predicting each robot's sequence of targets, based on the order in which participants jammed targets. We constructed a logistic mixed-effects model for prediction accuracy with planner type as a fixed effect and participant and task layout as random effects. We found a significant effect of planner type (), with the 1-predictable robot being more predictable than the optimal robot ().
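Both the completion-success and prediction-accuracy analyses are logistic mixed-effects models. As a sketch, such a model can be fit in Python with `statsmodels`' variational-Bayes binomial GLMM; the data below are simulated stand-ins (the planner coding and effect size are illustrative assumptions, and the original analysis may have used different software):

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Simulated stand-in data: 14 participants x 4 layouts x 2 planners.
rng = np.random.default_rng(0)
rows = []
for participant in range(14):
    for layout in range(4):
        for planner in (0, 1):  # 0 = optimal, 1 = 1-predictable
            p = 0.45 + 0.35 * planner  # assumed success probabilities, for illustration
            rows.append({"participant": participant, "layout": layout,
                         "planner": planner,
                         "success": int(rng.random() < p)})
df = pd.DataFrame(rows)

# Fixed effect: planner type; random intercepts: participant and layout.
model = BinomialBayesMixedGLM.from_formula(
    "success ~ planner",
    {"participant": "0 + C(participant)", "layout": "0 + C(layout)"},
    df)
result = model.fit_vb()
print(result.summary())
```

The fitted fixed-effect coefficient for `planner` then plays the role of the planner-type effect reported above.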

#### 4.6.3 Robot preferences.

We asked participants to pick the robot they preferred to collaborate with. We found that 86% () of participants preferred the predictable robot, while the rest () preferred the optimal robot. This result is significantly different from chance (). This supports H5.
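The comparison against chance is a standard two-sided exact binomial test. A self-contained sketch (assuming 12 of the 14 participants preferred the predictable robot, consistent with the 86% figure):

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Two-sided exact binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = pmf[k]
    return sum(q for q in pmf if q <= observed + 1e-12)

p_value = binom_two_sided_p(12, 14)  # 12 of 14 prefer the predictable robot
print(f"p = {p_value:.4f}")  # → p = 0.0129, below the 0.05 threshold
```

Since both tails are summed, the test rejects the hypothesis that preferences are split 50/50.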

#### 4.6.4 Perceptions of the collaboration.

We analyzed participants’ perceptions of the robots’ behavior (Fig. 7) by averaging each participant’s responses to the individual questions for each robot and measure, resulting in a single score per participant, per measure, per robot. We constructed a linear mixed-effects model for the survey responses with planner and measure type as fixed effects, and with participants as random effects. We found a main effect of planner () and measure (). Post-hoc comparisons using the multivariate method for *p*-value adjustment indicated that participants preferred the predictable robot over the optimal robot () by an average of points on the Likert scale.
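A linear mixed-effects model of this form can be fit with `statsmodels`' `mixedlm`. The sketch below uses simulated per-participant Likert scores; the measure names, column names, and effect sizes are illustrative assumptions, not our actual data or analysis code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: one averaged Likert score (1-7 scale) per
# participant, per planner, per survey measure.
rng = np.random.default_rng(1)
rows = []
for participant in range(14):
    bias = rng.normal(0, 0.5)  # participant-specific random intercept
    for planner in ("optimal", "predictable"):
        for measure in ("capability", "trust", "predictability"):
            score = (4.0 + bias
                     + (1.0 if planner == "predictable" else 0.0)  # assumed effect
                     + rng.normal(0, 0.5))
            rows.append({"participant": participant, "planner": planner,
                         "measure": measure, "score": score})
df = pd.DataFrame(rows)

# Fixed effects: planner and measure type; random intercept per participant.
model = smf.mixedlm("score ~ planner + measure", df, groups="participant")
result = model.fit()
print(result.summary())
```

The `planner` coefficient estimates the average Likert-scale difference between the two robots, analogous to the effect reported above.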

## 5 Discussion

In what follows, we summarize our work and discuss future directions, including the application of *t*-predictability beyond task planning.

#### 5.0.1 Summary.

We enable robots to generate *t*-predictable plans, for which a human observing the first *t* actions can confidently infer the rest. We tested the ability to make plans *t*-predictable in a large-scale online experiment, in which subjects’ predictions of the robot’s action sequence significantly improved. In an in-person study, we found that *t*-predictability can lead to significant objective and perceived improvements in human-robot collaboration compared to traditional optimal planning.

#### 5.0.2 *t*-predictability for Motion.

Even though *t*-predictability is motivated by a task planning need, it does have an equivalent in motion planning: find an initial trajectory $\xi_{0\to t}$, such that the remainder $\xi_{t\to T}$ can be inferred by a human observer with knowledge of both the start state $S$ and the goal state $G$. Modeling this conditional probability with a Boltzmann model yields

$$P(\xi_{t\to T} \mid \xi_{0\to t}, S, G) = \frac{e^{-c(\xi_{t\to T})}}{\int_{\Xi_{\xi(t),G}} e^{-c(\xi)}\,\mathrm{d}\xi},$$

where $\Xi_{\xi(t),G}$ is the set of feasible trajectories from $\xi(t)$ to $G$. Using a second order expansion of the cost about the optimal remaining trajectory $\xi^*_{t\to T}$, we get:

$$P(\xi^*_{t\to T} \mid \xi_{0\to t}, S, G) \approx \sqrt{\frac{\det \nabla^2 c(\xi^*_{t\to T})}{(2\pi)^k}}, \tag{4}$$

where $k$ is the dimension of the trajectory space and $\nabla^2 c$ denotes the Hessian of the cost.
This implies that generating a *t*-predictable trajectory means finding a configuration $\xi(t)$ for time $t$ such that the optimal trajectory from that configuration to the goal lies in a *steep*, high-curvature minimum: other trajectories from $\xi(t)$ to the goal would have significantly higher cost. For instance, if the robot had the option between two passages, it would choose the *narrower* passage, because that enables the observer to more accurately predict the remainder of its trajectory.
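The curvature intuition can be checked numerically in one dimension: for a quadratic cost, the Boltzmann probability density of the optimal remainder grows with the curvature of the cost, so a steeper minimum makes the remainder easier to predict. Below is a minimal numerical sketch (illustrative only, not from our experimental code):

```python
from math import exp, pi, sqrt

def boltzmann_density_at_optimum(curvature, half_width=30.0, n=100001):
    """Numerically evaluate p(x*) = e^{-c(x*)} / Z for the 1-D cost
    c(x) = curvature/2 * x^2, with Z computed by trapezoidal integration."""
    h = 2 * half_width / (n - 1)
    z = 0.0
    for i in range(n):
        x = -half_width + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        z += w * exp(-0.5 * curvature * x * x) * h
    return 1.0 / z  # e^{-c(0)} = 1 at the minimum x* = 0

# The second-order (Laplace) approximation in 1-D: p(x*) ~ sqrt(curvature / (2*pi)).
# Higher curvature (a steeper minimum) yields a higher density at the optimum.
for k in (0.5, 2.0, 8.0):
    print(k, boltzmann_density_at_optimum(k), sqrt(k / (2 * pi)))
```

The numerical values match the second-order approximation closely, and the density at the optimum is monotone in the curvature, mirroring the narrow-passage example.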

#### 5.0.3 Limitations and Future Work.

Our work is limited by the focus in our experiments on TSP scenarios (though we emphasize that *t*-predictability as formulated in Section 2 is not inherently limited to TSPs). It is also limited by the choice of a user study in which a tele-operated avatar mediated the human’s physical collaboration with the robot. Applications to scenarios that involve direct physical collaboration and tasks with preconditions would be interesting topics to investigate. Additionally, while our work showcases the utility of *t*-predictability, a main remaining challenge is determining what *t*, or combination of *t*s, to use for arbitrary tasks. This decision requires more sophisticated models of human behavior, which are the topic of our ongoing work.