Planning with Trust for Human-Robot Collaboration

01/12/2018 ∙ by Min Chen, et al. ∙ National University of Singapore ∙ Carnegie Mellon University ∙ University of Washington

Trust is essential for human-robot collaboration and user adoption of autonomous systems, such as robot assistants. This paper introduces a computational model which integrates trust into robot decision-making. Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human behaviors, and (iii) choose actions that maximize team performance over the long term. We validated the model through human subject experiments on a table-clearing task in simulation (201 participants) and with a real robot (20 participants). The results show that the trust-POMDP improves human-robot team performance in this task. They further suggest that maximizing trust in itself may not improve team performance.


1. Introduction

Trust is essential for seamless human-robot collaboration and user adoption of autonomous systems, such as robot assistants. Over-trusting robot autonomy may lead to misuse of such systems, where people rely excessively on automation, failing to intervene in the case of critical failures (Lee and See, 2004). On the other hand, lack of trust leads to disuse of autonomous systems: users ignore the systems’ capabilities, with negative effects on overall performance.

We witnessed an example of users' distrust in the system in one of our studies, where a human participant and a robot collaborated to clear a table (Figure 1). Although the robot was fully capable of handling all objects on the table, inexperienced participants did not trust the robot to succeed and stopped it from moving the wine glass, afraid that the glass might fall and break. Their trust was clearly poorly calibrated with respect to the robot's true capabilities. This, in turn, had a significant effect on the interaction.

Figure 1. A robot and a human collaborate to clear a table. The human, with low initial trust in the robot, intervenes to stop the robot from moving the wine glass.

This study revealed that, in order to achieve fluent human-robot collaboration, the robot should monitor human trust and influence it so that it matches the system’s capabilities. In our study, for instance, the robot should build human trust first by acting in a trustworthy manner, before going for the wine glass.

We propose a trust-based computational model of robot decision making: Since trust is not fully observable, we model it as a latent variable in a partially observable Markov decision process (POMDP) (Kaelbling et al., 1998). Our trust-POMDP model contains two key components: (i) a trust dynamics model, which captures the evolution of human trust in the robot, and (ii) a human decision model, which connects trust with human actions. Our POMDP formulation can accommodate a variety of trust dynamics and human decision models. Here, we adopt a data-driven approach and learn these models from data.

Figure 2. Sample runs of the trust-POMDP strategy and the myopic strategy on a collaborative table-clearing task. The top row shows the probabilistic estimates of human trust over time on a 7-point Likert scale. The trust-POMDP strategy starts by moving the plastic bottles to build trust and moves the wine glass only when the estimated trust is high enough. The myopic strategy does not account for trust and starts with the wine glass, causing the human with low initial trust to intervene.

Although prior work has studied human trust elicitation and modeling (Lee and Moray, 1992; Floyd et al., 2015; Xu and Dudek, 2015; Wang et al., 2016), we close the loop between trust modeling and robot decision-making. The trust-POMDP enables the robot to systematically infer and influence the human collaborator’s trust, and leverage trust for improved human-robot collaboration and long-term task performance.

Consider again the table clearing example (Figure 2). The trust-POMDP strategy first removes the three plastic water bottles to build up trust and only attempts to remove the wine glass afterwards. In contrast, a baseline myopic strategy maximizes short-term task performance and does not account for human trust in choosing the robot actions. It first removes the wine glass, which offers the highest reward, resulting in unnecessary interventions by human collaborators with low initial trust.

We validated the trust-POMDP model through human subject experiments on the collaborative table-clearing task, both online in simulation (201 participants) and with a real robot (20 participants). Compared with the myopic strategy, the trust-POMDP strategy significantly reduced participants’ intervention rate, indicating improved team collaboration and task performance.

In these experiments the robot always succeeded. Robots, however, fail frequently. What if the robot is likely to fail when picking up the wine glass? The robot should then assess human trust in the beginning of the task; if trust is too high, the robot should effectively communicate this to the human, in order to calibrate human trust to the appropriate level. While human teammates are able to use natural language to communicate expectations (Mathieu et al., 2000), our assistive robotic arm does not have verbal communication capabilities. The trust-POMDP strategy in this case enables the robot to modulate human trust by intentionally failing when picking up the bottles, before attempting to grasp the wine glass. This prompts the human to intervene when the robot attempts to pick up the wine glass, preventing failure.

This paper builds upon our previous work (Chen et al., 2018) by introducing robot failures into the computational framework. In particular, (i) we augment the dynamics model with robot failures, add a new session of data collection to learn the model and discuss the effect of failures on different levels of trust; (ii) we simulate and visualize robot policies with the learned model; (iii) we provide an analysis of the results in the case of an adaptive policy that enables the robot to assess participants’ initial trust and intentionally fail.

Integrating trust modeling and robot decision making enables robot behaviors that leverage human trust and actively modulate it for seamless human-robot collaboration. Under the trust-POMDP model, the robot deliberately chooses to fail in order to reduce the trust of an overly trusting user and achieve better task performance over the long term. Further, embedding trust in a reward-based POMDP framework makes our robot task-driven: when the human collaboration is unnecessary, the robot may set aside trust building and act to maximize the team task performance directly. All these diverse behaviors emerge automatically from the trust-POMDP model, without explicit manual robot programming.

2. Related work

Trust has been studied extensively in the social science research literature (Golembiewski and McConkie, 1975; Kramer and Tyler, 1995), with Mayer et al. suggesting three general bases of trust: ability, integrity, and benevolence (Mayer et al., 1995). Trust in automation differs from trust between people in that automation lacks intentionality (Lee and See, 2004). Additionally, in a human-robot collaboration task, the human and the robot share the same objective metric of task performance. Therefore, similar to previous work (Desai, 2012; Xu and Dudek, 2016; Pierson and Schwager, 2016; Pippin and Christensen, 2014; Wang et al., 2016), we assume that human teammates will not expect the robot to deceive them on purpose, and that their trust will depend mainly on the perceived robot ability to complete the task successfully.

Binary measures of trust (Hall, 1996), as well as continuous measures (Lee and Moray, 1992; Desai, 2012; Xu and Dudek, 2016), and ordinal scales (Muir, 1990; Hoffman, 2013) have been proposed. For real-time measurement,  Desai (2012) proposed the Area Under Trust Curve (AUTC) measure, which was recently used to account for one’s entire interactive experience with the robot (Yang et al., 2017).

Researchers have also studied the temporal dynamics of trust conditioned on task performance: Lee and Moray (1992) proposed an autoregressive moving average vector form of time series analysis; Floyd et al. (2015) used case-based reasoning; Xu and Dudek (2015) proposed an online probabilistic trust inference model to estimate a robot's trustworthiness; Wang et al. (2016) showed that adding transparency to the robot model by generating explanations improved trust and performance in human teams; Desai et al. (2012, 2013) showed that robot failures had a negative impact on human trust, and that early robot failures led to dramatically lower trust than later ones. While previous work has focused on either quantifying trust or studying its dynamics in human-robot interaction, our work enables the robot to leverage a model of human trust and choose actions that maximize task performance.

In human-robot collaborative tasks, the robot often needs to reason over the human's hidden mental state in its decision-making. The POMDP provides a principled general framework for such reasoning. It has enabled robotic teammates to coordinate through communication (Barrett et al., 2014) and software agents to infer the intention of human players in game AI applications (Macindoe et al., 2012). The model has been successfully applied to real-world tasks, such as autonomous driving where the robot car interacts with pedestrians and human drivers (Bai et al., 2015; Bandyopadhyay et al., 2013; Galceran et al., 2015). When the state and action space of the POMDP model become continuous, one can use hindsight optimization (Javdani et al., 2015) or value-of-information heuristics (Sadigh et al., 2016), which generate approximate solutions but are computationally more efficient.

Nikolaidis et al. (2015) proposed to infer the human type or preference online using models learned from joint-action demonstrations. This formalism was recently extended from one-way adaptation (from robot to human) to human-robot mutual adaptation (Nikolaidis et al., 2016; Nikolaidis et al., 2017), where the human may choose to change their preference and follow a policy demonstrated by the robot in the recent history. In this work, we provide a general way to link the whole interaction history with the human policy, by incorporating human trust dynamics into the planning framework.

3. Trust-POMDP

3.1. Human-robot team model

We formalize the human-robot team as a Markov decision process (MDP), with world state $x \in X$, robot action $a_r \in A_r$, and human action $a_h \in A_h$. The system evolves according to a probabilistic state transition function $p(x' \mid x, a_r, a_h)$, which specifies the probability of transitioning from state $x$ to state $x'$ when actions $a_r$ and $a_h$ are applied in state $x$. After transitioning, the team receives a real-valued reward $r(x, a_r, a_h)$, which is constructed to elicit the desirable team behaviors.

We denote by $h_t = \{x_0, a_{r,0}, a_{h,0}, \ldots, x_{t-1}, a_{r,t-1}, a_{h,t-1}, x_t\}$ the history of interaction between the robot and the human up to time step $t$. In this paper, we assume that the human observes the robot's current action and then decides their own action. In the most general setting, the human uses the entire interaction history to decide on an action. Thus, we can write the human's (possibly stochastic) policy as $\pi_h(a_{h,t} \mid h_t, a_{r,t})$, which outputs the probability of each human action $a_{h,t}$.

Given a robot policy $\pi_r$, the value, i.e., the expected total discounted reward of starting at a state $x_0$ and following the robot and human policies, is

(1)   $V^{\pi_r}(x_0) = \mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^t\, r(x_t, a_{r,t}, a_{h,t}) \right]$

where $\gamma \in (0,1)$ is a discount factor, and the robot's optimal policy can be computed as

(2)   $\pi_r^* = \operatorname*{argmax}_{\pi_r} V^{\pi_r}(x_0)$

In our case, however, the robot does not know the human policy in advance. It computes the optimal policy under an expectation over the human policy $\pi_h$:

(3)   $\pi_r^* = \operatorname*{argmax}_{\pi_r} \mathbb{E}_{\pi_h}\left[ V^{\pi_r}(x_0) \right]$

Key to solving Eq. 3 is for the robot to model the human policy, which potentially depends on the entire interaction history. The history may grow arbitrarily long, making the optimization extremely difficult.
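To make Eq. 1 concrete, the expected discounted reward of a fixed robot policy against a stochastic human policy can be approximated by Monte-Carlo rollouts. The sketch below is purely illustrative: the toy world, policies, and reward are assumptions for demonstration, not the paper's table-clearing task.

```python
import random

# Monte-Carlo sketch of the value in Eq. 1: the expected discounted reward
# of a fixed robot policy interacting with a stochastic human policy.
# The tiny world, policies, and reward are illustrative assumptions.

GAMMA = 0.95     # discount factor
HORIZON = 20     # truncation of the infinite sum

def rollout(robot_policy, human_policy, step, s0):
    """Simulate one episode and return its discounted reward."""
    s, total, discount = s0, 0.0, 1.0
    for _ in range(HORIZON):
        a_r = robot_policy(s)
        a_h = human_policy(s, a_r)   # the human observes the robot action first
        s, r = step(s, a_r, a_h)     # world transition and team reward
        total += discount * r
        discount *= GAMMA
    return total

def value_estimate(robot_policy, human_policy, step, s0, n=2000):
    """Average over n rollouts to approximate the expectation in Eq. 1."""
    return sum(rollout(robot_policy, human_policy, step, s0)
               for _ in range(n)) / n

# Toy world: the team earns reward 1 whenever the human stays put.
def step(s, a_r, a_h):
    return s, (1.0 if a_h == 'stay' else 0.0)

random.seed(0)
v = value_estimate(
    robot_policy=lambda s: 'pick',
    human_policy=lambda s, a_r: 'stay' if random.random() < 0.8 else 'intervene',
    step=step, s0=0)
# v is close to 0.8 * (1 - GAMMA**HORIZON) / (1 - GAMMA), roughly 10.3
```

Replacing the hand-written `human_policy` with a learned, history-dependent one is exactly what makes the optimization in Eq. 3 hard, motivating the compact trust representation below.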

3.2. Trust-dependent human behaviors

Our insight is that in a number of human-robot collaboration scenarios, trust provides a compact approximation of the interaction history. This allows us to condition human behavior on the inferred trust level and, in turn, find the optimal robot policy that maximizes team performance.

Following previous work on trust modeling (Xu and Dudek, 2015), we assume that trust can be represented as a single scalar random variable $\theta_t$. Thus, the human policy is rewritten as

(4)   $\pi_h(a_{h,t} \mid h_t, a_{r,t}) \approx \pi_h(a_{h,t} \mid \theta_t, x_t, a_{r,t})$

3.3. Trust dynamics

Human trust changes over time. We adopt a common assumption on the trust dynamics: trust evolves based on the robot's performance (Lee and Moray, 1992; Xu and Dudek, 2015). Performance $e_t$ can depend not just on the current and transitioned world states but also on the human's and robot's actions:

(5)   $e_{t+1} = \mathrm{performance}(x_{t+1}, x_t, a_{r,t}, a_{h,t})$

For example, performance may indicate success or failure of the robot to accomplish a task. This allows us to write our trust dynamics equation as

(6)   $\theta_{t+1} \sim p(\theta_{t+1} \mid \theta_t, e_{t+1})$

We detail in Section 4 how trust dynamics is learned via interaction.

3.4. Maximizing team performance

Trust cannot be directly observed by the robot and therefore must be inferred from the human’s actions. In addition, armed with a model, the robot may actively modulate the human’s trust for the team’s long-term reward.

We achieve this behavior by modeling the interaction as a partially observable Markov decision process (POMDP), which provides a principled general framework for sequential decision making under uncertainty. A graphical model of the Trust-POMDP and a flowchart of the interaction are shown in Figure 3.

To build the trust-POMDP, we create an augmented state space, with the augmented state composed of the fully observed world state $x$ and the partially observed human trust $\theta$. We maintain a belief over the human's trust. The trust dynamics and human behavioral policy are embedded in the transition dynamics of the trust-POMDP. We describe in Section 4 how we learn the trust dynamics and the human behavioral policy.

The robot now has two distinct objectives through its actions:

  • Exploitation. Maximize the team’s reward

  • Exploration. Reveal and change the human’s trust so that future actions are rewarded better.


Figure 3. The trust-POMDP graphical model (left) and the team interaction flowchart (right). The robot's action depends on the world state and its belief over trust.

The solution to a trust-POMDP is a policy that maps belief states to robot actions. To compute the optimal policy, we use the SARSOP algorithm (Kurniawati et al., 2008), which is computationally efficient and has previously been used in various robotic tasks (Bandyopadhyay et al., 2013).
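As an illustration of the belief maintenance this requires, the belief over discretized trust levels can be tracked with a standard Bayes filter. The observation model and transition matrix below are toy assumptions for the sketch, not the models learned in Section 4.

```python
import numpy as np

# Illustrative Bayes-filter update of the belief over discretized trust,
# as maintained inside a trust-POMDP. p_stay_put and the transition
# matrix are toy assumptions, not the learned models of Section 4.

TRUST_LEVELS = np.arange(1, 8)   # 7-point Likert scale

def p_stay_put(trust):
    # Toy observation model: higher trust -> more likely to stay put.
    return 1.0 / (1.0 + np.exp(-(trust - 4.0)))

def belief_update(belief, human_action, trust_dynamics):
    """One filtering step over the 7 trust levels.

    belief:         current distribution P(theta_t)
    human_action:   'stay' or 'intervene' (the robot's observation)
    trust_dynamics: 7x7 matrix with rows P(theta_{t+1} | theta_t, outcome)
    """
    # Condition on the observed human action.
    like = np.array([p_stay_put(t) if human_action == 'stay'
                     else 1.0 - p_stay_put(t) for t in TRUST_LEVELS])
    posterior = belief * like
    posterior /= posterior.sum()
    # Predict through the trust dynamics for the executed outcome.
    return trust_dynamics.T @ posterior

# Toy dynamics for a robot success: trust tends to move up one level.
success = 0.6 * np.eye(7) + 0.4 * np.eye(7, k=1)
success[-1, -1] = 1.0   # trust saturates at the top level

b0 = np.full(7, 1.0 / 7)                   # uniform initial belief
b1 = belief_update(b0, 'stay', success)    # belief shifts toward high trust
```

A POMDP solver such as SARSOP operates on exactly this kind of belief state; the filter above is the inference half of that loop.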

4. Learning Trust Dynamics and Human Behavioral Policies

Nested within the trust-POMDP are a model of human trust dynamics and a human behavioral policy. We adopted a data-driven approach and built the two models for the table-clearing task from data collected in an online AMT experiment. Suitable probabilistic models derived via alternative approaches can be substituted for these learned models (e.g., for other tasks and domains).

4.1. Data Collection

Table clearing task. A human and a robot collaborate to clear objects off a table. The objects include three water bottles, one fish can, and one wine glass. At each time step, the robot picks up one of the remaining objects. Once the robot starts moving towards the intended object, the human can choose between two actions: {intervene and pick up the object that the robot is moving towards, stay put and let the robot pick the object by itself}. This process is repeated until all the objects are cleared from the table.

Each object is associated with a different reward, based on whether the robot successfully clears it from the table (which we call SP-success), the robot fails to clear it (SP-fail), or the human intervenes and puts it on the tray (IT). Table 1 shows the rewards for each object and outcome. We assume that a robot success is always better than a human intervention, since it reduces human effort. Additionally, there is no penalty if the robot fails by dropping one of the sealed water bottles, since the human can simply pick it up. On the other hand, dropping the fish can results in some penalty, since its contents will spill on the floor, and breaking the glass results in the highest penalty. Staying put when the robot attempts to pick up a bottle therefore carries the lowest risk, while staying put for the glass carries the largest risk-return trade-off. We thus expect the human to let the robot pick up the bottles even if their trust is low, since there is no penalty if the robot fails, and, if they do not trust the robot, to likely intervene on the glass or the can rather than risk a high penalty in case of robot failure.

In this work, we chose the table-clearing task to test our trust-POMDP model because it is simple and allows us to analyze experimentally the core technical issues around human trust without interference from confounding factors. Note that the primary objective and contribution of this work are to develop a mathematical model of trust embedded in a decision framework, and to show that this model improves human-robot collaboration. We believe that the overall technical approach is general and not restricted to this particular simplified task; what we learned here on the trust-POMDP for a simplified task will be a stepping stone towards more complex, large-scale applications.

Participants. For the data collection, we recruited a total of participants through Amazon's Mechanical Turk (AMT). (We conducted two sessions of data collection, one where the robot always succeeded and one where the robot failed with high probability; our previous work (Chen et al., 2018) presents the results of the first session only.) The participants were all from the United States, aged 18-65, and with approval rate higher than . Each participant was compensated for completing the study. To ensure the quality of the recorded data, we asked all participants an attention-check question that tested their attention to the task. We removed data points either because the participant failed the attention-check question or because their data were incomplete. This left us valid data points for model learning.

Bottle Fish Can Wine Glass
SP-success
SP-fail
IT
Table 1. The reward function for the table-clearing task.

Procedure. Each participant is asked to perform an online table-clearing task together with a robot. Before the task starts, the participant is informed of the reward function in Table 1. We first collect the participant's initial trust in the robot, using Muir's questionnaire (Muir, 1990) with a seven-point Likert scale as the trust metric, i.e., trust ranges from 1 to 7. The questionnaire is listed in Table 2. At each time step, the participant watches a video of the robot attempting to pick up an object and is asked to choose between intervening and staying put. They then watch a video of either the robot picking up the object or of their own intervention, depending on their action selection. Finally, they report their updated trust in the robot.

1. To what extent can the robot’s behavior be predicted from
moment to moment?
2. To what extent can you count on the robot to do its job?
3. What degree of faith do you have that the robot will be able
to cope with similar situations in the future?
4. Overall how much do you trust the robot?
Table 2. Muir’s questionnaire.

We are interested in learning the trust dynamics and the human behavioral policies for any state and robot action. However, the number of open-loop robot policies grows combinatorially with the number of objects on the table. (When collecting data from AMT, the robot follows an open-loop policy, i.e., it does not adapt to the human behavior.) In order to focus the learning on a few interesting robot policies (e.g., picking up the glass at the beginning vs. at the end), while still covering a large space of policies, we split the data collection process so that in one half of the trials the robot randomly chose a policy out of a set of pre-specified policies, while in the other half it followed a random policy.

Data Format. The data we collected from each participant consist of a sequence of time steps, one per object on the table. Each step records the estimated human trust at that time, obtained by averaging the participant's responses to Muir's questionnaire into a single rating between 1 and 7; the action taken by the robot; the action taken by the human; and the performance of the robot, which indicates whether the robot succeeded at picking up the object, the robot failed, or the human intervened.

4.2. Trust dynamics model

We model human trust evolution as a linear Gaussian system. Our trust dynamics model relates the human trust causally to the robot task performance $e_t$:

(7)   $\mu_{t+1} = \alpha^{e_t}\, \theta_t + \beta^{e_t}$
(8)   $\theta_{t+1} \sim \mathcal{N}(\mu_{t+1}, \sigma^2)$

where $\mathcal{N}(\mu, \sigma^2)$ denotes a Gaussian distribution with mean $\mu$ and standard deviation $\sigma$. $\alpha^{e_t}$ and $\beta^{e_t}$ are linear coefficients for the trust dynamics, given the robot task performance $e_t$. In the table-clearing task, $e_t$ indicates whether the robot succeeded at picking up an object, the robot failed, or the human intervened; e.g., $e_t$ can represent that the robot succeeded at picking up a water bottle, or that the human intervened at the wine glass. $\theta_t$ and $\theta_{t+1}$ are the observed human trust ratings (Muir's questionnaire) at time steps $t$ and $t+1$.

The unknown parameters in the trust dynamics model include $\alpha^{e_t}$, $\beta^{e_t}$, and $\sigma$. We performed full Bayesian inference on the model through Hamiltonian Monte Carlo sampling using the Stan probabilistic programming platform (Carpenter et al., 2016). Figure 4 shows the trust transition matrices for all possible robot performance outcomes in the table-clearing task. As we can see, human trust in the robot gradually increased with observations of successful robot actions (as indicated by transitions to higher trust levels when the participants stayed put and the robot succeeded), while it decreased with observations of robot failures. Trust tended to remain constant or decrease slightly when interventions occurred. It also appears that the higher the trust, the greater the loss upon failure, and vice versa upon success. These results matched our expectations that successful robot performance positively influences trust, while robot failures negatively affect it.
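For intuition, the learned linear-Gaussian dynamics of Section 4.2 can be simulated directly. The coefficients below are invented placeholders chosen only to mimic the qualitative pattern in Figure 4 (trust rises on success, falls on failure); the actual values come from the Bayesian fit to the AMT data.

```python
import numpy as np

# Simulation of linear-Gaussian trust dynamics of the Section 4.2 form:
# next trust ~ N(alpha * theta + beta, sigma^2), with one (alpha, beta)
# pair per robot performance outcome. Coefficients are illustrative
# placeholders, not the learned values.

rng = np.random.default_rng(0)

COEFFS = {
    'success':   (0.9, 1.0),    # trust drifts up after a success
    'fail':      (0.9, -1.5),   # trust drops after a failure
    'intervene': (1.0, -0.1),   # roughly constant after an intervention
}
SIGMA = 0.3

def trust_step(theta, outcome):
    """Sample the next trust level, clipped to the 7-point scale."""
    alpha, beta = COEFFS[outcome]
    return float(np.clip(rng.normal(alpha * theta + beta, SIGMA), 1.0, 7.0))

theta = 3.0                       # low-ish initial trust
trajectory = [theta]
for outcome in ['success', 'success', 'success', 'fail']:
    theta = trust_step(theta, outcome)
    trajectory.append(theta)
# trajectory rises over the three successes, then drops after the failure
```

Embedding such a sampler in the POMDP transition function is what lets the planner predict how its choice of object will move the human's trust.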

Figure 4.

Trust transition matrices, which represent the change of trust given the robot performance, shown by the linearly regressed line (yellow) contrasted with the X-Y line (blue). In general, trust stays constant or decreases slightly when the human intervenes (top row). It increases when the human stays put and the robot succeeds (middle row), while it decreases when the robot fails (bottom row).

4.3. Human behavioral policies

Our key intuition in the human model is that the human's behavior depends on their trust in the robot. To support this intuition, we consider two types of human behavioral models. The first is a trust-free human behavioral model that ignores human trust, while the second is a trust-based human behavioral model that explicitly models human trust. In both models, we assume that humans follow the softmax rule when making decisions in an uncertain environment (Daw et al., 2006): the decision of which action to take is determined probabilistically by the actions' relative expected values. More explicitly,

  • Trust-free human behavioral model: At each time step, the human selects an action probabilistically based on the actions' relative expected values. The expected value of an action depends on the human's belief in the robot's ability to succeed and on the risk of letting the robot do the task. In the trust-free human model, the human's belief in the robot's success on a particular task does not change over time.

  • Trust-based human behavioral model: Similar to the model above, the human follows the softmax rule at each time step. However, the trust-based human model assumes that the human's belief in the robot's success changes over time and depends on the human's trust in the robot.

Before we introduce the models, we establish some notation. Let $o_t$ denote the object that the robot tries to pick up at time step $t$. Let $r^{s}_{o_t}$ be the reward if the human stays put and the robot succeeds, $r^{f}_{o_t}$ the reward if the human stays put and the robot fails, and $r^{i}_{o_t}$ the reward if the human intervenes. Let $\theta_t$ be the human trust in the robot at time step $t$. $\mathrm{sigmoid}(\cdot)$ is the sigmoid function, which is equivalent to the softmax function in the case of binary human actions, and $\mathrm{Bernoulli}(p)$ is the Bernoulli distribution that takes the action stay put with probability $p$.

The trust-free human behavioral model is as follows:

(9)    $p_t = \mathrm{sigmoid}\big( b_{o_t}\, r^{s}_{o_t} + (1 - b_{o_t})\, r^{f}_{o_t} - r^{i}_{o_t} \big)$
(10)   $a_{h,t} \sim \mathrm{Bernoulli}(p_t)$

where $b_{o_t}$ is the human's belief in the robot successfully picking up object $o_t$, which remains constant over time; $p_t$ is the probability that the human stays put at time step $t$; and $a_{h,t}$ is the action taken by the human at time step $t$.

Next, we introduce the trust-based human behavioral model:

(11)   $b_{o_t,t} = \mathrm{sigmoid}\big( \omega_{o_t}\, \theta_t + \eta_{o_t} \big)$
(12)   $p_t = \mathrm{sigmoid}\big( b_{o_t,t}\, r^{s}_{o_t} + (1 - b_{o_t,t})\, r^{f}_{o_t} - r^{i}_{o_t} \big)$
(13)   $a_{h,t} \sim \mathrm{Bernoulli}(p_t)$

where $b_{o_t,t}$ is the human's belief in robot success on object $o_t$ at time step $t$, which depends on the human's trust in the robot; $\omega_{o_t}$ and $\eta_{o_t}$ are the linear coefficients for object $o_t$; $p_t$ is the probability that the human stays put at time step $t$; $\theta_t$ is the latent human trust at time step $t$, and the observed trust from Muir's questionnaire is assumed to follow a Gaussian distribution centered at $\theta_t$ with standard deviation $\sigma_o$; $r^{s}_{o_t}$, $r^{f}_{o_t}$, and $r^{i}_{o_t}$ are the rewards for robot success, robot failure, and human intervention on object $o_t$; and $a_{h,t}$ is the action taken by the human at time step $t$.

The unknown parameters here include $b_{o}$ in the trust-free human model, and $\omega_{o}$, $\eta_{o}$, and $\sigma_o$ in the trust-based human model. We performed Bayesian inference on the two models using Hamiltonian Monte Carlo sampling (Carpenter et al., 2016). The trust-based human model fit the collected AMT data better than the trust-free human model in terms of log-likelihood. The log-likelihood values are relatively low for both models due to the large variance among different users. Nevertheless, this result supports our notion that the prediction of human behavior improves when we explicitly model human trust.

Figure 5. The predicted mean human intervention rate with respect to trust. Under the trust-free human behavioral model, which does not account for trust, the human intervention rate stays constant. Under the trust-based human behavioral model, the intervention rate decreases with increasing trust. The rate of decrease depends on the object; it is more sensitive for the riskier objects.

Figure 5 shows the mean probability of human intervention with respect to the human's trust in the robot. For both models, the human tends to intervene more on objects with higher risk, i.e., the human intervention rate on the glass is higher than on the can, which is in turn higher than on the bottle. The trust-free human behavioral model ignores human trust, so its predicted intervention rate does not change. The trust-based human behavioral model, on the other hand, exhibits a general falling trend, indicating that participants are less likely to intervene when their trust in the robot is high. This is particularly evident for the highest-risk object (the glass), where the intervention rate drops significantly when the human trust score is at its maximum.

To summarize, the results of Sec. 4.2 and Section 4.3 indicate that

  • Human trust is affected by robot performance: human trust can be built up by successfully picking up objects (Figure 4). In addition, it is a good strategy for the robot to start with the low-risk objects (the bottles), since the human is less likely to intervene on them even if their trust in the robot is low (Figure 5).

  • Human trust affects human behavior: the intervention rate on high-risk objects can be reduced by building up human trust (Figure 5).

5. Experiments

We conducted two human subject experiments: one on AMT, with participants interacting with recorded videos, and one in our lab, with participants interacting with a real robot. The purpose of our study was to test whether the trust-POMDP robot policy would result in better team performance than a policy that did not account for human trust. To simplify the analysis of the different behaviors in these experiments, we had the robot always succeed when attempting to pick up the objects.

We had two experimental conditions, which we refer to as “trust-POMDP” and “myopic”.

  • In the trust-POMDP condition, the robot uses human trust as a means to optimize the long term team performance. It follows the policy computed from the trust-POMDP described in  Section 3.4, where the robot’s perceived human policy is modeled via the trust-based human behavioral model described in Section 4.3.

  • In the myopic condition, the robot ignores human trust. It follows a myopic policy by optimizing Eq. 3, where the robot’s perceived human policy is modeled via the trust-free human behavioral model described in Section 4.3.

5.1. Online AMT experiment

Hypothesis 1.  In the online experiment, the performance of teams in the trust-POMDP condition will be better than that of the teams in the myopic condition.

We evaluated team performance by the reward accumulated over the task. We expected the trust-POMDP robot to reason over the probability of human interventions and act so as to minimize the intervention rate for the highest-reward objects, by actively building up human trust before going for the high-risk objects. In contrast, the myopic robot policy was agnostic to how the human policy might change as a result of the robot's and human's actions.

Procedure. The procedure is similar to the one used for data collection (Sec. 4.1), with the difference that, rather than executing random sequences, the robot executes the policy associated with each condition. While we kept Muir's questionnaire in the experiment as a ground-truth measure of trust, the robot did not use the reported scores; it estimated trust solely from the trust dynamics model described in Sec. 4.2.

Model parameters. In the formulation of Section 3.4, the observable state variable represents the state of each object (on the table or removed). We assume a discrete set of trust values on the 7-point scale. The transition function incorporates the learned trust dynamics and human behavioral policies, as described in Sec. 4. The reward function is given by Table 1. We used a discount factor smaller than one, which favors immediate rewards over future rewards.

Subject Allocation. We chose a between-subjects design in order not to bias the users with policies from previous conditions. We recruited 208 participants through Amazon Mechanical Turk, aged and with approval rate higher than . Each participant was compensated for completing the study. We removed data points that were wrong (the participant failed the attention-check question) or incomplete. In the end, we had data points for the trust-POMDP condition and data points for the myopic condition.

5.2. Real-robot experiment

In the real-robot experiment we followed the same robot policies, model parameters, and procedures as in the online AMT experiment, except that the participants interacted with an actual robot in person.

Hypothesis 2.  In the real-robot experiment, the performance of teams in the trust-POMDP condition will be better than that of the teams in the myopic condition.

Subject Allocation. We recruited 20 participants from our university, aged 21-65. Each participant was compensated for completing the study. All data points were kept for analysis and split between the trust-POMDP and myopic conditions.

5.3. Team performance

We performed a one-way ANOVA test of the accumulated rewards (team performance). In the online AMT experiment, the accumulated reward in the trust-POMDP condition was significantly larger than in the myopic condition. This result supports Hypothesis 1.

Similarly, in the real-robot experiment, the accumulated reward in the trust-POMDP condition was significantly larger than in the myopic condition. This result supports Hypothesis 2.
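The test above can be reproduced with a standard one-way ANOVA. The minimal implementation below computes the F statistic from per-condition accumulated-reward samples; the sample values in the test are synthetic, not the experiment's data.

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA over two or more groups."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    group_means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

In practice one would use `scipy.stats.f_oneway`, which returns the same statistic together with its p-value.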

The difference in performance occurred because the participants’ intervention rate in the trust-POMDP condition was significantly lower than in the myopic condition (Figure 6, left column). In the online AMT experiment, the intervention rate in the trust-POMDP condition was 54% and 31% lower for the can and the glass, respectively. In the real-robot experiment, the intervention rate in the trust-POMDP condition dropped to zero (100% lower) for the can and was 71% lower for the glass.

In the myopic condition, the robot picked up the objects in order of highest to lowest reward (Glass, Can, Bottle, Bottle, Bottle). In contrast, the trust-based human behavior model influenced the trust-POMDP robot policy by capturing the fact that interventions on high-risk objects were more likely when trust in the robot was insufficient. The trust-POMDP robot therefore reasoned that it was better to start with the low-risk objects (bottles), build human trust (Figure 6, center column), and go for the high-risk object (glass) last. In this way, the trust-POMDP robot minimized the human intervention rate on the glass and the can, which significantly improved team performance.
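This reasoning can be illustrated with a toy simulation. The logistic intervention model, risk scores, and rewards below are assumptions chosen only to exhibit the qualitative effect: when interventions on high-risk objects are more likely at low trust, and uninterrupted successes build trust, the low-risk-first order earns a higher expected reward than the myopic high-reward-first order.

```python
import math

RISK = {"bottle": 0.0, "can": 2.0, "glass": 4.0}  # assumed risk scores

def p_intervene(trust, risk):
    """Hypothetical logistic model: interventions are more likely for
    high-risk objects and low trust."""
    return 1.0 / (1.0 + math.exp(trust - risk))

def expected_reward(order, trust=2.0, r_robot=2.0, r_human=1.0):
    """Expected team reward when each uninterrupted success builds trust."""
    total = 0.0
    for obj in order:
        p = p_intervene(trust, RISK[obj])
        total += (1.0 - p) * r_robot + p * r_human  # interventions cost time
        trust += 1.0 - p  # trust grows in expectation with robot successes
    return total

low_risk_first = ["bottle", "bottle", "bottle", "can", "glass"]    # trust-POMDP-like
high_reward_first = ["glass", "can", "bottle", "bottle", "bottle"]  # myopic
```

Under these assumed parameters, `expected_reward(low_risk_first)` exceeds `expected_reward(high_reward_first)`, mirroring the ordering effect observed in the experiment.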

Figure 6. Comparison of the Trust-POMDP and the myopic policies in the AMT experiment and the real-robot experiment.
Figure 7. Time-dependent nonlinear effects of trust dynamics. The same outcome has greater effect on trust when it occurs earlier than later.

5.4. Trust evolution

Figure 6 (center column) shows the participants’ trust evolution. We make two key observations. First, successfully completing a task increased participants’ trust in the robot. This is consistent with the human trust dynamics model we learned in Section 4.2. Second, there is no significant difference in the average trust evolution between the two conditions (Figure 6, center column), which is surprising given that fewer human interventions occurred under the trust-POMDP policy. This can be partially explained by a combination of averaging and nonlinear trust dynamics, specifically that robot performance in the earlier part of the task has a more pronounced impact on trust (Desai, 2012). This is a specific manifestation of the “primacy effect”, a cognitive bias that leads a subject to credit a performer more if the performer succeeds earlier in time (Jones et al., 1968). Figure 7 shows this time-dependent aspect of trust dynamics in our experiment; the change in mean trust was larger when the robot succeeded earlier, most clearly for the can and glass objects in the real-robot experiment. Thus, in the myopic condition, although there were more interventions on the glass and the can at the beginning, the robot’s early successes produced larger trust increases, which averaged out the difference between the conditions.
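The time-dependent effect can be captured by weighting each outcome by when it occurs. The exponential decay below is an illustrative parameterization of this primacy effect, not the learned dynamics of Sec. 4.2.

```python
import math

def trust_update(trust, success, step, gain=1.0, decay=0.3):
    """Hypothetical primacy-weighted update: the same outcome moves
    trust more when it happens earlier in the task (smaller step)."""
    weight = gain * math.exp(-decay * step)
    return trust + weight if success else trust - weight
```

An early success (step 0) raises trust by the full gain, while the same success at step 4 raises it by only exp(-1.2), about 30% of the gain.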

5.5. Human behavioral policy

Figure 6 (right column) shows the observed human behaviors at different trust levels. Consistent with the trust-based human behavioral model (Section 4.3), participants were less likely to intervene as their trust in the robot increased. The human’s action also depended on the type of object: for the low-risk objects (bottles), participants let the robot attempt the task even when their trust in the robot was low, whereas for the high-risk object (glass), they intervened unless their trust was high.

Figure 8. Sample run of the trust-POMDP strategy when the robot may fail at picking up the glass with probability 0.9.
Figure 9. Sample runs of the performance-maximizing policy (top and middle rows) and the trust-maximizing policy (bottom row) when the robot may fail at picking up the glass with probability 0.9 and can fail intentionally at any object. The adaptive trust-POMDP policy branches on the observed human response: if the human stays put (top row), the robot intentionally fails at the bottles to reduce human trust and maximize the probability of the human intervening when it later goes for the glass.
Figure 10. (Top) Expected trust for all possible human action sequences for the performance-maximizing and trust-maximizing policy. Each sequence is represented with a line of width proportional to the likelihood of that sequence, based on the learned model. (Bottom) Annotated robot actions for the 16 most likely sequences.
Figure 11. Scatterplot of mean accumulated reward as a function of human trust over time for all human action sequences. The radius of each circle is proportional to the likelihood of the corresponding sequence, based on the learned model. The performance-maximizing policy (blue) gradually reduces human trust to maximize the accumulated reward, while the trust-maximizing policy (green) focuses on increasing trust.

6. Robot Failures

The previous experimental results show that the trust-POMDP policy significantly outperforms the myopic policy, which ignores trust in robot decision-making. The trust-POMDP robot was able to make good decisions on whether to pick up a low-risk object to increase human trust, or to go directly for the high-risk object when trust was high enough. This is one main advantage that the trust-POMDP robot has over the myopic robot.

In these experiments the robot always succeeded. In the real world, however, robots do fail, so we want to explore the behavior of the trust-POMDP when the robot may fail in its attempt to pick up an object with some known probability.

Therefore, we assumed that the robot may fail when attempting to pick up the glass with probability 0.9, and we used the learned dynamics and human behavioral model to compute the robot policy in that case. Contrary to when the robot always succeeds, in this case it is actually beneficial for the human to intervene and pick up the glass themselves, in order to avoid the large penalty from a likely robot failure.
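A one-line expected-value comparison shows why intervention becomes rational here. The success reward, failure penalty, and human pick-up reward below are assumed values for illustration, not the paper's exact reward function.

```python
def ev_robot_attempt(p_fail, r_success, penalty):
    """Expected value of letting the robot attempt the pick-up."""
    return (1.0 - p_fail) * r_success - p_fail * penalty

# Assumed numbers: reward 3 for a successful glass pick-up, penalty 9
# for a broken glass, and reward 1 if the human picks it up instead.
R_HUMAN = 1.0
ev = ev_robot_attempt(p_fail=0.9, r_success=3.0, penalty=9.0)  # 0.1*3 - 0.9*9 = -7.8
```

With a 0.9 failure probability, the robot's expected value falls far below the human's, so the team is better off if the human intervenes.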

Fig. 8 shows the computed policy and belief updates: the robot starts with the glass cup, since the beginning of the task is when the human is the most likely to intervene and not let the robot pick up the glass (and likely fail in the process of doing so).

While this shows that the robot can reason about the human intervention rate to reduce failures, intuitively the robot should also be able to actively reduce trust to affect human behavior. Although a range of behaviors can reduce human trust (Wang et al., 2016; van den Brule et al., 2014), we focused on active trust reduction through failures. We therefore expanded the robot’s action space so that it can fail intentionally at any object. Keeping the failure probability for the glass at 0.9 and reducing the reward for a successful robot pick-up of the bottles to 0.3 results in the exciting behavior demonstrated in Fig. 9.

When following the trust-POMDP policy (Fig. 9, top and middle rows), the robot attempts to pick up the can first; this is an information-seeking action that the robot uses to estimate the human’s initial trust. If the human stays put, the robot infers that trust is high, and it then fails intentionally at the bottles to reduce trust before going for the glass. By the time the robot goes for the glass, human trust has been reduced enough that the human is likely to intervene, avoiding failure. On the other hand, if the human intervenes, the robot infers that trust is already low; it does not need to fail intentionally, and it subsequently goes for the glass.
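The inference step described here is a standard Bayesian belief update over the discrete trust levels. The likelihood model below (higher trust makes staying put more likely) is a hypothetical stand-in for the learned behavioral policy of Sec. 4.3.

```python
TRUST_LEVELS = range(1, 8)  # assumed 7-point discretization

def likelihood(action, trust):
    """Hypothetical P(human action | trust): staying put is more
    likely at higher trust levels."""
    p_stay = trust / 8.0
    return p_stay if action == "stay" else 1.0 - p_stay

def belief_update(belief, action):
    """Bayes rule: posterior over trust given the observed human action."""
    posterior = {t: b * likelihood(action, t) for t, b in belief.items()}
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

uniform = {t: 1.0 / 7.0 for t in TRUST_LEVELS}
after_stay = belief_update(uniform, "stay")  # mass shifts toward high trust
```

Observing "stay" after the can attempt shifts the belief toward high trust, while observing an intervention shifts it toward low trust, which is exactly the branching logic the policy exploits.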

The resulting policy contrasts with the policy the robot follows if it maximizes human trust instead (Fig. 9, bottom row). When following the trust-maximizing policy, the robot starts with the glass, for two reasons: (a) at the beginning, human trust is lowest, so the human is most likely to intervene and avoid watching the robot fail, which would significantly reduce trust; (b) even if the human does not intervene and the robot fails, it is better to fail early, before trust has increased, since the higher the trust, the steeper the fall, based on the learned model of Fig. 4.

We further illustrate the difference between the two policies by simulating policy runs and showing the evolution of the expected trust and mean accumulated reward over time (Fig. 10, 11). The plots illustrate how the performance-maximizing policy reduces human trust to maximize reward. The mean accumulated reward over policy runs was significantly larger for the performance-maximizing policy than for the trust-maximizing policy. This evaluation indicates that maximizing trust can be suboptimal in the presence of robot failures.

7. Conclusion

This paper presents the trust-POMDP, a computational model for integrating human trust into robot decision making. The trust-POMDP closes the loop between trust models and robot decision making. It enables the robot to infer and influence human trust systematically and to leverage trust for fluid collaboration.

Our experimental results in a table-clearing task show that the trust-POMDP policy calibrates human trust to match it to the robot’s manipulation capabilities: If trust is overly low, the robot prioritizes picking up the low risk objects to increase trust. This results in better performance, compared to the myopic robot that ignores trust. On the other hand, if trust is overly high, the robot fails intentionally in the low risk objects. Our results show that always maximizing trust can be in fact detrimental to performance in the presence of robotic failures.

There are several limitations in our current work. Similar to previous works (Xu and Dudek, 2015; Desai, 2012), we modeled trust as a single real-valued latent variable that reflects the capabilities of the entire system. However, a multi-dimensional parameterization of trust that captures the different functions and modes of automation could be a more accurate representation. In addition, the evolution of trust might also depend on the type of motion executed by the robot (e.g., expressive or deceptive motions (Dragan et al., 2013, 2014)). The current trust-POMDP model also assumes static robot capabilities, but a robot’s true capabilities may change over time; the trust-POMDP can be extended to model robot capabilities via additional state variables that affect the state transition dynamics. Furthermore, the reward function is manually specified in this work, but a reward function may be difficult to specify in practice. One possible way to resolve this is to learn the reward function from human demonstrations (e.g., (Nikolaidis et al., 2015)). Finally, the trust model learned on one task may transfer to a related task (Soh et al., 2018); this last aspect is another interesting direction for future work.

8. Acknowledgements

This work was funded in part by the Singapore Ministry of Education (grant MOE2016-T2-2-068), the National University of Singapore (grant R-252-000-587-112), US National Institute of Health R01 (grant R01EB019335), US National Science Foundation CPS (grant 1544797), US National Science Foundation NRI (grant 1637748), and the Office of Naval Research.

References

  • Bai et al. (2015) Haoyu Bai, Shaojun Cai, Nan Ye, David Hsu, and Wee Sun Lee. 2015. Intention-aware online POMDP planning for autonomous driving in a crowd. In 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 454–460.
  • Bandyopadhyay et al. (2013) Tirthankar Bandyopadhyay, Kok Sung Won, Emilio Frazzoli, David Hsu, Wee Sun Lee, and Daniela Rus. 2013. Intention-aware motion planning. In Algorithmic Foundations of Robotics X. Springer, 475–491.
  • Barrett et al. (2014) Samuel Barrett, Noa Agmon, Noam Hazon, Sarit Kraus, and Peter Stone. 2014. Communicating with unknown teammates. In Proceedings of the Twenty-First European Conference on Artificial Intelligence. IOS Press, 45–50.
  • Carpenter et al. (2016) Bob Carpenter, Andrew Gelman, Matt Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Michael A Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. 2016. Stan: A probabilistic programming language. Journal of Statistical Software 20 (2016), 1–37.
  • Chen et al. (2018) Min Chen, Stefanos Nikolaidis, Harold Soh, David Hsu, and Siddhartha Srinivasa. 2018. Planning with trust for human-robot collaboration. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 307–315.
  • Daw et al. (2006) Nathaniel D Daw, John P O’doherty, Peter Dayan, Ben Seymour, and Raymond J Dolan. 2006. Cortical substrates for exploratory decisions in humans. Nature 441, 7095 (2006), 876–879.
  • Desai (2012) Munjal Desai. 2012. Modeling trust to improve human-robot interaction. (2012).
  • Desai et al. (2013) Munjal Desai, Poornima Kaniarasu, Mikhail Medvedev, Aaron Steinfeld, and Holly Yanco. 2013. Impact of robot failures and feedback on real-time trust. In Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction. IEEE Press, 251–258.
  • Desai et al. (2012) Munjal Desai, Mikhail Medvedev, Marynel Vázquez, Sean McSheehy, Sofia Gadea-Omelchenko, Christian Bruggeman, Aaron Steinfeld, and Holly Yanco. 2012. Effects of changing reliability on trust of robot systems. In Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction. ACM, 73–80.
  • Dragan et al. (2014) Anca D Dragan, Rachel M Holladay, and Siddhartha S Srinivasa. 2014. An Analysis of Deceptive Robot Motion.. In Robotics: science and systems. 10.
  • Dragan et al. (2013) Anca D Dragan, Kenton CT Lee, and Siddhartha S Srinivasa. 2013. Legibility and predictability of robot motion. In Human-Robot Interaction (HRI), 2013 8th ACM/IEEE International Conference on. IEEE, 301–308.
  • Floyd et al. (2015) Michael W Floyd, Michael Drinkwater, and David W Aha. 2015. Trust-Guided Behavior Adaptation Using Case-Based Reasoning. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence. 4261–4267.
  • Galceran et al. (2015) Enric Galceran, Alexander G Cunningham, Ryan M Eustice, and Edwin Olson. 2015. Multipolicy decision-making for autonomous driving via changepoint-based behavior prediction. In Proc. Robot.: Sci. & Syst. Conf. 2.
  • Golembiewski and McConkie (1975) Robert T Golembiewski and Mark McConkie. 1975. The centrality of interpersonal trust in group processes. Theories of group processes 131 (1975), 185.
  • Hall (1996) Robert J Hall. 1996. Trusting your assistant. In Knowledge-Based Software Engineering Conference, 1996., Proceedings of the 11th. IEEE, 42–51.
  • Hoffman (2013) Guy Hoffman. 2013. Evaluating fluency in human-robot collaboration. In International conference on human-robot interaction (HRI), workshop on human robot collaboration, Vol. 381. 1–8.
  • Javdani et al. (2015) Shervin Javdani, Siddhartha S Srinivasa, and J Andrew Bagnell. 2015. Shared autonomy via hindsight optimization. arXiv preprint arXiv:1503.07619 (2015).
  • Jones et al. (1968) Edward E Jones, Leslie Rock, Kelly G Shaver, George R Goethals, and Lawrence M Ward. 1968. Pattern of performance and ability attribution: An unexpected primacy effect. Journal of Personality and Social Psychology 10, 4 (1968), 317.
  • Kaelbling et al. (1998) L.P. Kaelbling, M.L. Littman, and A.R. Cassandra. 1998. Planning and acting in partially observable stochastic domains. Artificial Intelligence 101, 1–2 (1998), 99–134.
  • Kramer and Tyler (1995) Roderick M Kramer and Tom R Tyler. 1995. Trust in organizations: Frontiers of theory and research. Sage Publications.
  • Kurniawati et al. (2008) Hanna Kurniawati, David Hsu, and Wee Sun Lee. 2008. SARSOP: Efficient Point-Based POMDP Planning by Approximating Optimally Reachable Belief Spaces.. In Robotics: Science and Systems, Vol. 2008. Zurich, Switzerland.
  • Lee and Moray (1992) John Lee and Neville Moray. 1992. Trust, control strategies and allocation of function in human-machine systems. Ergonomics 35, 10 (1992), 1243–1270.
  • Lee and See (2004) John D Lee and Katrina A See. 2004. Trust in automation: Designing for appropriate reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society 46, 1 (2004), 50–80.
  • Macindoe et al. (2012) Owen Macindoe, Leslie Pack Kaelbling, and Tomás Lozano-Pérez. 2012. Pomcop: Belief space planning for sidekicks in cooperative games. (2012).
  • Mathieu et al. (2000) John E Mathieu, Tonia S Heffner, Gerald F Goodwin, Eduardo Salas, and Janis A Cannon-Bowers. 2000. The influence of shared mental models on team process and performance. Journal of applied psychology 85, 2 (2000), 273.
  • Mayer et al. (1995) Roger C Mayer, James H Davis, and F David Schoorman. 1995. An integrative model of organizational trust. Academy of management review 20, 3 (1995), 709–734.
  • Muir (1990) Bonnie Marlene Muir. 1990. Operators’ trust in and use of automatic controllers in a supervisory process control task. University of Toronto.
  • Nikolaidis et al. (2017) Stefanos Nikolaidis, David Hsu, and Siddhartha Srinivasa. 2017. Human-robot mutual adaptation in collaborative tasks: Models and experiments. International Journal of Robotics Research 36, 5-7 (2017), 618–634.
  • Nikolaidis et al. (2016) Stefanos Nikolaidis, Anton Kuznetsov, David Hsu, and Siddhartha Srinivasa. 2016. Formalizing Human-Robot Mutual Adaptation: A Bounded Memory Model. In HRI. IEEE Press, 75–82.
  • Nikolaidis et al. (2015) Stefanos Nikolaidis, Ramya Ramakrishnan, Keren Gu, and Julie Shah. 2015. Efficient model learning from joint-action demonstrations for human-robot collaborative tasks. In HRI. ACM, 189–196.
  • Pierson and Schwager (2016) Alyssa Pierson and Mac Schwager. 2016. Adaptive inter-robot trust for robust multi-robot sensor coverage. In Robotics Research. Springer, 167–183.
  • Pippin and Christensen (2014) Charles Pippin and Henrik Christensen. 2014. Trust modeling in multi-robot patrolling. In 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 59–66.
  • Sadigh et al. (2016) Dorsa Sadigh, Shankar Sastry, Sanjit A Seshia, and Anca D Dragan. 2016. Planning for autonomous cars that leverages effects on human actions. In Proceedings of the Robotics: Science and Systems Conference (RSS).
  • Soh et al. (2018) Harold Soh, Pan Shu, Min Chen, and David Hsu. 2018. The Transfer of Human Trust in Robot Capabilities across Tasks. arXiv preprint arXiv:1807.01866 (2018).
  • van den Brule et al. (2014) Rik van den Brule, Ron Dotsch, Gijsbert Bijlstra, Daniel HJ Wigboldus, and Pim Haselager. 2014. Do robot performance and behavioral style affect human trust? International journal of social robotics 6, 4 (2014), 519–531.
  • Wang et al. (2016) Ning Wang, David V Pynadath, and Susan G Hill. 2016. Trust calibration within a human-robot team: Comparing automatically generated explanations. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 109–116.
  • Xu and Dudek (2015) Anqi Xu and Gregory Dudek. 2015. Optimo: Online probabilistic trust inference model for asymmetric human-robot collaborations. In HRI. ACM, 221–228.
  • Xu and Dudek (2016) Anqi Xu and Gregory Dudek. 2016. Towards Modeling Real-Time Trust in Asymmetric Human–Robot Collaborations. In Robotics Research. Springer, 113–129.
  • Yang et al. (2017) Jessie Yang, Vaibhav Unhelkar, Kevin Li, and Julie Shah. 2017. Evaluating Effects of User Experience and System Transparency on Trust in Automation. In HRI.