Trust as Extended Control: Active Inference and User Feedback During Human-Robot Collaboration

04/22/2021
by   Felix Schoeller, et al.

To interact seamlessly with robots, users must infer the causes of a robot's behavior and be confident in that inference. Hence, trust is a necessary condition for human-robot collaboration (HRC). Despite its crucial role, it is largely unknown how trust emerges, develops, and supports human interactions with nonhuman artefacts. Here, we review the literature on trust, human-robot interaction, human-robot collaboration, and human interaction at large. Early models of trust suggest that trust entails a trade-off between benevolence and competence, while studies of human-to-human interaction emphasize the role of shared behavior and mutual knowledge in the gradual building of trust. We then introduce a model of trust as an agent's best explanation for reliable sensory exchange with an extended motor plant or partner. This model is based on the cognitive neuroscience of active inference and suggests that, in the context of HRC, trust can be cast in terms of virtual control over an artificial agent. In this setting, interactive feedback becomes a necessary component of the trustor's perception-action cycle. The resulting model has important implications for understanding human-robot interaction and collaboration, as it allows the traditional determinants of human trust to be defined in terms of active inference, information exchange, and empowerment. Furthermore, this model suggests that boredom and surprise may be used as markers for under- and over-reliance on the system. Finally, we examine the role of shared behavior in the genesis of trust, especially in the context of dyadic collaboration, suggesting important consequences for the acceptability and design of human-robot collaborative systems.
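In active inference, surprise is typically quantified as the negative log-probability of an observation under the agent's generative model, so the boredom/surprise markers mentioned above could in principle be tracked by monitoring average surprisal over a window of interactions. The following sketch illustrates this idea only; the function names, thresholds, and the mapping of low/high surprisal to "boredom" and "surprise" are assumptions for illustration, not an implementation described in the paper.

```python
import math

def surprisal(p_obs: float) -> float:
    """Shannon surprisal -log p(o) of an observation the user's
    generative model assigned probability p_obs."""
    return -math.log(p_obs)

def reliance_marker(obs_probs, low=0.5, high=2.0):
    """Classify a window of observation probabilities.

    Thresholds `low` and `high` are hypothetical. Sustained low average
    surprisal (robot behavior is highly predictable) is labeled
    'boredom'; sustained high average surprisal (behavior is poorly
    predicted) is labeled 'surprise'; anything between is 'calibrated'.
    """
    avg = sum(surprisal(p) for p in obs_probs) / len(obs_probs)
    if avg < low:
        return "boredom"
    if avg > high:
        return "surprise"
    return "calibrated"
```

For example, a run in which every robot action was predicted with probability 0.9 yields an average surprisal of about 0.11 and is flagged as "boredom", whereas probabilities around 0.05 yield an average surprisal of about 3.0 and are flagged as "surprise".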
