Repairing Human Trust by Promptly Correcting Robot Mistakes with An Attention Transfer Model

03/14/2021
by Ruijiao Luo, et al.

In human-robot collaboration (HRC), human trust in the robot is the human's expectation that the robot will execute tasks with the desired performance. Higher trust increases a human operator's willingness to assign tasks, share plans, and reduce interruptions during robot executions, thereby facilitating human-robot integration both physically and mentally. However, due to real-world disturbances, robots inevitably make mistakes, decreasing human trust and further affecting collaboration. Trust is fragile, and trust loss is easily triggered when robots show an incapability of task execution, which makes trust maintenance challenging. To maintain human trust, this research develops a trust repair framework based on a human-to-robot attention transfer (H2R-AT) model and a user trust study. The rationale of this framework is that a prompt mistake correction restores human trust. With H2R-AT, a robot localizes human verbal concerns and promptly corrects its mistakes to avoid task failures at an early stage and ultimately improve human trust. The user trust study measures trust status before and after the behavior corrections to quantify the trust loss. Robot experiments designed to cover four typical mistakes (wrong action, wrong region, wrong pose, and wrong spatial relation) validated the accuracy of H2R-AT in robot behavior corrections; a user trust study with 252 participants was conducted, and the changes in trust levels before and after the corrections were evaluated. The effectiveness of human trust repair was evaluated by the mistake correction accuracy and the trust improvement.
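The sketch below illustrates the loop the abstract describes: a human verbal concern is localized to one of the four mistake types and mapped to a prompt corrective behavior, and trust is compared before and after the correction. This is a minimal Python illustration under stated assumptions, not the authors' H2R-AT model (which, per the related work listed below, is built on stacked neural networks); all class and function names, the keyword matcher, and the numbers in the usage example are hypothetical.

```python
# Illustrative sketch of the trust-repair loop from the abstract.
# Not the authors' implementation; names and logic are assumptions.

from dataclasses import dataclass
from enum import Enum, auto


class MistakeType(Enum):
    """The four typical mistakes covered by the robot experiments."""
    WRONG_ACTION = auto()
    WRONG_REGION = auto()
    WRONG_POSE = auto()
    WRONG_SPATIAL_RELATION = auto()


@dataclass
class TrustRecord:
    """Trust ratings collected before and after the behavior correction."""
    before_correction: float
    after_correction: float


def classify_concern(utterance: str) -> MistakeType:
    """Toy keyword matcher standing in for the attention-based
    concern localization performed by H2R-AT."""
    text = utterance.lower()
    if "grab" in text or "pick" in text:
        return MistakeType.WRONG_ACTION
    if "left" in text or "right" in text or "area" in text:
        return MistakeType.WRONG_REGION
    if "tilt" in text or "angle" in text:
        return MistakeType.WRONG_POSE
    return MistakeType.WRONG_SPATIAL_RELATION


def correct_behavior(mistake: MistakeType) -> str:
    """Map the localized mistake to a prompt corrective behavior."""
    corrections = {
        MistakeType.WRONG_ACTION: "switch to the intended action primitive",
        MistakeType.WRONG_REGION: "re-target the manipulation region",
        MistakeType.WRONG_POSE: "re-plan the end-effector pose",
        MistakeType.WRONG_SPATIAL_RELATION: "re-order the placement relation",
    }
    return corrections[mistake]


if __name__ == "__main__":
    concern = "You are tilting the bottle too much"
    mistake = classify_concern(concern)
    print(f"Localized mistake: {mistake.name}; correction: {correct_behavior(mistake)}")

    # Trust repair is then quantified by comparing ratings before and after
    # the correction; the numbers here are hypothetical, not the study's data.
    record = TrustRecord(before_correction=3.2, after_correction=4.1)
    print(f"Trust change: {record.after_correction - record.before_correction:+.1f}")
```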


research
01/12/2018

Planning with Trust for Human-Robot Collaboration

Trust is essential for human-robot collaboration and user adoption of au...
research
04/07/2021

Synthesized Trust Learning from Limited Human Feedback for Human-Load-Reduced Multi-Robot Deployments

Human multi-robot system (MRS) collaboration is demonstrating potentials...
research
02/11/2020

Human-to-Robot Attention Transfer for Robot Execution Failure Avoidance Using Stacked Neural Networks

Due to world dynamics and hardware uncertainty, robots inevitably fail i...
research
02/18/2020

Trust Repairing for Human-Swarm Cooperation in Dynamic Task Response

Emergency happens in human-UAV cooperation, such as criminal activity tr...
research
04/22/2021

Trust as Extended Control: Active Inference and User Feedback During Human-Robot Collaboration

To interact seamlessly with robots, users must infer the causes of a rob...
research
08/02/2023

Reward Shaping for Building Trustworthy Robots in Sequential Human-Robot Interaction

Trust-aware human-robot interaction (HRI) has received increasing resear...
research
07/05/2018

The Transfer of Human Trust in Robot Capabilities across Tasks

Trust is crucial in shaping human interactions with one another and with...
