Preference Change in Persuasive Robotics

06/21/2022
by Matija Franklin, et al. (UCL)

Human-robot interaction exerts influence on the human, often changing behavior. This article explores an externality of this changed behavior: preference change. It expands on previous work on preference change in AI systems. Specifically, it explores how a robot's adaptive behavior, personalized to the user, can exert influence through social interaction that in turn changes the user's preferences. It argues that this risk is high given a robot's unique ability to influence behavior compared to other pervasive technologies. Persuasive Robotics thus runs the risk of being manipulative.


I Introduction

Modern social robotics uses machine learning (ML) methods to learn user preferences in order to develop adaptive robot behavior tailored to the user. During human-robot interaction (HRI), robots can learn human preferences by inferring them from observed human behavior across various contexts and tasks [1]. This approach, inferring preferences from behavior, is known as Revealed Preference Theory. Robots can also learn preferences by asking users directly (e.g., for a ranking of options) [2]; this is known as learning from Stated Preferences.
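To make the stated-preference approach concrete, the sketch below fits a simple Bradley-Terry model to pairwise choices ("the user preferred behavior A over behavior B"). This is a minimal illustration under assumed data, not the method used in [1] or [2]; the choice pairs and behavior options are hypothetical.

```python
import numpy as np

# Minimal Bradley-Terry sketch: infer a utility score for each candidate
# robot behavior from a user's pairwise stated preferences.
# Hypothetical data: each tuple is (preferred option, rejected option).
choices = [(0, 1), (0, 2), (1, 2), (1, 0), (2, 1)]
n_options = 3
utilities = np.zeros(n_options)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Gradient ascent on the Bradley-Terry log-likelihood, where
# P(i is preferred over j) = sigmoid(u_i - u_j).
learning_rate = 0.1
for _ in range(500):
    grad = np.zeros(n_options)
    for winner, loser in choices:
        p = sigmoid(utilities[winner] - utilities[loser])
        grad[winner] += 1.0 - p
        grad[loser] -= 1.0 - p
    utilities += learning_rate * grad

print(utilities)  # higher score = behavior the user tends to prefer
```

A revealed-preference learner would replace the explicit choice pairs with preferences inferred from observed behavior, for example via inverse reinforcement learning as in [1].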

This article argues that any attempt to adapt a robot's behavior to human preferences needs to acknowledge that the robot can change those preferences. Although preferences influence behavior, behavior can also predate and lead to the formation of new preferences [3]. Certain forms of HRI thus run the risk of being manipulative if the robot has some preference over human behavior. It is not possible to ensure that HRI is transparent, ethical, and safe without understanding its impact on preferences. This article reviews HRI's influence on behavior and then concentrates on the problem of preference change.

II Uniqueness of Robot Influence

A key difference separating robot influence from that of other pervasive technologies is the robot's physical embodiment, which triggers aspects of human social cognition attuned to social influence [4]. Physical embodiment also allows a robot to collect rich interaction data that can be used to infer human intention and emotion [5]. Persuasive Robotics studies influence in HRI, focusing on the aspects of social interaction (both human-to-human and human-to-robot) that significantly alter a robot's influence [6].

Compared to other pervasive technologies, such as recommender systems or smart user interfaces, robots additionally influence through social interaction and social presence. Evidence suggests that people form different relationships with robots than they do with virtual avatars and computers. For example, people rate physical robots as more watchful and enjoyable [7]. People also empathize more with an embodied robot than with a virtual one when watching it experience pain [8]. Finally, a robot's physical embodiment can produce physiological arousal in users [9].

The particular relationship people have with robots, compared to other technologies, results in greater behavior change. There is evidence that people are more likely to follow instructions from a robot than from a computer tablet, owing to a greater desire to interact with the robot [10]. Relatedly, a greater preference for a robot over a computer led participants in another study to interact with it for longer [11].

Sociocognitive factors that influence behavior in human-to-human interaction, such as inter-group, intra-group, and interpersonal factors, are also prominent in HRI. For example, after initially interacting with a humanoid social robot, most people perceive a greater social presence from it [12]. As with influence exerted by human groups, people conform to a group of robots, changing their preliminary answers to match the robots' answers [13]. Further, people show more positive reactions towards an in-group robot than an out-group robot and anthropomorphize it more [14].

Interpersonal factors and affect also shape human behavior in HRI. Robot-delivered interventions can be as effective as those delivered by humans [15]. People tend to rate a robot of the opposite sex as more trustworthy, credible, and engaging, with male participants more likely to donate money to a female robot [6]. Touch, perceived autonomy, and interpersonal distance all affect human behavior [16]. Finally, robots can influence human behavior with affective displays (e.g., conveying distress) [17].

III Adaptive Robots Change Human Preference

A social robot's adaptive behavior, tailored to a user's preferences, changes the user's behavior [18]. Behavioral science produces behavioral insights (cause-and-effect accounts of how different factors influence behavior), which have enabled valid and reliable predictive models of behavior [19]. In HRI, behavioral insights are adjacent to human factors: interaction with ML-powered systems designed around human factors leads to consistent, predictable behavior change.

The practice of learning a user's preferences and adapting a social robot's behavior, which in turn changes the user's behavior, also changes the user's preferences. To understand why, note that preferences are not static; they are changeable and predictably influenced by various factors [20, 21]. For example, a person's preferences can shift between contexts due to pressure exerted by the social norms of their 'in-group' [22]. That a person can hold different preferences in different contexts raises the question of which should be regarded as the 'true' preference [23].

It is also important to note that although preferences influence behavior, behavior can predate and lead to the formation of new preferences [3]. Adapting a social robot's behavior to a user's preferences is therefore not only a matter of preference learning: because the adapted robot behavior changes the user's behavior, it also can and will change the user's preferences. Previous work has explored the problem of behavior and preference manipulation in AI systems, specifically how iterative ML systems tasked with learning user preferences over time often impact the preferences they are learning or, worse, manipulate them to serve their own objective function [24, 25]; a toy version of this feedback loop is sketched below. We thus propose a coordinated multidisciplinary research effort into how preferences change: Preference Science [20]. This includes factoring in the confounding variables that influence both preference and behavior. Future paradigms in HRI should explore which factors can be highly manipulative of a user's preferences.
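The toy simulation below illustrates this feedback loop. It is our sketch, not a model taken from [24] or [25]: a robot estimates a one-dimensional user preference from observed behavior while its adaptive actions simultaneously pull the true preference toward whatever it presents. All quantities and rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

true_pref = 0.0   # the user's latent preference (hypothetical, 1-D)
estimate = 0.5    # the robot's initial estimate of that preference
alpha = 0.05      # rate at which robot behavior shifts the user's preference
beta = 0.2        # robot's learning rate on observed behavior

for t in range(200):
    action = estimate                           # robot adapts to its estimate
    behavior = true_pref + rng.normal(0, 0.1)   # noisy revealed preference
    estimate += beta * (behavior - estimate)    # robot updates from observation
    true_pref += alpha * (action - true_pref)   # interaction nudges the preference

# The preference has drifted from its initial value (0.0) toward the robot's
# early estimates: the robot has partly learned a preference that its own
# adaptive behavior helped create.
print(round(true_pref, 3), round(estimate, 3))
```

With alpha set to zero the estimate converges on the user's original preference; any positive alpha makes the learned preference partly an artifact of the interaction itself, which is the manipulation risk identified in [24].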

IV Conclusion

This article has outlined an ethical issue pertaining to robot manipulation. Specifically, the embodied nature of robots makes HRI more influential than interaction with other pervasive technologies. Changes in behavior induced by a robot can lead to the formation of new preferences. Robots that learn user preferences are thus likely to impact them. They can also manipulate preferences to suit their own objective function, making people more predictable so that their wants and needs are easier to anticipate. Preference learning thus poses many challenges for developers aiming to design ethical systems for persuasive robotics.

References

  • [1] Woodworth, Bryce, Francesco Ferrari, Teofilo E. Zosa, and Laurel D. Riek. "Preference learning in assistive robotics: Observational repeated inverse reinforcement learning." In Machine Learning for Healthcare Conference, pp. 420-439. PMLR, 2018.
  • [2] Wilde, Nils, Dana Kulić, and Stephen L. Smith. ”Learning user preferences in robot motion planning through interaction.” In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 619-626. IEEE, 2018.
  • [3] Ariely, Dan, and Michael I. Norton. ”How actions create–not just reveal–preferences.” Trends in cognitive sciences 12, no. 1 (2008): 13-16.
  • [4] Broadbent, Elizabeth. ”Interactions with robots: The truths we reveal about ourselves.” Annual review of psychology 68 (2017): 627-652.
  • [5] Sirithunge, Chapa, AG Buddhika P. Jayasekara, and D. P. Chandima. ”Proactive robots with the perception of nonverbal human behavior: A review.” IEEE Access 7 (2019): 77308-77327.
  • [6] Siegel, Mikey, Cynthia Breazeal, and Michael I. Norton. ”Persuasive robotics: The influence of robot gender on human behavior.” In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2563-2568. IEEE, 2009.
  • [7] Wainer, Joshua, David J. Feil-Seifer, Dylan A. Shell, and Maja J. Mataric. ”The role of physical embodiment in human-robot interaction.” In ROMAN 2006-The 15th IEEE International Symposium on Robot and Human Interactive Communication, pp. 117-122. IEEE, 2006.
  • [8] Kwak, Sonya S., Yunkyung Kim, Eunho Kim, Christine Shin, and Kwangsu Cho. ”What makes people empathize with an emotional robot?: The impact of agency and physical embodiment on human empathy for a robot.” In 2013 IEEE RO-MAN, pp. 180-185. IEEE, 2013.
  • [9] Li, Jamy Jue, Wendy Ju, and Byron Reeves. ”Touching a mechanical body: tactile contact with body parts of a humanoid robot is physiologically arousing.” Journal of Human-Robot Interaction 6, no. 3 (2017): 118-130.
  • [10] Mann, Jordan A., Bruce A. MacDonald, I-Han Kuo, Xingyan Li, and Elizabeth Broadbent. ”People respond better to robots than computer tablets delivering healthcare instructions.” Computers in Human Behavior 43 (2015): 112-117.
  • [11] Kidd, Cory D., and Cynthia Breazeal. ”Robots at home: Understanding long-term human-robot interaction.” In 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3230-3235. IEEE, 2008.
  • [12] Edwards, Autumn, Chad Edwards, David Westerman, and Patric R. Spence. ”Initial expectations, interactions, and beyond with social robots.” Computers in Human Behavior 90 (2019): 308-314.
  • [13] Salomons, Nicole, Michael Van Der Linden, Sarah Strokhorb Sebo, and Brian Scassellati. ”Humans conform to robots: Disambiguating trust, truth, and conformity.” In 2018 13th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 187-195. IEEE, 2018.
  • [14] Häring, Markus, Dieta Kuchenbrandt, and Elisabeth André. ”Would you like to play with me? How robots’ group membership and task features influence human–robot interaction.” In 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 9-16. IEEE, 2014.
  • [15] Robinson, Nicole L., Jennifer Connolly, Leanne Hides, and David J. Kavanagh. ”Social robots as treatment agents: Pilot randomized controlled trial to deliver a behavior change intervention.” Internet Interventions 21 (2020): 100320.
  • [16] Siegel, Michael Steven. ”Persuasive robotics: how robots change our minds.” PhD diss., Massachusetts Institute of Technology, 2008.
  • [17] Briggs, Gordon, and Matthias Scheutz. ”How robots can affect human behavior: Investigating the effects of robotic displays of protest and distress.” International Journal of Social Robotics 6, no. 3 (2014): 343-355.
  • [18] Saunderson, Shane, and Goldie Nejat. ”How robots influence humans: A survey of nonverbal communication in social human–robot interaction.” International Journal of Social Robotics 11, no. 4 (2019): 575-608.
  • [19] Michie, Susan, Maartje M. Van Stralen, and Robert West. ”The behaviour change wheel: a new method for characterising and designing behaviour change interventions.” Implementation science 6, no. 1 (2011): 1-12.
  • [20] Franklin, Matija, Hal Ashton, Rebecca Gorman, and Stuart Armstrong. "Recognising the importance of preference change: A call for a coordinated multidisciplinary research effort in the age of AI." AAAI-22 Workshop on AI For Behavior Change, 2022.
  • [21] Michie, Susan, Lou Atkins, and Robert West. ”The behaviour change wheel: a guide to designing interventions.” (2014): 146.
  • [22] Ruggeri, Kai, ed. Behavioral insights for public policy: concepts and cases. Routledge, 2018.
  • [23] Beshears, John, James J. Choi, David Laibson, and Brigitte C. Madrian. "How are preferences revealed?" NBER Working Paper Series, no. 13976 (2008).
  • [24] Ashton, Hal, and Matija Franklin. "The problem of behaviour and preference manipulation in AI systems." In The AAAI-22 Workshop on Artificial Intelligence Safety (SafeAI 2022), 2022.
  • [25] Everitt, Tom, Ryan Carey, Eric Langlois, Pedro A. Ortega, and Shane Legg. ”Agent incentives: A causal perspective.” arXiv preprint arXiv:2102.01685 (2021).