
Algorithms for the Greater Good! On Mental Modeling and Acceptable Symbiosis in Human-AI Collaboration

Effective collaboration between humans and AI-based systems requires accurate modeling of the human in the loop, both in terms of the human's mental state and their physical capabilities. However, these models can also open up pathways for manipulating and exploiting the human in the hopes of achieving some greater good, especially when the intent or values of the AI and the human are not aligned, or when they have an asymmetrical relationship with respect to knowledge or computational power. In fact, such behavior does not necessarily require any malicious intent; it can arise even in purely cooperative scenarios. It also goes beyond simple misinterpretation of intent, as in the case of value alignment problems, and thus can be deliberately engineered if desired. Such techniques already exist and raise several unresolved ethical and moral questions regarding the design of autonomy. In this paper, we illustrate some of these issues in a teaming scenario and investigate how participants perceive them in a thought experiment.
